Article history: Received 28 August 2009; Received in revised form 28 November 2009; Accepted 30 November 2009

Keywords: Elementary education; Improving classroom teaching; Pedagogical issues; Secondary education

Abstract

The study addressed two limitations of previous research on factors related to teachers’ integration of technology in their teaching. It attempted to test a structural equation model (SEM) of the relationships among a set of variables influencing preservice teachers’ use of technology specifically to support student-centered learning. A review of literature led to a path model that provided the design and analysis for the study, which involved 206 preservice teachers in the United States. The results show that the proposed model had a moderate fit to the observed data, and a more parsimonious model was found to have a better fit. In addition, preservice teachers’ self-efficacy of teaching with technology had the strongest influence on technology use, which was mediated by their perceived value of teaching and learning with technology. Schools’ contextual factors had a moderate influence on technology use. Moreover, the effect of preservice teachers’ training on student-centered technology use was mediated by both perceived value and self-efficacy of technology. The implications for teacher preparation include close collaboration between the teacher education program and field experiences, focusing on specific technology uses.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction
Information and communication technology (ICT) has been a component of recent educational reforms in many countries. In the UK, the British Educational Communications and Technology Agency (Becta) initiated the Next Generation Learning campaign to ensure the effective and innovative use of ICT in education. In the US, Cuban (1986, 2001) argues that historically American educators have pursued the use of technology in the classroom as part of an attempt to increase productivity and efficiency.
Teachers are on the frontline of educational reforms. To improve the education of teachers in technology integration, both Becta (2009) and the International Society for Technology in Education (ISTE, 2008) have developed sets of technology guidelines and standards for teachers, students, and other stakeholders. These guidelines and standards have been widely adopted, adapted, or otherwise referenced in national, state, and local educational institutions’ technology plans and standards.
Thus, teachers are now expected to incorporate technology into their curricula. However, research has found that many teachers do not use technology in their teaching, or do not use it effectively, despite the availability of hardware and software (Cuban, 2001; Harrison et al., 2002; Henning, Robinson, Herring, & McDonald, 2006). Numerous studies have been conducted to examine factors that are related to teachers’ use of technology in classrooms (Baek, Jung, & Kim, 2008; Norton, McRobbie, & Cooper, 2000; see more examples in the following discussion and Section 2).
There are two limitations of the “factor research.” First, teachers’ use of technology is not clearly defined. As Drent and Meelissen (2008) aptly pointed out:
Most of the research on the implementation of ICT in schools is focusing on factors that influence the use of ICT in general. It is often assumed that the use of ICT will lead to changes in learning arrangements and teaching methods but factors influencing innovative ICT-use are not explicitly analysed (p. 188).
Here “innovative use” means using ICT to support student-centered learning, rather than teachers using technology to prepare lessons, find information, or send e-mail. Indeed, Becta’s survey showed that technology was used by teachers primarily for presentational purposes rather than as a means to engage students in learning activities (Smith, Rudd, & Coghlan, 2008). According to Wozney, Venkatesh,
and Abrami (2006), teachers’ use of computer technology was predominantly for informative (Internet and CD-ROM), expressive (word
processing), and administrative and evaluative (data keeping, lesson planning, and testing) purposes. Thus, research on teachers’ use of
technology should clearly define how technology is used, especially to provide opportunities for students to learn with technology.
The second limitation is that many researchers do not attempt to distinguish the relative importance of the factors, or the distinction is rudimentary. In Franklin’s study (2007), all factors were considered to significantly influence teachers’ technology use, a finding that does not shed much new light on the field of technology and teacher education. Ertmer, Ottenbreit-Leftwich, and York (2006) moved a little further and concluded that intrinsic factors (such as beliefs, confidence, and commitment) were stronger than extrinsic factors (such as access to technology, support, and time). In that study, the researchers used mean scores of the factors as indicators of importance. But comparing means does not take into account the complex correlations among the factors and is thus not adequate. In short, readers of this type of research are left to wonder about the subtle ways the factors encourage or impede technology use. Is teachers’ training in technology directly related to their use of technology? Or is this influence mediated by other factors such as teachers’ beliefs about teaching and technology?
Methodologically, structural equation modeling (SEM) can help researchers answer these questions. For example, Y.-L. Chen (2008) tested a SEM model that predicted the level of Internet use by English teachers in Taiwan. Teo (2009) attempted to build a SEM model to predict preservice teachers’ level of technology acceptance. Wang, Wu, and Wang (2009) proposed a SEM model to account for adults’ intention to use mobile learning (by means of wireless Internet and mobile devices such as cell phones, PDAs, and digital audio players). In these studies, the researchers were able to specify a path model to predict or explain the determinants of people’s intention or use of technology. Although these studies address the second limitation by decomposing differentiated effects of factors on technology use, they fall short of addressing the first limitation and fail to clearly define how technology is used. For example, in Teo’s study, preservice teachers’ behavioral intention to use technology was measured by two general items on the questionnaire: “I will use computers in future” and “I plan to use the computer often” (p. 311). It is not clear how exactly technology would be used in instructional activities.
The current study attempted to address the above two limitations. It sought to build a SEM model that accounts for preservice teachers’ use of technology to support students’ learning with technology. Based on a review of literature on psychology, teacher education, and educational technology, an initial theoretical model was proposed. SEM analysis was conducted to investigate to what degree the model fitted the data collected from 206 preservice teachers. The model fitted the data well, and an alternative, more parsimonious model was found to have a better fit. Both models were formulated in accordance with previous research, and meaningful interpretation was given to every parameter of the model. In addition, the current study sought to decompose the effect of each factor into direct and mediated effects.
In short, the following research questions guided the design and data analysis of the study:
1. How does the SEM model fit the data? Is there an alternative model with a better fit?
2. What is the relative importance of the variables in explaining preservice teachers’ use of technology to facilitate student-centered
learning?
Answers to these questions would help to assess the usefulness of the proposed theoretical model. The investigation would also shed
some light on variables that affect preservice teachers’ decision making and their acceptance of or resistance to technology in the
classroom.
The SEM model in the study was derived from an in-depth review of the literature pertaining to psychology theories on human behavior and preservice teachers’ learning to teach with technology. Special attention was given to identifying factors that are involved in preservice teachers’ thought processes and perceptions of support as they consider using technology in their teaching.
The expectancy–value theory (e.g., Feather, 1982; Wigfield, 1994) proposes that people’s intention to perform a particular task is a function of two variables. First, people must believe that there are benefits in performing the task; in other words, they determine the value of performing an action, and this determination influences their behavioral intention. Second, people must believe they can succeed; in other words, they must have a high expectancy about their task performance. This notion is similar to social learning theorists’ concept of self-efficacy (Bandura, 1977), which refers to the idea that people are likely to perform a certain behavior when they believe they are capable of performing the behavior successfully.
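In schematic terms, expectancy–value models of this kind are often summarized with a multiplicative combination of the two components (a textbook-style rendering, not a formula from the present study):

\[ \text{Behavioral intention} \;\approx\; f\big(\text{Expectancy of success} \times \text{Subjective task value}\big), \]

so that intention tends to be low whenever either component is low.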
The expectancy–value theory was used by Wozney et al. (2006) in their development of a comprehensive questionnaire to examine teachers’ personal and instructional use of technology. They found that teachers’ expectancy of success and perceived value were the most important factors accounting for their levels of computer use; teachers who believed that technology can greatly improve teaching and learning tended to be creative in technology use. Indeed, recent research demonstrates that both preservice and inservice teachers’ pedagogical beliefs about the appropriate role of ICT in education are a critical indicator of classroom use of technology (Czerniak, Lumpe, Haney, & Beck, 1999; Ertmer, 2005; Guerrero, Walker, & Dugdale, 2004; Schmidt, 1999).
Teachers’ use of technology is influenced by organizational context, in addition to teachers’ beliefs and other technology-related factors (C.-H. Chen, 2008; Clausen, 2007; Hermans, Tondeur, van Braak, & Valcke, 2008; Higgins & Spitulnik, 2008; Hu, Clark, & Ma, 2003; Lim & Chai, 2008; Schrum, 1999; Tearle, 2003). Specifically, research has shown that access to technology, a supportive school culture, and adequate time for preservice teachers to explore educational uses of technology are essential for successful technology integration. Each of these issues is discussed below.
First of all, access means not only the availability of hardware and software but also the appropriate type of technology and programs that support teaching and learning (Tondeur, Valcke, & van Braak, 2008). Access to appropriate technology means that the affordances and constraints (Freidhoff, 2008) of a technological tool need to be carefully considered when the tool is incorporated into a lesson. Moreover, a distinction regarding access needs to be made: typically, teachers have easier access to technology than students. For example, in Dexter and Riedel’s (2003) study of preservice teachers during their student teaching, nearly twice as many preservice teachers indicated that computers were available for teacher use (34.7%) as indicated that they were available for student use (14.4%). Clearly, a student-centered approach to technology integration calls for students’ access to quality technology resources.
A supportive culture at the school site where preservice teachers’ practicum occurs is another important factor. Necessary support from administration, cooperating teachers, and other teachers and technical staff has been shown to be a key factor influencing preservice teachers’ intention to use and actual use of technology resources (Bullock, 2004; Dexter & Riedel, 2003). Particularly for preservice teachers, the practicum experience will be conducive to professional growth in ICT if they are placed in an encouraging environment in which they feel comfortable to try and even to fail as they integrate technology in their teaching. Moreover, one-on-one technology support was considered a necessary part of many projects aimed at improving preservice teachers’ capacity to use technology in their teaching (Mims et al., 2006).
Finally, time is often cited as a key factor influencing teachers’ use of technology (e.g., Franklin, 2007), even for “tech-savvy” teachers (Bauer & Kenton, 2005). The issue of time is dire for preservice teachers in their practicum as they work very hard to learn about the school and students and to prepare new lessons. Time is a deciding factor in the extent to which preservice teachers can devote themselves to exploring new ideas and materials, organizing various technology resources for effective student learning, working with cooperating teachers, and reflecting on their teaching (Bullock, 2004; Russell, Bebell, O’Dwyer, & O’Connor, 2003).
2.4. Types and ways of preservice and new teachers’ technology use
As stated in Section 1, many studies examining factors associated with teachers’ use of technology fail to identify the types of technology used in classrooms or how they are used. In fact, “technology use” is defined very differently across studies, and many studies involve only a generic notion of teachers’ use of technology. The following two studies are exceptions.
Dexter and Riedel (2003) surveyed 201 preservice teachers about their use of different types of technology and the conditions facilitating or otherwise affecting that use during student teaching. Word-processing programs and Internet browsers were commonly used by these preservice teachers. Spreadsheet, presentation, and database programs were not used often. In addition, the study showed that students were not given adequate opportunities to use these technological tools. In general, students occasionally used word processors and the Internet, and they rarely used spreadsheet, presentation, or database programs in their learning.
Russell, Bebell, O’Dwyer, and O’Connor (2003) investigated the relationship between teachers’ years of teaching experience and the manner in which technology was used in their study of 2894 K-12 teachers in Massachusetts, United States. Special attention was given to understanding how new teachers (defined as those with less than 5 years of teaching experience) used technology in their teaching compared with mature teachers (with 6–10 years of experience) and retirement-age teachers (with more than 15 years of experience). The comparisons were organized into the following four ways of technology use: (a) teacher use of technology for preparation, (b) teacher use of e-mail, (c) teacher use of technology for delivery of lessons, and (d) teacher-directed student use of technology. The results showed that new teachers had a higher level of preparation use than both the mature and retirement-age teachers and that new teachers used e-mail more intensively than retirement-age teachers. However, new teachers had a lower level of teacher-directed student use than either mature teachers or retirement-age teachers.
Russell, Bebell, O’Dwyer, and O’Connor (2003) noted that, although new teachers had higher technology skills than veteran teachers, they did not display higher levels of technology use in the classroom, especially in the student use category. The researchers provided two reasons. First, new teachers’ training may have focused on learning how to use technology rather than on how to integrate technology into the content areas. Second, the first few years of teaching are challenging, and new teachers typically spend most of their time and energy getting acquainted with curriculum and classroom management rather than on technology integration. The researchers discussed implications for the preparation of preservice teachers and argued for a focus on specific instructional uses of technology instead of general technology skills.
Similar to the teacher-directed student use of technology in Russell et al.’s (2003) research, the current study focuses on preservice teachers’ instructional use of technology to facilitate student-centered learning. This refers to the degree to which participating teachers provided opportunities for students to use technology to do projects, collect information/data, and share their ideas with peers by means of classroom presentations. Therefore, the following activities are not considered: teachers’ use of software programs to gather information, write up lesson plans, and prepare instructional materials; teachers’ use of PowerPoint, document scanners, and interactive whiteboards to present lessons; and teachers’ use of e-mail, blogs, and Web 2.0 applications for communication purposes.
3. Methods
The main purpose of the study was to develop a SEM model that adequately represents factors influencing preservice teachers’ use of technology resources to support student-centered learning. Based on the above review of research, these factors are: preservice teachers’ training in teaching with technology, perceived value of technology integration, perceived self-efficacy of teaching with technology, and contextual factors at the school site of their practicum. Each of these latent constructs has two to three indicators (see Table 1). Data were collected by surveying 206 preservice teachers during their student teaching. The AMOS statistical software package was used to calculate parameter estimates and analyze model fit.
A questionnaire was developed to measure the variables in the SEM model. The questionnaire consisted of both published (Cassidy & Eachus, 2002; McGinnis et al., 2002) and researcher-developed instruments. A description of each variable is provided in Table 1 (see Section 2 for a discussion of each of these variables and indicators).
Two methods were used to collect data from preservice teachers at a comprehensive university in the United States. First, an e-mail invitation was sent to 115 preservice elementary teachers to complete the on-line version of the questionnaire. Participation was voluntary and anonymous. With a few reminders, 78 of them (or 68%) successfully completed the on-line survey. This process took 2 weeks near the end of the preservice teachers’ student teaching experiences. In addition, to increase the return rate, the paper-based version of the questionnaire was distributed by the researcher to 136 preservice secondary teachers during methods classes. This survey was also anonymous. Eight teachers’ responses were not complete and were excluded from the sample. Therefore, the final sample consisted of 206 preservice teachers. The overall return rate for both surveys was 82%. A convenience sample was used to include as many preservice teachers as possible on this campus because the SEM analysis in the study required a relatively large sample, preferably 150 participants or more. The participants were between 22 and 31 years old. Among them, 28 were male, and 178 were female. On average, they used the computer 14.3 hours per week. It took approximately 20 minutes for the participants to complete the questionnaire.
According to Pedhazur (1997), SEM, which grew out of multiple regression, is a more powerful way to test the tenability of causal models involving a set of independent and dependent variables. Unlike multiple regression, SEM takes into account measurement errors, correlated residuals, modeling of interactions, nonlinearities, and correlated independent variables. It is particularly useful in social and behavioral research, where many variables (e.g., motivation, anxiety, and attitudes) are not directly observable. In SEM, such a variable is called a latent variable. To capture such a latent variable validly and reliably, more than a single indicator (observed variable) is necessary. SEM also has the ability to model mediating variables rather than being restricted to an additive model. In other words, SEM allows researchers to decompose the relationship between two variables into direct, indirect (through mediators), unanalyzed (due to correlated causes), and spurious (due to common causes) effects. The sum of direct and indirect effects is the total effect. Moreover, researchers can use SEM to compare alternative models to assess relative model fit, an approach adopted in the current study.
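To make the decomposition concrete, consider a sketch with generic variables rather than the study’s constructs: an exogenous variable X that affects an outcome Y both directly and through a mediator M. With standardized path coefficients, the effects combine as

\[ \text{total effect of } X \text{ on } Y \;=\; \underbrace{\beta_{YX}}_{\text{direct}} \;+\; \underbrace{\beta_{MX}\,\beta_{YM}}_{\text{indirect via } M}. \]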
Table 1
Latent variables and their indicators in the SEM model: CONTEXT — access to technology, administrative & technical support, and time for integrating technology; TRAINING — technology skills & teacher education experience; EFFICACY — computer user self-efficacy and self-efficacy of teaching with technology; VALUE — beliefs about teaching with technology and beliefs about learning with technology; USE — student use of technology for information/data gathering, presentations, and projects.
Fig. 1. Proposed model illustrating preservice teachers’ use of technology for student-centered learning.
In the proposed model (Fig. 1), TRAINING is an exogenous variable whose variation is assumed to be determined by causes outside the model. Another exogenous variable is CONTEXT. It is hypothesized that CONTEXT, VALUE, and EFFICACY have a direct effect on USE. Lastly, TRAINING has an indirect effect on USE via VALUE and EFFICACY.
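For readers who wish to see how such a hypothesized model can be expressed in software, the sketch below specifies the measurement and structural parts in lavaan-style syntax using the Python package semopy. This is an illustration only: the indicator column names and the data file are hypothetical, and the actual analysis in the study was carried out in AMOS.

```python
import pandas as pd
from semopy import Model, calc_stats

# Measurement part (latent =~ indicators) and structural part (~), mirroring the
# hypothesized paths in Fig. 1; the final line lets the two exogenous latent
# variables (CONTEXT, TRAINING) covary.
MODEL_DESC = """
TRAINING =~ program + skill
VALUE =~ teaching_belief + learning_belief
EFFICACY =~ computer_se + teach_w_tech
CONTEXT =~ support + time + access
USE =~ info_data + presentation + project
VALUE ~ TRAINING
EFFICACY ~ TRAINING
USE ~ CONTEXT + VALUE + EFFICACY
CONTEXT ~~ TRAINING
"""

data = pd.read_csv("preservice_survey.csv")  # hypothetical indicator-level data
model = Model(MODEL_DESC)
model.fit(data)

print(model.inspect())    # parameter estimates (loadings, path coefficients)
print(calc_stats(model))  # fit indices such as CFI, RMSEA, GFI, AGFI
```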
4. Results
The descriptive statistics of the observed variables and their internal consistency (Cronbach’s α values) are listed in Table 2. Kline (2005) posits that an α value of .90 and up is considered excellent, .80 very good, and .70 acceptable. Accordingly, the observed variables in this study had good internal consistency. The correlations among these variables are reported in Table 3.
Table 2
Descriptive statistics of the variables and their reliabilities (n = 206).
Latent variable / Observed indicator    Min.    Max.    Mean    Standard deviation    Skewness    Kurtosis    Cronbach’s α
TRAINING
Program 11 33 23.15 5.02 0.32 0.72 .83
Skill 16 33 22.12 3.26 0.33 0.18 .90
VALUE
Teaching belief 19 37 28.22 4.13 0.02 0.81 .75
Learning belief 17 34 25.70 3.92 0.03 0.76 .81
EFFICACY
Computer self-efficacy 50 115 82.99 12.73 0.10 0.24 .93
Teach w/technology 17 37 27.11 4.09 0.16 0.33 .85
CONTEXT
Support 1 6 3.57 1.11 0.02 0.29 .88
Time 1 8 3.98 1.37 0.26 0.38 .95
Access 2 8 4.47 1.42 0.41 0.32 .92
USE
Info and data 0 4 1.55 1.18 0.28 0.82 –
Presentation 0 4 1.78 1.20 0.04 0.97 –
Project 0 4 1.14 0.98 0.32 0.87 –
Table 3
Intercorrelations between observed indicators (n = 206).
Variable 1 Prog 2 Skill 3 Teach 4 Learn 5 CSF 6 TwT 7 Supp 8 Time 9 Access 10 Info 11 Pres 12 Proj
Program –
Skill .577** –
Teaching belief .357** .318** –
Learning belief .344** .320** .589** –
Computer self-efficacy .284** .479** .299** .272** –
Teach w/technology .388** .420** .302** .323** .684** –
Support .282** .319** .229** .242** .198** .216** –
Time .341** .466** .316** .313** .308** .285** .563** –
Access .164* .205** .029 .044 .104 .167* .205** .144* –
Info and data .438** .475** .377** .375** .447** .481** .417** .599** .194** –
Presentation .506** .486** .360** .375** .489** .563** .397** .440** .335** .723** –
Project .471** .408** .332** .299** .305** .391** .271** .301** .101 .500** .580** –
The reliability and validity of the measurement model need to be satisfactory before proceeding to the structural model. The factor loadings for the measurement model are listed in Table 4. These factor loadings indicate the relationship between a latent variable and each of its constituent observable indicators. The results show that the majority of the factor loadings were above 0.70, which indicates an internally consistent measure. For CONTEXT, the factor loading of Access was low, at 0.28, in the initial model. This observation suggested removing Access from the model.
The reliability and convergent validity of the latent variables were estimated by construct reliability and average variance extracted, both of which can be derived from the factor loadings and measurement errors. Specifically, construct reliability = (Σ standardized loading)² / [(Σ standardized loading)² + Σ e_j], where e_j is the measurement error of each indicator; average variance extracted = Σ (standardized loading)² / [Σ (standardized loading)² + Σ e_j]. The results are reported in Table 5.
To be considered adequate, construct reliability should be at least 0.70, and average variance extracted should be at least 0.50. Based on
these guidelines, in the initial model, CONTEXT had inadequate reliability and validity because of the low factor loading of Access (0.28, see
Table 4). The revised model did not include Access, and all latent variables in the revised model had satisfactory reliability and convergent
validity.
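To illustrate these two formulas, the sketch below computes construct reliability and average variance extracted from a set of standardized loadings, assuming (as is conventional for standardized indicators) that each indicator’s error variance is one minus its squared loading. The loadings shown are hypothetical, not the values in Table 4.

```python
import numpy as np

def construct_reliability(loadings) -> float:
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2                  # error variance of each standardized indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings) -> float:
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2
    return (lam ** 2).sum() / ((lam ** 2).sum() + errors.sum())

# Hypothetical standardized loadings for a two-indicator latent variable.
loadings = [0.85, 0.78]
print(round(construct_reliability(loadings), 2))       # exceeds the 0.70 guideline
print(round(average_variance_extracted(loadings), 2))  # exceeds the 0.50 guideline
```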
4.3. Model fit
A test of model fit indicates the degree of alignment between the theorized model and the collected data. As stated in Section 3.3.2,
various indicators should be considered to provide a broad picture of how the model matches the data. These indicators are reported in
Table 6. The fit indicators showed a moderate fit of the initial model.
AMOS reports model modification indexes that can help increase model fit. However, any changes to the initial model should be based on previous research and sound theories rather than on merely statistical grounds. Two changes were made to the initial model. First, a correlation was established between Skill and Computer self-efficacy, because teachers with advanced technology skills tend to have high self-efficacy with regard to technology integration (Ropp, 1999). Second, parsimony is another consideration for model modification: it is hoped that a model can explain a phenomenon using a small set of variables and relationships. In the initial model, the factor loading of Access to CONTEXT was low, so Access was removed from the model. Applying these two changes to the initial model improved the model fit. The fit indicators are summarized in Table 6. It is apparent that the revised model had a better fit than the original model.
Table 4
Factor loadings for the measurement model.
Table 5
Reliability and convergent validity of the latent variables.
Table 6
Model fit indicators of the initial and revised models.
Note: df: degrees of freedom; GFI: goodness-of-fit index; AGFI: adjusted goodness-of-fit index; CFI: comparative fit index; SRMR: standardized root mean-square residual; RMSEA: root mean-square error of approximation. The suggested guidelines are based on Bollen (1989), Kline (2005), and Pedhazur (1997).
The above indexes concern overall model fit. It is also necessary to examine individual components, because the fit of some portion of the model may be poor despite a good overall model fit. Inspection of the residual covariances can help to identify particular observed associations that are poorly explained by the model. For instance, the observed covariance between Program and Skill was 9.396, and the predicted covariance between these two variables was 9.527. The residual covariance was therefore −0.131 (9.396 − 9.527), or 0.099 in absolute value when standardized. Since the size of residuals is affected by the units of the measures of the observed variables, standardized residuals were used. Table 7 contains the standardized residual covariances in the upper triangle and the observed covariances in the lower triangle.
Pedhazur (1997) suggests that a standardized residual is considered large if it is greater than 2.58 in absolute value. Accordingly, the
standardized residual covariances in Table 7 were considered small. Therefore, each component of the model was a good fit to the observed
data.
The revised structural equation model is shown in Fig. 2. USE was significantly determined by CONTEXT (β = 0.43, p < 0.001), VALUE (β = 0.18, p = 0.02), and EFFICACY (β = 0.45, p < 0.001). These three factors accounted for an R² of 0.72, indicating that the endogenous variable, USE, could be strongly predicted. In addition, TRAINING significantly influenced VALUE (β = 0.49, p < 0.001) and EFFICACY (β = 0.60, p < 0.001). The correlation between the two exogenous factors, CONTEXT and TRAINING, was 0.68 with p < 0.001. That is, preservice teachers’ training in teaching with technology was highly related to how they perceived the resources and support at the school site.
Table 7
Observed covariances and standardized residual covariances.
Variable 1 Prog 2 Skill 3 Teach 4 Learn 5 CSF 6 TwT 7 Supp 8 Time 9 Info 10 Pres 11 Proj
Program – 0.099 0.293 0.086 0.745 0.050 0.827 0.765 0.424 0.941 2.130
Skill 9.396 – 0.535 0.539 0.115 0.021 0.640 0.523 0.548 0.279 0.961
Teaching belief 7.370 4.265 – 0.051 0.318 0.228 0.271 1.001 0.303 0.258 0.774
Learning belief 6.732 4.068 9.488 – 0.066 0.027 0.425 0.933 0.248 0.087 0.306
Computer self-efficacy 18.065 19.773 15.633 13.531 – 0.068 0.201 0.844 0.019 0.156 0.561
Teach w/technology 7.930 5.570 5.077 5.145 35.433 – 0.413 0.004 0.443 0.138 0.121
Support 1.558 1.143 1.043 1.042 2.773 0.971 – 0.243 0.421 0.213 0.393
Time 2.338 2.071 1.782 1.673 5.340 1.590 0.849 – 0.658 0.516 0.645
Info and data 2.590 1.824 1.834 1.730 6.700 2.313 0.543 0.805 – 0.032 0.288
Presentation 3.027 1.885 1.772 1.751 7.411 2.738 0.522 0.718 1.018 – 0.254
Project 2.305 1.295 1.334 1.114 3.778 1.588 0.292 0.401 0.576 0.675 –
Note: Upper triangle: standardized residual covariances; lower triangle: observed covariances.
Fig. 2. SEM model of preservice teachers’ use of technology with standardized parameter estimates.
Table 8 contains the direct and indirect effects of each of the factors on preservice teachers’ use of technology. EFFICACY had the strongest total effect on USE among the four factors. Thus, preservice teachers’ perceived confidence in using computers in general, and in integrating technology in their teaching in particular, was a strong determinant of their decisions to use technology to facilitate students’ learning with technology. Most of the total effect of EFFICACY on USE was a direct effect; the indirect effect mediated by VALUE was very weak.
The school context was also an important determinant. CONTEXT, measured by preservice teachers’ perceived level of support and adequate time for technology use, had a moderate direct impact on preservice teachers’ provision of opportunities for students to learn with technology.
TRAINING did not have a direct effect on USE but had a moderate indirect effect via both VALUE and EFFICACY. Thus, preservice teachers’ levels of technology skills and their teacher education experience had a moderate influence on their actual use of technology. Lastly, VALUE had a weak direct effect on USE. In other words, after accounting for the effects of other factors on USE, preservice teachers’ perceived value of using technology in the teaching and learning process had only a small influence on USE.
Table 8
Direct, indirect, and total effects on USE (R² = 0.72).
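As a worked illustration of how the mediated effects in Table 8 follow from the standardized coefficients reported above for Fig. 2, the total effect of TRAINING on USE, which is wholly indirect, is the sum of the products along its two indirect paths:

\[ 0.49 \times 0.18 \;+\; 0.60 \times 0.45 \;=\; 0.09 + 0.27 \;\approx\; 0.36, \]

which matches the TRAINING effect size of 0.36 discussed below.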
Although research abounds on factors influencing teachers’ use of technology, the current study involved SEM and contributed beyond confirming the conventional wisdom developed over the years. Research indicates that the introductory step for computers in schools is to use them for administrative tasks rather than as part of the learning process (Demetriadis et al., 2003). The current research focused on teachers’ use of technology specifically to support student-centered learning. Moreover, this study systematically analyzed a wide array of factors influencing preservice teachers’ use of the technological resources available to them. The resulting model revealed the relative importance of these factors in terms of their direct and mediated effects instead of merely providing a list of factors.
These factors can be roughly divided into two categories, and a close relationship exists between them. First, TRAINING, VALUE, and
EFFICACY are intrinsic to preservice teachers. They refer to preservice teachers’ technology skills, teacher education program experiences,
and underlying (sometimes deeply ingrained) beliefs and perceived efficacy about teaching and learning with technology. Together, these
three intrinsic constructs had a strong influence on preservice teachers’ use of technology. EFFICACY (effect size = 0.48) was the strongest
determinant of technology use, followed by TRAINING (effect size = 0.36). VALUE’s influence was weak (effect size = 0.18).
Recall that the expectancy–value theory stipulates that people’s perceived value and self-efficacy of performing a task determine their intention to do so (see Section 2.2), but the theory does not specify the relative importance of these two factors. The current study shows that self-efficacy had a stronger influence than perceived value in terms of technology integration. This confirms a previous research finding that although most teachers see the value and benefits of technology in education, many do not use technology in their teaching (OTA, 1995). Teachers’ perceived ability to use technology effectively and produce the desired results is a deciding factor.
Previous research (e.g., Becker, 1994; Ertmer et al., 2006) addresses the influence of teachers’ training on their decisions to use technology, assuming that such influence is direct. However, the resulting model in the current study shows that the influence of TRAINING on preservice teachers’ use of technology was mediated by both VALUE and EFFICACY. This finding shows that preservice teachers’ perceptions of the benefits of ICT and their perceived efficacy of teaching with technology act as mediators that shape how their training is enacted in their decisions on technology use. Indeed, the purpose of training is to help preservice teachers appreciate the value and become aware of the strengths and limitations of ICT and to boost their self-efficacy of teaching with technology, in the hope of promoting effective use of ICT for students’ meaningful learning.
Another category includes factors that are extrinsic to preservice teachers. The construct CONTEXT refers to preservice teachers’ perceptions about the instructional resources (technological equipment, time, and support) available at the school site. This factor had a moderate influence on preservice teachers’ use of technology (effect size = 0.43). However, it is naïve to assume that as long as adequate resources and support are provided to teachers, technology integration will follow (Ertmer, 2005), as other factors can be involved. Indeed, the results of the study show that the correlation between CONTEXT and TRAINING was 0.68. This high correlation indicates that preservice teachers’ levels of technology skills and their teacher education experience are related to how they perceive the resources and support at the school site. Preservice teachers who have a significant amount of training may not consider a lack of adequate equipment, time, and support as deterrents; it is possible that they have the skills and positive dispositions to better overcome the challenges of technology integration than teachers who have a limited amount of training in technology.
The high correlation between CONTEXT and TRAINING also implies that efforts to prepare new teachers to use technology effectively should synchronize coursework with field experiences. A gap can exist between these two venues; preservice teachers with adequate training in ICT in coursework may not be placed in a supportive student teaching site that facilitates effective use of ICT in lessons. If
possible, teacher education providers should seek out field sites with ample technology resources and supports. Dexter and Riedel (2003)
argued that teacher education programs should set high expectations for preservice teachers’ use of technology during student teaching
and that both cooperating teachers and university supervisors need preparation to know how to facilitate student teachers’ technology
integration.
Since preservice teachers work very closely with their cooperating teachers during student teaching, teacher education programs can
deliberately train cooperating teachers so they can provide necessary support and facilitate technology integration. Dexter and Riedel
(2003) suggested three approaches: (a) developing cooperating teachers’ expertise through workshops, (b) encouraging cooperating teachers to plan technology projects with preservice teachers, and (c) implementing field-based faculty modeling that involves concrete examples of technology integrated into curriculum. Together, these approaches can help boost preservice teachers’ beliefs and self-efficacy of
integrating technology in their teaching, which are shown in this study to mediate technology use.
Moreover, Russell, Bebell, O’Dwyer, and O’Connor (2003) suggested that preparing teachers, both preservice and inservice, to use technology should focus on specific instructional uses of technology rather than on familiarizing them with technology in general. As discussed in Sections 1 and 2, commentators often treat teachers’ use of technology as one generic construct rather than a complex phenomenon. Given the multi-dimensional nature of technology use, it will be effective if teacher education providers distinguish various types of technology use and articulate each one. In particular, the current study investigates preservice teachers’ use of technology to support student-centered learning, and thus an implication for teacher education is that efforts to train preservice teachers to use technology can focus on designing and implementing technology-supported projects in which students use technology in their learning. For example, this particular manner of technology use in mathematics education is the focus of the edited book by Masalski and Elliott (2005). Preservice teachers can benefit from seeing how technology can be specifically integrated into and become immanent in the curriculum, rather than an addition to existing lessons.
A limitation of the current study is that self-report scales were used to measure the variables for analysis. The results can be biased because the participants might give socially desirable responses, especially when the researchers were course instructors. The second limitation pertains to the particular group of preservice teachers in the study. Rather than random sampling, a convenience sample was used. A
description of the characteristics of the participating teachers is provided in Section 3.2. Research involving both similar and different teacher characteristics can be conducted to exhaust this line of enquiry so that we can gain a deeper and broader understanding of the
phenomenon.
Another limitation of the current study is that CONTEXT did not address the “soft” aspect of the social and contextual factors in teachers’ daily decision making regarding technology integration in their teaching. Cuban (1986, 2001) has developed a construct, “situationally constrained choice,” for understanding classroom teachers’ decisions on classroom strategies and in particular on technology use. He argues that teachers’ decision making is strongly shaped by school and classroom settings, including school schedules, curriculum, and the culture of teaching. Teachers also take a very practical stance toward what to do and how to do things in order for them to “survive” in the classroom. In fact, Cuban’s review of the history of the diffusion of technology in American schools has suggested that teachers’ practical and tacit knowledge plays a fundamental role in their decisions about adopting new instructional strategies and technology.
Thus, future research can include these considerations. Specifically, Cuban’s notion of teachers’ “situationally constrained choice” can be treated as a latent variable, which is particularly suited to a SEM analysis. It is not possible to capture such a complex construct validly and reliably with a single observed variable. Instead, this latent variable can include multiple indicators such as preservice teachers’ perceptions about the curriculum standards and pacing guides they need to address, the types of lessons they are expected to teach (see Henning et al., 2006), and their perceptions about students. Indeed, SEM allows for testing alternative models that adequately stipulate the relationships among variables and to what extent each variable influences or mediates the effects of other variables. Pedhazur (1997) emphasizes the importance of testing alternative models because researchers cannot exclude the explanations they have not considered. However, the formulation of meaningful alternative models should be based on theories rather than assumptions.
To summarize, the study addressed two limitations of previous research. First, the study involved a SEM analysis of factors that influence preservice teachers’ adoption of ICT. It sought to unveil the relative weights of these intercorrelated factors rather than merely identifying a list of significant factors. Second, preservice teachers’ use of technology was specifically defined as facilitating a student-centered approach to teaching. However, the concealed nature of teachers’ belief systems and the complexity of classroom context bring challenges and limitations to the study. A few strategies for future research are provided to overcome these challenges and limitations and to increase our understanding of teachers’ decision making concerning the educational use of technology. Such an understanding is crucial for improving the learning experience of preservice teachers in a teacher education program.
Teachers have been blamed by technology proponents for resisting or being slow to adopt technology in teaching (Ferneding, 2003). Indeed, commentary on teachers’ learning to teach with technology can have a pathological tone. In particular, technology-using teachers are regarded as “normal” and “healthy,” whereas teachers who doubt or resist technology are seen as anomalies who need to be treated. What is missing in this pathological approach to studying teachers’ process of technology adoption is a genuine and multi-dimensional understanding of the complexity of such a process. The present study addresses this process from a specific perspective by utilizing a SEM analysis. This vision is partial and needs to be complemented by others; it is by no means sufficient to account for the complicated phenomenon under study. Some teachers will have integrated ICT in their teaching after thinking through the issues and reaching a mature decision. Others, however, will have made the choice by clutching at recommendations ready-made by technology proponents, who may have particular political and commercial agendas in education. Such a difference in teachers’ rationality is not discernible by the SEM model in the current study. Research framed in different methodologies (such as qualitative case study, autobiography, phenomenological research, and so on) can be conducted to explore the multiplicity of the meanings of teachers’ technology adoption. Only through multiple perspectives can we forsake a narrow way of understanding teachers’ work and life and adopt a more open-minded approach to teacher education.
References
Baek, Y., Jung, J., & Kim, B. (2008). What makes teachers use technology in the classroom? Exploring the factors affecting facilitation of technology with a Korean sample.
Computers & Education, 50, 224–234.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215.
Bauer, J., & Kenton, J. (2005). Toward technology integration in the schools: Why it isn’t happening. Journal of Technology and Teacher Education, 13(4), 519–546.
Becker, H. J. (1994). How exemplary computer-using teachers differ from other teachers: Implications for realizing the potential of computers in schools. Journal of Research in
Computing in Education, 26(3), 291–321.
Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley & Sons.
British Educational Communications and Technology Agency (Becta). (2009). Leading next generation learning. <http://www.becta.org.uk/> Accessed 02.03.2009.
Bullock, D. (2004). Moving from theory to practice: An examination of the factors that preservice teachers encounter as they attempt to gain experience teaching with
technology during field placement experiences. Journal of Technology and Teacher Education, 12(2), 211–237.
Cassidy, S., & Eachus, P. (2002). Developing the Computer User Self-Efficacy (CUSE) scale: Investigating the relationship between computer self-efficacy, gender and
experience with computers. Journal of Educational Computing Research, 26(2), 133–153.
Chen, C.-H. (2008). Why do teachers not practice what they believe regarding technology integration? Journal of Educational Research, 102(1), 65–75.
Chen, R.-J., & Ferneding, K. (2003). Technology as a heuristic: How pre-service teachers learn to think about mathematics instruction using technology. In C. Crawford, N.
Davis, J. Price, & R. Webber (Eds.), Proceedings of the fourteenth annual meeting of the society for information technology & teacher education (pp. 3441–3444). Norfolk, VA:
Association for the Advancement of Computing in Education.
Chen, Y.-L. (2008). Modeling the determinants of Internet use. Computers & Education, 51, 545–558.
Clausen, J. M. (2007). Beginning teachers’ technology use: First-year teacher development and the institutional context’s affect on new teachers’ instructional technology use
with students. Journal of Research on Technology in Education, 39(3), 245–261.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.
Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.
Czerniak, C., Lumpe, A., Haney, J., & Beck, J. (1999). Teachers’ beliefs about using educational technology in the science classroom. International Journal of Educational
Technology, 1(2), 1–18.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
Demetriadis, S., Barbas, A., Molohides, A., Palaigeorgiou, G., Psillos, D., Vlahavas, I., et al. (2003). “Cultures in negotiation”: Teachers’ acceptance/resistance attitudes
considering the infusion of technology into schools. Computers & Education, 41(1), 19–37.
Dexter, S., & Riedel, E. (2003). Why improving preservice teacher educational technology preparation must go beyond college’s walls. Journal of Teacher Education, 54(4),
334–346.
Drent, M., & Meelissen, M. (2008). Which factors obstruct or stimulate teacher educators to use ICT innovatively? Computers & Education, 51, 187–199.
Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25–39.
Ertmer, P. A., Ottenbreit-Leftwich, A., & York, C. S. (2006). Exemplary technology-using teachers: Perceptions of factors influencing success. Journal of Computing in Teacher
Education, 23(2), 55–61.
Feather, N. T. (1982). Expectations and actions: Expectancy–value models in psychology. Hillsdale, NJ: Erlbaum.
Ferneding, K. (2003). Questioning technology: Electronic technologies and educational reform. New York: Peter Lang.
Franklin, C. (2007). Factors that influence elementary teachers use of computers. Journal of Technology and Teacher Education, 15(2), 267–293.
Freidhoff, J. R. (2008). Reflecting on the affordances and constraints of technologies and their impact on pedagogical goals. Journal of Computing in Teacher Education, 24(4),
117–122.
Guerrero, S., Walker, N., & Dugdale, S. (2004). Technology in support of middle grade mathematics: What have we learned? Journal of Computers in Mathematics and Science
Teaching, 23(1), 5–20.
Harrison, C., Comber, C., Fisher, T., Haw, K., Lewin, C., Lunzer, E., et al. (2002). Impact2: The impact of information and communication technologies on pupil learning and
attainment. Coventry, UK: Becta.
Henning, J. E., Robinson, V. L., Herring, M. C., & McDonald, T. (2006). Integrating technology during student teaching: An examination of teacher work samples. Journal of
Computing in Teacher Education, 23(2), 71–76.
Hermans, R., Tondeur, J., van Braak, J., & Valcke, M. (2008). The impact of primary school teachers’ educational beliefs on the classroom use of computers. Computers &
Education, 51(4), 1499–1509.
Higgins, T. E., & Spitulnik, M. W. (2008). Supporting teachers’ use of technology in science instruction through professional development: A literature review. Journal of Science
Education and Technology, 17, 511–521.
Hu, P. J.-H., Clark, T. H. K., & Ma, W. W. (2003). Examining technology acceptance by school teachers: A longitudinal study. Information & Management, 41, 227–241.
International Society for Technology in Education (ISTE) (2008). National Educational Technology Standards. <http://www.iste.org/AM/Template.cfm?Section=NETS> Accessed
12.06.2009.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford Press.
Lim, C. P., & Chai, C. S. (2008). Teachers’ pedagogical beliefs and their planning and conduct of computer-mediated classroom lessons. British Journal of Educational Technology,
39(5), 807–828.
Masalski, W. J., & Elliott, P. C. (Eds.) (2005). Technology-supported mathematics learning environments: Sixty-seventh yearbook. Reston, VA: National Council of Teachers of
Mathematics.
McGinnis, J. R., Kramer, S., Shama, G., Graeber, A. O., Parker, C. A., & Watanabe, T. (2002). Undergraduates’ attitudes and beliefs about subject matter and pedagogy measured
periodically in a reform-based mathematics and science teacher preparation program. Journal of Research in Science Teaching, 39(8), 713–737.
Mims, C., Polly, D., Shepherd, C., & Inan, F. (2006). Examining PT3 projects designed to improve preservice education. TechTrends, 50(3), 16–24.
Norton, S., McRobbie, C., & Cooper, T. (2000). Exploring secondary mathematics teachers’ reasons for not using computers in their teaching: Five case studies. Journal of
Research on Computing in Education, 33(1), 87–109.
Office of Technology Assessment (OTA) (1995). Teachers and technology: Making the connection. (GPO Rep. No. 052–003-01409–2). Washington, DC: United States Congress
Report.
Pedhazur, E. J. (1997). Multiple regression in behavioral research (3rd ed.). Orlando, FL: Harcourt Brace.
Ropp, M. (1999). Exploring individual characteristics associated with learning to use computers in pre-service teacher preparation. Journal of Research on Computing in
Education, 31(4), 402–424.
Russell, M., Bebell, D., O’Dwyer, L., & O’Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher
Education, 54(4), 297–310.
Sandholtz, J. H., & Reilly, B. (2004). Teachers, not technicians: Rethinking technical expectations for teachers. Teachers College Record, 106(3), 487–512.
Schmidt, M. (1999). Middle grade teachers’ beliefs about calculator use: Pre-project and two years later. Focus on Learning Problems in Mathematics, 21(1), 18–34.
Schrum, L. (1999). Technology professional development for teachers. Educational Technology Research and Development, 47(4), 83–90.
Smith, P., Rudd, P., & Coghlan, M. (2008). Harnessing Technology: Schools Survey 2008 <http://partners.becta.org.uk/index.php?section=rh&catcode=_re_rp_02&rid=15952>
Accessed 12.01.2009.
Tearle, P. (2003). ICT implementation: What makes the difference? British Journal of Educational Technology, 34(5), 567–583.
Teo, T. (2009). Modeling technology acceptance in education: A study of pre-service teachers. Computers & Education, 52, 302–312.
Tondeur, J., Valcke, M., & van Braak, J. (2008). A multidimensional approach to determinants of computer use in primary education: Teacher and school characteristics. Journal
of Computer Assisted Learning, 24, 494–506.
Wang, Y.-S., Wu, M.-C., & Wang, H.-Y. (2009). Investigating the determinants and age and gender differences in the acceptance of mobile learning. British Journal of Educational
Technology, 40(1), 92–118.
Wigfield, A. (1994). Expectancy–value theory of achievement motivation: A developmental perspective. Educational Psychology Review, 6, 49–78.
Wozney, L., Venkatesh, V., & Abrami, P. C. (2006). Implementing computer technologies: Teachers’ perceptions and practices. Journal of Technology and Teacher Education,
14(1), 173–207.