Evaluating the learning in learning objects
Robin H. Kay and Liesel Knaack
Open Learning: The Journal of Open and Distance Learning, 22(1), February 2007, 5–28
DOI: 10.1080/02680510601100135
URL: http://dx.doi.org/10.1080/02680510601100135
A comprehensive review of the literature on the evaluation of learning objects revealed a number of
problem areas, including emphasizing technology ahead of learning, an absence of reliability and
validity estimates, over-reliance on informal descriptive data, a tendency to embrace general impres-
sions of learning objects rather than focusing on specific design features, the use of formative or
summative evaluation, but not both, and testing on small, vaguely described sample populations
using a limited number of learning objects. This study explored a learning-based approach for evaluating learning objects using a large, diverse sample of secondary school students. The soundness of this approach was supported by estimates of reliability and validity, using formal statistics where applicable, incorporating both formative and summative evaluations, examining specific learning object features based on instructional design research, and testing a range of learning objects.
The learning-based evaluation tool produced useful and detailed information for educators, design-
ers and researchers about the impact of learning objects in the classroom.
Overview
According to Williams (2000), evaluation is essential for every aspect of designing
learning objects, including identifying learners and their needs, conceptualizing a
design, developing prototypes, implementing and delivering instruction, and improv-
ing the evaluation itself. It is interesting to note, however, that Williams (2000) does
not emphasize evaluating the impact of learning objects on ‘actual learning’. This
omission is representative of the larger body of research on learning objects. In a
recent review of 58 articles (see Kay & Knaack, submitted), 11 studies focused on the
evaluation of learning objects; however, only two papers examined the impact of
learning objects on learning.
A number of authors note that the ‘learning object’ revolution will never take place
unless instructional use and pedagogy are explored and evaluated (for example,
Wiley, 2000; Muzio et al., 2002; Richards, 2002). Duval et al. (2004) add that while
many groups seem to be grappling with issues that are related to the pedagogy of
learning objects, few papers include a detailed analysis of specific learning object
features that affect learning. Clearly, there is a need for empirical research that
focuses on evaluating the learning-based features of learning objects. The purpose of
this study was to explore and test a learning-based approach for evaluating learning
objects.
Literature review
Definition of learning objects
For the purposes of this paper, a learning object is defined as an interactive, web-based tool that supports the learning of specific concepts through graphical supports and scaffolding.
Evaluation of learning objects
Several approaches have been taken to evaluating learning objects. The first relies on summative assessment, where a variety of formats have been used, including general impressions gathered using
informal interviews or surveys, measuring frequency of use and assessing learning
outcomes. The ultimate goal of this kind of evaluation has been to get an overview of
whether participants valued the use of learning objects and whether their learning
performance was altered. The second approach involves the use of formative assess-
ment when learning objects are being developed, an approach that is strongly
supported by Williams (2000). Cochrane (2005) provides a good example of how this
kind of evaluation model works where feedback is solicited from small groups at regular
intervals during the development process. Overall, the formative approach to evalua-
tion is not well documented in the learning object literature.
A third approach, the convergent participation model advocated by Nesbit et al. (2002), gathers feedback from multiple stakeholder groups, such as instructional designers, subject-matter experts and media developers. Each of these groups offers feedback throughout the development
of a learning object. Ultimately a report is produced that represents multiple values
and needs. A number of studies evaluating learning objects gather information from
multiple sources (for example, Kenny et al., 1999; Bradley & Boyle, 2004; Krauss &
Ally, 2005; MacDonald et al. 2005), although the formal convergence of participant
values advocated by Nesbit et al. (2002) is not pursued. The convergent evaluation
model is somewhat limited by the typically small number of participants giving feed-
back. In other words, the final evaluation may not be representative of what a larger
population might observe or experience.
Methodological issues
At least six key observations are noteworthy with respect to methods used to evaluate
learning objects. First, most studies offer clear descriptions of the learning objects
used; however, considerable variation exists in content. Learning objects examined
included drill-and-practice assessment tools (Adams et al., 2004) or tutorials
(Jaakkola & Nurmi, 2004), video case studies or supports (Kenny et al., 1999;
MacDonald et al., 2005), general web-based multimedia resources (Van Zele et al.,
2003) and self-contained interactive tools in a specific content area (Bradley & Boyle,
2004; Cochrane, 2005). The content and design of a learning object needs to be
considered when examining quality and learning outcomes (Jaakkola & Nurmi, 2004;
Cochrane, 2005).
Second, a majority of researchers use multiple sources to evaluate learning objects,
including surveys, interviews or email feedback from students and faculty, tracking
the use of learning objects by students, think-aloud protocols and learning outcomes
(for example, Kenny et al., 1999; Bradley & Boyle, 2004; Krauss & Ally, 2005;
MacDonald et al. 2005).
Third, most evaluation papers focus on single learning objects (Kenny et al., 1999;
Adams et al., 2004; Bradley & Boyle, 2004; Krauss & Ally, 2005; MacDonald et al.,
2005); however, using an evaluation tool to compare a range of learning objects can
provide useful insights. For example, Cochrane (2005) compared a series of four
learning objects based on general impressions of reusability, interactivity and peda-
gogy, and found that different groups valued different areas. Also, Jaakkola and Nurmi (2004) examined the academic impact of using learning objects in different pedagogical settings.
Purpose
The purpose of this study was to explore a learning-based approach for evaluating
learning objects. Based on a detailed review of studies looking at the evaluation of
learning objects, the following practices were followed:
● a large, diverse sample of secondary school students was used;
● reliability and validity estimates were calculated;
● formal statistics were used where applicable;
● both formative and summative evaluations were incorporated;
● specific learning object features were examined based on instructional design research;
● a range of learning objects was tested; and
● the focus was on the learner, not the technology.
Method
Sample
Students. The sample consisted of 221 secondary school students (104 males, 116
females, one missing data), 13–17 years of age, in grade 9 (n = 85), grade 11 (n = 67)
and grade 12 (n = 69) from 12 different high schools and three boards of education.
Learning objects. Five learning objects in five different subject areas were evaluated
by secondary school students. Seventy-eight students used the mathematics learning
object (grade 9), 40 used the physics learning object (grades 11 and 12), 37 used the
chemistry learning object (grade 12), 34 used the biology learning object (grades 9
and 11) and 32 used the computer science learning object (grades 11 and 12). All
learning objects can be accessed online (http://education.uoit.ca/learningobjects). All
five learning objects met the criteria established by the definition of a learning object
provided for this paper. They were interactive, web-based and enhanced concept
formation in a specific area through graphical supports and scaffolding. A brief
description for each is provided below.
Mathematics. This learning object (slope of a line) was designed to help grade
9 students explore the formula and calculations for the slope of a line. Students used
their knowledge of slope to navigate a spacecraft through four missions. As the
missions progressed from level one to level four, less scaffolding was provided to solve
the mathematical challenges.
Physics. This learning object (relative velocity) helped grade 11 and grade
12 students explore the concept of relative velocity. Students completed two case-study
questions, and then actively manipulated the speed and direction of a boat, along with
the river speed, to see how these variables affect relative velocity.
Biology. This learning object (Mendelian genetics) was designed to help grade
11 students investigate the basics of Mendel’s genetics relating the genotype (genetic
trait) with the phenotype (physical traits) including monohybrid and dihybrid crosses.
Students had a visual scaffolding to predict and complete Punnett squares. Each
activity finished with an assessment.
Computer Science. This learning object (Boolean logic) was designed to teach
grade 10 or grade 11 students the six basic logic operations (gates)—AND, OR,
NOT, XOR (exclusive OR), NOR (NOT-OR) and NAND (NOT-AND)—through
a visual metaphor of water flowing through pipes. Students selected the least number
of inputs (water taps) needed to get a result in the single output (water holding tank)
to learn the logical function of each operation.
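To make the content of this learning object concrete, the six operations can be summarized in a few lines of code. The sketch below is purely illustrative; the learning object itself was built in Flash, and this Python code is not part of the original materials.

```python
# Illustrative truth-functional definitions of the six gates covered by the
# Boolean logic learning object (not code from the learning object itself).
def AND(a, b):  return a and b
def OR(a, b):   return a or b
def NOT(a):     return not a
def XOR(a, b):  return a != b             # exclusive OR: true when inputs differ
def NOR(a, b):  return not (a or b)       # NOT-OR
def NAND(a, b): return not (a and b)      # NOT-AND

# Enumerating every combination of two inputs is the code analogue of opening
# and closing the water taps and checking whether the holding tank fills.
for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b), NOR(a, b), NAND(a, b))
```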
Procedure
Pre-service teachers (guided by an experienced mentor) and in-service teachers
administered the survey to their classes after using one of the learning objects within
the context of a lesson. Students were told the purpose of the study and asked to give
written consent if they wished to volunteer to participate. Teachers and teacher
candidates were instructed to use the learning object as authentically as possible.
Often the learning object was used as another teaching tool within the context of a
unit. After one period of using the learning object (approximately 70 minutes),
students were asked to fill out a survey (see Appendix).
Data sources
The data for this study were gathered using four items based on a seven-point Likert
scale, and two open-ended questions (see Appendix). The questions yielded both
quantitative and qualitative data.
Quantitative data. The first construct consisted of items one to four (Appendix), and
was labelled ‘perceived benefit’ of the learning object. The internal reliability estimate
was 0.87 for this scale. Criterion-related validity for the perceived benefit score was
assessed by correlating the survey score with the qualitative ratings (item 9—see scor-
ing below). The correlation was significant (0.64; p < 0.001).
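For readers who wish to reproduce this type of analysis, a minimal Python sketch of the two estimates is given below. The study does not report which software was used, nor does it name the internal reliability coefficient; the sketch assumes Cronbach's alpha, and the item scores shown are hypothetical. Only the general procedure (an internal consistency estimate plus a Pearson correlation for criterion-related validity) reflects what is described above.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of an (n_students x n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical scores: four 7-point items (perceived benefit) and the -2..+2
# qualitative rating of item nine for the same five students.
likert = np.array([[5, 6, 5, 6],
                   [3, 2, 3, 3],
                   [7, 6, 7, 7],
                   [4, 4, 5, 4],
                   [2, 3, 2, 2]])
item_nine = np.array([1, -1, 2, 0, -2])

alpha = cronbach_alpha(likert)                          # reported as 0.87 in the study
r, p = stats.pearsonr(likert.mean(axis=1), item_nine)   # reported as 0.64, p < 0.001
print(f"alpha = {alpha:.2f}, criterion validity r = {r:.2f} (p = {p:.3f})")
```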
Qualitative data—learning object quality. Item five (Appendix) asked students what
they liked and did not like about the learning object. A total of 757 comments were
written down by 221 students. Student comments were coded based on well-
established principles of instructional design. Thirteen categories are presented with
examples and references in Table 2. In addition, all comments were rated on a five-
point Likert scale (−2 = very negative, −1 = negative, 0 = neutral, 1 = positive, 2 =
very positive).
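The per-category statistics reported in the Results section can be derived from these coded comments in a straightforward way. The sketch below shows one possible aggregation, using hypothetical categories and ratings; it also computes the 'likes only' means discussed later.

```python
from collections import defaultdict

# Hypothetical coded comments: (category, rating on the -2..+2 scale).
coded_comments = [
    ("usefulness", 2), ("usefulness", 1), ("audio", -1), ("graphics", 0),
    ("theme/motivation", 2), ("theme/motivation", -2), ("learner control", 1),
]

by_category = defaultdict(list)
for category, rating in coded_comments:
    by_category[category].append(rating)

for category, ratings in sorted(by_category.items()):
    overall = sum(ratings) / len(ratings)            # mean over all comments
    likes = [r for r in ratings if r > 0]            # 'likes only' analysis
    likes_mean = sum(likes) / len(likes) if likes else float("nan")
    print(f"{category:18s} overall = {overall:+.2f}  likes only = {likes_mean:.2f}")
```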
Table 1. Coding scheme for assessing perceived benefits of learning objects (item six, Appendix)

Timing: when the learning object was introduced in the curriculum. Examples: 'I think I would have benefited more if I used this program while studying the unit'; 'It didn't benefit me because that particular unit was over. It would have helped better when I was first learning the concepts'.

Review of basics/reinforcement: refers to reviewing, reinforcing a concept, or practice. Examples: 'going over it more times is always good for memory'; 'it did help me to review the concept and gave me practise in finding the equation of a line'.

Interactive/hands on: refers to the interactive nature of the process. Example: 'I believe I did, cause I got to do my own pace … I prefer more hands …'
Table 2. Coding scheme for assessing learning object quality (item five, Appendix)

Organization/layout (for example, Madhumita, 1995; Koehler & Lehrer, 1998): refers to the location or overall layout of items on the screen. Examples: 'Sometimes we didn't know where/what to click'; 'I found that they were missing the next button'; 'Easy to see layout'; '[Use a] full screen as opposed to small box'.

Learner control over interface (for example, Akpinar & Hartley, 1996; Kennedy & McNaught, 1997; Bagui, 1998; Hanna et al., 1999): refers to the control of the user over specific features of the learning object, including pace of learning. Examples: '[I liked] that it was step by step and I could go at my own pace'; 'I liked being able to increase and decrease volume, temperature and pressure on my own. It made it easier to learn and understand'; 'It was too brief and it went too fast'.

Animation (for example, Oren, 1990; Gadanidis et al., 2003; Sedig & Liang, 2006): refers specifically to the animation features of the program. Examples: 'You don't need all the animation. It's good to give something good to look at, but sometimes it can hinder progress'; 'I liked the fun animations'; 'Like how it was linked with little movies … demonstrating techniques'; 'I liked the moving spaceship'.

Graphics (for example, Oren, 1990; Gadanidis et al., 2003; Sedig & Liang, 2006): refers to the graphics (non-animated), colours and size of text. Examples: 'The pictures were immature for the age group'; 'I would correct several mistakes in the graphics'; 'The graphics and captions that explained the steps were helpful'; 'Change the colours to be brighter'.

Audio (for example, Oren, 1990; Gadanidis et al., 2003; Sedig & Liang, 2006): refers to audio features. Examples: 'Needed a voice to tell you what to do'; 'Needs sound effects'; 'Unable to hear the character (no sound card on computers)'.

Clear instructions (for example, Jones et al., 1995; Kennedy & McNaught, 1997; MacDonald et al., 2005): refers to the clarity of instructions before feedback or help is given to the user. Examples: 'Some of the instruction were confusing'; 'I … found it helpful running it through first and showing you how to do it'; '[I needed] … more explanations/clearer instructions'.

Help features (for example, Jones et al., 1995): refers to the help features of the program. Example: 'The glossary was helpful'.

Useful/informative (for example, Sedig & Liang, 2006): refers to how useful or informative the learning object was. Examples: 'I like how it helped me learn'; 'I found the simulations to be very useful'; '[The object] has excellent review material and interesting activities'; 'I don't think I learned anything from it though'.

Assessment (Atkins, 1993; Sedighian, 1998; Zammit, 2000; Kramarski & Zeichner, 2001; Wiest, 2001): refers to summative feedback/evaluation given after a major task (as opposed to a single action) is completed. No specific comments were offered by students.

Theme/motivation (Akpinar & Hartley, 1996; Harp & Mayer, 1998): refers to the overall theme and/or motivating aspects of the learning object. Examples: 'Very boring. Confusing. Frustrating'; 'Better than paper or lecture—game is good!'; 'I liked it because I enjoy using computers, and I learn better on them'.
Two raters assessed the first 100 comments made by students and achieved an inter-rater reliability of 0.78. They then met, discussed all discrepancies and attained 100% agreement. Next, the raters assessed the remaining 657 comments with an inter-rater reliability of 0.66. All discrepancies were reviewed and 100% agreement was again reached.
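The paper does not state which inter-rater statistic was used. The sketch below uses Cohen's kappa, a common chance-corrected choice for categorical agreement, applied to hypothetical ratings on the five-point scale; it is offered only as one way such a figure could be obtained.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same comments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings (-2..+2) assigned independently to the same ten comments.
rater_1 = [2, 1, 0, -1, -2, 1, 1, 0, 2, -1]
rater_2 = [2, 1, 0, -1, -1, 1, 0, 0, 2, -1]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```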
Key variables
The key variables used to evaluate learning objects in this study were the following:
● perceived benefit (survey construct of four items; Appendix);
● perceived benefit (content analysis of open-ended question based on post-hoc
structured categories; Table 1);
● quality of learning objects (content analysis of open-ended response question based on post-hoc structured categories; Table 2).
Results
Formative evaluation
Table 3 outlines nine key areas where formative analysis of the learning objects was
completed by pre-service and in-service teachers, students, a media expert or an
external learning object specialist. Eight of these evaluations occurred before the
summative evaluation of the learning object. Numerous revisions were made
throughout the development process. A final focus group offered systematic analysis
of where the learning objects could be improved.
The focus groups also reported a series of programming changes that would help
improve the consistency and quality of the learning objects (see Table 4). It is impor-
tant to note that the changes offered for chemistry, biology, and computer science
learning objects were cosmetic, whereas those noted for mathematics and physics
were substantive, focusing on key learning challenges. Mathematics and physics
comments included recommendations for clearer instructions, which is consistent
with student evaluations where chemistry and biology learning objects were rated
significantly better than mathematics and physics objects (see Table 4).
The positive impact of formative assessment in creating effective learning objects is
reflected by a majority of students reporting that the learning objects were useful (see
detailed results below).
The remaining results reported in this study are summative evaluations collected
from students who actually used the learning objects in a classroom.
Table 3. Timeline for formative analysis used in developing the learning objects

Mock prototyping (September 2004, two hours): Subject teams were introduced to learning objects by creating a paper-based prototype. A member from each team circulated and gave feedback on clarity and design.

Prototyping and usability (November 2004, one and a half days): Subject teams produced a detailed paper prototype of their learning objects. Every two hours, subject teams were asked to circulate around the room and give feedback on other groups' learning object designs. It is not uncommon for 10–20 versions of the paper prototype to emerge over the span of this one-and-a-half-day workshop.

Electronic prototype (December 2004): One team member created a PowerPoint prototype of the learning object. Throughout this process, feedback was solicited from other team members. Edits and modifications were made through an online discussion board where various versions of the prototype were posted and comments from team members were discussed.

Programming learning object (January 2005): A Flash programmer/multimedia designer sat down with each group and reviewed their electronic prototype. He discussed what challenges they would face and different strategies for getting started in Flash.

Team formative evaluation (February 2005, half day): Subject teams evaluated each other's Flash versions of the learning objects. Each team had approximately 15 minutes to go through their learning object, describe interactivity components and highlight sections that each member had done. The entire learning object group provided feedback during and after each presentation.

Pilot test (February 2005, one day): Learning objects were pilot tested on 40 volunteer students.

External formative evaluation (February 2005, half day): A CLOE expert provided one-to-one guidance and feedback for improving the learning objects.

Revision plan, before summative evaluation (February 2005, half day): Subject teams digested student and expert feedback and made a plan for further revisions.

Revision plan, after summative evaluation (April 2005): Subject teams were brought together to evaluate the implementation of the learning objects and plan future revisions (see Table 5).
Table 4. (continued)

Mathematics:
● Make help more obvious. Have a bubble? Bubbles for areas of the screen (console and intro to the screen).
● Press 'Enter' on the keyboard instead of 'next' on screen (when prompted for text).
● Mission 2: students didn't know that they needed to do the calculations on their own using pencil and paper. Instructions need to be more explicit.
● 'Instruction' font size and colour are too small and too dark.
● Options to go back to other missions and, when they get to the end, more clarity as to what or where they will go → more missions or choices.
● Variety of scenarios (missions).
● Display the equation of the line drawn from 'planet' to 'planet'.
Overall, students were moderately positive about the benefits of the learning objects (mean = 4.8, standard deviation = 1.5; scale ranged from 1 to 7). Fourteen per cent of all students (n = 30) disagreed (average score of 3 or less) that the learning object was of benefit, whereas 55% (n = 122) agreed (average score of 5 or more) that it was useful.
Table 5. Mean ratings for reasons given for perceived benefits of learning objects (item nine): n, mean and standard deviation for each reason.
A majority of the comments about learning object quality were either very negative (n = 42, 6%) or negative (n = 392, 52%), whereas only 42% of the comments were positive (n = 258, 34%) or very positive (n = 57, 8%).
Mean ratings (n, mean, standard deviation) were also calculated for each quality category. Several categories were rated significantly lower than animations, interactivity and usefulness (Scheffé post-hoc analysis, p < 0.05).
Categories—likes only. One might assume that categories with mean ratings close to
zero are not particularly important with respect to evaluation. However, it is possible
that a mean of zero could indicate an even split between students who liked and
disliked a specific category. Therefore, it is worth looking at what students liked about
the learning objects, without dislikes, to identify polar ‘hot spots’. A comparison of
means for positive comments confirmed that usefulness (mean = 1.33) was still
important, but that theme and motivation (mean = 1.35), learner control (mean =
1.35) and organization of the layout (mean = 1.20) also received high ratings. These
areas had mean ratings that were close to zero when negative comments were
included (see Table 5). This indicates that students had relatively polarized attitudes toward these features.
Correlation between quality and perceived benefit scores. Theme and motivation (r =
0.45, p < 0.01), the organization of the layout (r = 0.33, p < 0.01), clear instructions
(r = 0.33, p < 0.01) and usefulness (r = 0.33, p < 0.01) were significantly correlated
with the perceived benefit survey (items one to four, Appendix).
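A correlation of this kind can be computed per category once each student's comment ratings are paired with his or her survey score. How that pairing was done is not described in the paper; the sketch below simply assumes one mean comment rating per student and uses hypothetical values.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student values: mean rating (-2..+2) of the comments a student
# made in the theme/motivation category, and that student's mean survey score (1-7).
theme_rating  = np.array([2, 1, -1, 0, 2, -2, 1, 0])
benefit_score = np.array([6.5, 5.0, 3.5, 4.0, 6.0, 2.5, 5.5, 4.5])

r, p = stats.pearsonr(theme_rating, benefit_score)
print(f"theme/motivation vs perceived benefit: r = {r:.2f}, p = {p:.4f}")
```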
Table 7. Multivariate ANOVA for learning object quality, perceived benefits (survey) and perceived benefits (content analysis) for learning object type.

A multivariate ANOVA revealed a significant effect of learning object type on perceived benefits (survey and content analysis, p < 0.001) and learning object quality (p < 0.001). An analysis of contrasts revealed that the chemistry, biology and computer science learning objects were rated significantly higher with respect to perceived benefit and learning object quality (p < 0.05).
While the number of observations was too small to make comparisons among post-
hoc categories for perceived benefit (Table 2), a series of ANOVAs was run compar-
ing mean learning objects ratings of categories used to assess learning object quality.
A majority of the categories revealed no significant effect, although three areas
showed significant differences among learning objects: learner control, clear instruc-
tions, and theme/motivation. The chemistry learning object was rated significantly
higher than the mathematics and biology learning objects with respect to learner
control (p < 0.001). The chemistry and biology learning objects were rated signifi-
cantly higher than the mathematics and physics learning objects with respect to clear
instructions (p < 0.001). Finally, the computer science learning object was rated significantly higher than the mathematics learning object with respect to theme/motivation (p < 0.001). These results are partially compromised because not all learning objects were experienced by students from each grade.
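As an illustration of the analysis described above, the sketch below runs a one-way ANOVA on hypothetical 'clear instructions' ratings grouped by learning object. The study used multivariate ANOVA, contrasts and Scheffé post-hoc tests; the Bonferroni-corrected t-tests shown here are only an accessible stand-in for those follow-up comparisons.

```python
from scipy import stats

# Hypothetical 'clear instructions' ratings (-2..+2) grouped by learning object.
ratings = {
    "chemistry":   [2, 1, 2, 1, 2],
    "biology":     [1, 2, 1, 1, 2],
    "mathematics": [-1, 0, -2, -1, 0],
    "physics":     [0, -1, -1, 0, -2],
}

f_stat, p_value = stats.f_oneway(*ratings.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up comparisons with a simple Bonferroni correction.
pairs = [("chemistry", "mathematics"), ("biology", "physics")]
for a, b in pairs:
    t, p = stats.ttest_ind(ratings[a], ratings[b])
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {min(p * len(pairs), 1.0):.4f}")
```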
Discussion
The purpose of this study was to explore a learning-based approach for evaluating
learning objects. Key issues emphasized were sample population, reliability and valid-
ity, using formal statistics where applicable, incorporating both formative and
summative evaluations, examining specific learning object features based on instructional design research, testing a range of learning objects, and focusing on the
learner, not the technology.
Sample population
The population in this study is unique compared with previous evaluations of learning objects. A large, diverse sample of secondary school students was used, providing a broad basis for isolating and identifying salient qualities of learning objects. Overall, the evaluation tool used in this study provided a reasonable foundation with which to assess the impact of learning objects.
Data analysis
While descriptive analysis proved to be valuable in providing an overview of
perceived benefits and quality of the learning objects tested, inferential statistics
provided useful information on the relationship between perceived benefits and
learning quality, as well as the individual learning qualities deemed to be most
important by students. The combination of descriptive and inferential statistics,
not regularly seen in previous learning object research, is critical to establishing a
clear, reliable understanding of how learning objects can be used as effective
teaching tools.
Many students perceived the learning objects as beneficial because they were fun, interactive, visual and helped them
learn. Students who did not benefit felt that learning objects were presented at the
wrong time (e.g. after they had already learned the concept) or that the instructions
were not clear enough. Interestingly, student feedback, both positive and negative,
emphasized learning. While reusability, accessibility and adaptability are given
heavy emphasis in the learning object literature, when it comes to the end user,
learning features appear to be more important.
Future research
This study was a first step in developing a pedagogically based evaluation model for
evaluating learning objects. While the study produced useful information for educa-
tors, designers and researchers, there are at least five key areas that could be
addressed in future research. First, a set of pre-test and post-test content questions is
important to assess whether any learning actually occurred. Second, a more system-
atic survey requiring students to rate all quality and benefit categories (Tables 1 and
2) would help to provide more comprehensive assessment data. Third, details about
how each learning object is used are necessary to open up a meaningful dialogue on
the kind of instructional wrap that is effective with learning objects. Fourth, use of
think-aloud protocols would be helpful to examine actual learning processes while
learning objects are being used. Finally, a detailed assessment of computer ability,
attitudes, experience and learning styles of students might provide insights about the
impact of individual differences on the use of learning objects.
Summary
Based on a review of the literature, it was argued that a learning-based approach for
evaluating learning objects was needed. Limitations in previous evaluation studies
were addressed by using a large, diverse sample, providing reliability and validity estimates, using formal statistics to strengthen any conclusions made, incorporating both formative and summative evaluations, examining specific learning object features based on principles of instructional design, and testing a range of learning objects.
It was concluded that the learning-based approach produced useful and detailed
information for educators, designers and researchers about the impact of learning
objects in the classroom.
References
Adams, A., Lubega, J., Walmsley, S. & Williams, S. (2004) The effectiveness of assessment learn-
ing objects produced using pair programming, Electronic Journal of e-Learning, 2(2). Available
online at: http://www.ejel.org/volume-2/vol2-issue2/v2-i2-art1-adams.pdf (accessed 28 July
2005).
Agostinho, S., Bennett, S., Lockyer, L. & Harper, B. (2004) Developing a learning object meta-
data application profile based on LOM suitable for the Australian higher education market,
Australasian Journal of Educational Technology, 20(2), 191–208.
Downes, S. (2001) Learning objects: resources for distance education worldwide, International
Review of Research in Open and Distance Learning, 2(1). Available online at: http://
www.irrodl.org/content/v2.1/downes.html (accessed 1 July 2005).
Duval, E., Hodgins, W., Rehak, D. & Robson, R. (2004) Learning objects symposium special
issue guest editorial, Journal of Educational Multimedia and Hypermedia, 13(4), 331–342.
Gadanidis, G., Gadanidis, J. & Schindler, K. (2003) Factors mediating the use of online applets in
the lesson planning of pre-service mathematics teachers, Journal of Computers in Mathematics
and Science Teaching, 22(4), 323–344.
Hanna, L., Risden, K., Czerwinski, M. & Alexander, K. J. (1999) The role of usability in design-
ing children’s computer products, in: A. Druin (Ed.) The design of children’s technology (San
Francisco, Morgan Kaufmann Publishers, Inc.).
Harp, S. F. & Mayer, R. E. (1998) How seductive details do their damage: a theory of cognitive
interest in science learning, Journal of Educational Psychology, 90(3), 414–434.
Jaakkola, T. & Nurmi, S. (2004) Learning objects—a lot of smoke but is there a fire? Academic impact of
using learning objects in different pedagogical settings (Turku, University of Turku). Available online
at: http://users.utu.fi/samnurm/Final_report_on_celebrate_experimental_studies.pdf (accessed
25 July 2005).
Jones, M. G., Farquhar, J. D. & Surry, D. W. (1995) Using metacognitive theories to design user
interfaces for computer-based learning, Educational Technology, 35(4), 12–22.
Kay, R. H. & Knaack, L. (submitted) A systematic evaluation of learning objects for secondary
school students, Journal of Educational Technology Systems.
Kennedy, D. M. & McNaught, C. (1997) Design elements for interactive multimedia, Australian
Journal of Educational Technology, 13(1), 1–22.
Kenny, R. F., Andrews, B. W., Vignola, M. V., Schilz, M. A. & Covert, J. (1999) Towards guide-
lines for the design of interactive multimedia instruction: fostering the reflective decision-
making of pre-service teachers, Journal of Technology and Teacher Education, 7(1), 13–31.
Koehler, M. J. & Lehrer, R. (1998) Designing a hypermedia tool for learning about children’s
mathematical cognition, Journal of Educational Computing Research, 18(2), 123–145.
Koppi, T., Bogle, L. & Bogle, M. (2005) Learning objects, repositories, sharing and reusability,
Open Learning, 20(1), 83–91.
Kramarski, B. & Zeichner, O. (2001) Using technology to enhance mathematical reasoning:
effects of feedback and self-regulation learning, Education Media International, 38(2/3).
Krauss, F. & Ally, M. (2005) A study of the design and evaluation of a learning object and
implications for content development, Interdisciplinary Journal of Knowledge and Learning
Objects, 1, 1–22. Available online at: http://ijklo.org/Volume1/v1p001-022Krauss.pdf
(accessed 4 August 2005).
Larkin, J. H. (1989) What kind of knowledge transfers?, in: L. B. Resnick (Ed.) Knowing, learning,
and instruction (Hillsdale, NJ, Erlbaum Associates), 283–305.
Lave, J. & Wenger, E. (1991) Situated learning: legitimate peripheral participation (New York,
Cambridge University Press).
Littlejohn, A. (2003) Issues in reusing online resources, Special Issue on Reusing Online
Resources, Journal of Interactive Media in Education, 1. Available online at: www-jime.
open.ac.uk/2003/1/ (accessed 1 July 2005).
MacDonald, C. J., Stodel, E., Thompson, T. L., Muirhead, B., Hinton, C., Carson, B., et al.
(2005) Addressing the eLearning contradiction: a collaborative approach for developing a
conceptual framework learning object, Interdisciplinary Journal of Knowledge and Learning
Objects, 1, 79–98. Available online at: http://ijklo.org/Volume1/v1p079-098McDonald.pdf
(accessed 2 August 2005).
Madhumita & Kumar, K. L. (1995) Twenty-one guidelines for effective instructional design, Educational Technology, 35(3), 58–61.
Metros, S. E. (2005) Visualizing knowledge in new educational environments: a course on learning
objects, Open Learning, 20(1), 93–102.
Muzio, J. A., Heins, T. & Mundell, R. (2002) Experiences with reusable e-learning objects: from theory to practice, Internet and Higher Education, 5(1), 21–34.
Nesbit, J., Belfer, K. & Vargo, J. (2002) A convergent participation model for evaluation of learn-
ing objects, Canadian Journal of Learning and Technology, 28(3), 105–120. Available online at:
http://www.cjlt.ca/content/vol28.3/nesbit_etal.html (accessed 1 July 2005).
Oren, T. (1990) Cognitive load in hypermedia: designing for the exploratory learner, in: S.
Ambron & K. Hooper (Eds) Learning with interactive multimedia (Washington, DC, Microsoft
Press), 126–136.
Parrish, P. E. (2004) The trouble with learning objects, Educational Technology Research &
Development, 52(1), 49–67.
Richards, G. (2002) Editorial: the challenges of the learning object paradigm, Canadian Journal
of Learning and Technology, 28(3), 3–10. Available online at: http://www.cjlt.ca/content/
vol28.3/editorial.html (accessed 1 July 2005).
Savery, J. R. & Duffy, T. M. (1995) Problem-based learning: an instructional model and its
constructivist framework, Educational Technology, 35(5), 31–34.
Sedig, K. & Liang, H. (2006) Interactivity of visual mathematical representations: factors affecting
learning and cognitive processes, Journal of Interactive Learning Research, 17(2), 179–212.
Sedighian, K. (1998) Interface style, flow, and reflective cognition: issues in designing interactive
multimedia mathematics learning environments for children. Unpublished Doctor of Philosophy
dissertation, University of British Columbia, Vancouver.
Siqueira, S. W. M., Melo, R. N. & Braz, M. H. L. B. (2004) Increasing the semantics of learning
objects, International Journal of Computer Processing of Oriental Languages, 17(1), 27–39.
Sternberg, R. J. (1989) Domain-generality versus domain-specificity: the life and impending death
of a false dichotomy, Merrill-Palmer Quarterly, 35(1), 115–130.
Van Zele, E., Vandaele, P., Botteldooren, D. & Lenaerts, J. (2003) Implementation and evaluation
of a course concept based on reusable learning objects, Journal of Educational Computing and
Research, 28(4), 355–372.
Wiest, L. R. (2001) The role of computers in mathematics teaching and learning, Computers in the
Schools, 17(1/2), 41–55.
Wiley, D. A. (2000) Connecting learning objects to instructional design theory: a definition, a
metaphor, and a taxonomy, in: D. A. Wiley (Ed.) The instructional use of learning objects:
online version. Available online at: http://reusability.org/read/chapters/wiley.doc (accessed 1
July 2005).
Wiley, D., Wayers, S., Dawson, D., Lambert, B., Barclay, M. & Wade, D. (2004) Overcoming
the limitations of learning objects, Journal of Educational Multimedia and Hypermedia, 13(4),
507–521.
Williams, D. D. (2000) Evaluation of learning objects and instruction using learning objects, in:
D. A. Wiley (Ed.) The instructional use of learning objects: online version. Available online at:
http://reusability.org/read/chapters/williams.doc (accessed 1 July 2005).
Zammit, K. (2000) Computer icons: a picture says a thousand words. Or does it?, Journal of
Educational Computing Research, 23(2), 217–231.