Learning and Instruction 21 (2011) 232–237
www.elsevier.com/locate/learninstruc
Commentary
Evaluating text-based information on the World Wide Web
Iwan G.J.H. Wopereis a,*, Jeroen J.G. van Merriënboer a,b
a Centre for Learning Sciences and Technologies, Open University of the Netherlands, PO Box 2960, 6401 DL Heerlen, The Netherlands
b Department of Educational Development and Research and School of Health Professions Education, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
* Corresponding author. E-mail address: iwan.wopereis@ou.nl (I.G.J.H. Wopereis).
Abstract
This special section contributes to an inclusive cognitive model of information problem solving (IPS) activity, briefly touches on IPS learning, and brings to notice methodological pitfalls related to uncovering IPS processes. Instead of focusing on the IPS process as a whole, the contributing articles turn their attention to what is regarded as the heart of IPS, namely the evaluation of information. In this commentary we reflect on theoretical, methodological, and instructional design issues. Results are commented upon and future research is addressed. A vignette is presented to illustrate the aforementioned issues.
© 2010 Elsevier Ltd. All rights reserved.
Keywords: Information problem solving; Evaluation; Text; World Wide Web
1. Introduction
Suppose you live in a country where swine influenza is
spreading fast. And suppose the authorities just decided to
recommend immediate vaccination for children aged six
months through five years. You have a child six months old
who is in perfectly good health. Do you take the authorities’
advice and decide to vaccinate, or do you disregard this
advice and trust information on negative side effects and
subsequent health risks? In order to make a balanced decision you would probably try to find multiple reliable information sources on swine flu vaccination and health
risks. And, provided you have an Internet connection, you
would most likely search the World Wide Web to find this set
of sources (Lemire, Paré, Sicotte, & Harvey, 2008). You
would probably open an Internet search engine and perform
a keyword search using keywords like ‘swine flu’ and
‘vaccination’. Fig. 1 is an example of a search engine results
page (SERP) that might be presented to you (retrieved
December 9, 2009).
You would presumably iteratively evaluate information
presented by the SERP, select sources from the SERP, and
evaluate the information presented by the sources, until you
think you have enough information to make your decision.
Prior knowledge regarding the topic (e.g., vaccination and flu)
and Web-based publishing (e.g., everyone with an Internet
connection can provide information on the Web) would most
likely affect your selection of information. Further, your beliefs about how medical knowledge comes about could be decisive when you select sources. Most likely your Web search will provide you with information that can help you make a decision. However,
it is also possible that a proper decision is beyond reach
because you are ‘‘forced’’ to end the Web-based search due to
time constraints or frustration as a result of getting ‘‘lost’’ in
cyberspace and/or you are not able to find or infer a univocal
answer.
The vignette presented above covers a process frequently
referred to as information problem solving (IPS; Brand-Gruwel,
Wopereis, & Vermetten, 2005; Eisenberg & Berkowitz, 1990;
Moore, 1995). This process includes activities such as searching, scanning, processing, organizing, and (if necessary) presenting information, activities which are typically performed
in an iterative fashion to fulfill a (pre-)defined information need.
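To make the iterative character of this process concrete, the following minimal sketch caricatures the evaluation loop described in the vignette. It is an illustration only: all data, function names, and thresholds are invented for this commentary and are not taken from the cited IPS models.

```python
# Illustrative sketch only: data, criteria, and thresholds are invented
# and do not come from the IPS models cited in this commentary.

def judge_serp_entry(entry):
    """First evaluation step: judge a search result from its surface
    information (title, URL, excerpt) before opening it."""
    trusted_domain = any(d in entry["url"] for d in (".gov", "who.int", ".edu"))
    on_topic = "vaccination" in entry["excerpt"].lower()
    return trusted_domain and on_topic

def judge_source(entry):
    """Second evaluation step: judge the opened source itself, here
    reduced to a single hypothetical credibility rating (1-5)."""
    return entry.get("credibility", 0) >= 3

def solve_information_problem(serp, needed=2):
    """Iteratively evaluate SERP entries and sources until enough
    acceptable sources have been collected (or the SERP is exhausted)."""
    selected = []
    for entry in serp:
        if judge_serp_entry(entry) and judge_source(entry):
            selected.append(entry)
        if len(selected) >= needed:  # searcher judges the collected set sufficient
            break
    return selected

serp = [
    {"url": "http://my-flu-blog.example", "excerpt": "My swine flu story", "credibility": 1},
    {"url": "https://www.who.int/flu", "excerpt": "Swine flu vaccination advice", "credibility": 5},
    {"url": "https://health.example.gov/h1n1", "excerpt": "Vaccination of young children", "credibility": 4},
]
print(solve_information_problem(serp))
```

In reality this loop is of course not a fixed procedure: time constraints, frustration, and the searcher's prior knowledge determine when and why it stops, as the vignette illustrates.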
Fig. 1. Search engine results page for keyword search.
With the advent of the Internet in education (Hill & Hannafin,
1997), IPS gained special attention in educational research.
This resulted in updated (descriptive) IPS models (Brand-Gruwel, Wopereis, & Walraven, 2009; Hill, 1999) and, more
interestingly, better understanding of effective instructional
support for learning IPS skills (Brand-Gruwel & Gerjets, 2008;
Graesser et al., 2007; Stadtler & Bromme, 2007). However, as
Lazonder and Rouet (2008) argue, some aspects of the IPS
(learning) process, like metacognitive mediation and collaborative search, are relatively underexposed in (educational) IPS
research. According to Rouet, Ros, Goumi, Macedo-Rouet, and
Dinet (2011), an even more serious concern is the absence of
a comprehensive model of the cognitive processes involved in
IPS activity. This concern is by no means out of the ordinary, since the Internet (and corresponding usability research) is relatively young and, above all, evolving at a great pace (Leiner et al., 2009).
The present special section contributes to an inclusive
cognitive model of IPS activity, briefly touches on IPS learning,
and brings to notice methodological pitfalls related to
uncovering IPS processes. Instead of focusing on the IPS
process as a whole, the contributing articles turn their attention
to what is regarded as the heart of IPS, namely the evaluation of
information (Lazonder & Rouet, 2008). Rouet et al. (2011)
scrutinized students’ source selection strategies in simulated
SERPs. Bråten, Strømsø, and Salmerón (2011) examined how
readers judge the trustworthiness of authentic source materials
on a complex topic (i.e., climate change). Kienhues, Stadtler,
and Bromme (2011) investigated whether and how conflicting and consistent Web-based information influences
epistemic beliefs and decision making (cf. the aforementioned
vignette). Finally, Gerjets, Kammerer, and Werner (2011)
researched methods used to uncover evaluation processes
during IPS. Table 1 presents an overview of the four papers in
this special section. In this commentary we will reflect on
theoretical, methodological, and instructional design issues.
Before reflecting, we will analyze the contributions in light of
the central topic of this special section (evaluation) and the
dimensions of IPS activity (cf. Lazonder & Rouet, 2008).
2. Evaluating text-based information on the web
Gerjets et al. (2011) distinguish three different types of
evaluation processes when performing a Web-based IPS task,
that is, the evaluation of (a) SERPs, (b) Web pages, and (c)
document collections. This classification matches the three
IPS evaluation skills described by Brand-Gruwel et al. (2005),
namely "judging search results", "judging scanned information", and "judging processed information". Interestingly, the
four contributing studies all address different (combinations of) evaluation types (see Table 1).

Table 1
A selection of focal points of the contributing articles.

Study | Participants | Research focus | Task focus | Criteria focus
Rouet et al. (2011) | Primary and secondary school students (Experiment 1: N = 174; Experiment 2: N = 88) | Menu selection strategies | Evaluation of SERPs | Relevance, i.e., surface versus deep cues
Bråten et al. (2011) | University students (N = 128) | Judgment of trustworthiness of information sources | Evaluation of information sources | Trustworthiness
Kienhues et al. (2011) | University students (N = 100) | Effect of conflicting and consistent information on epistemic beliefs and decision making | Evaluation of information sources | Topic-specific and discipline-related epistemic beliefs
Gerjets et al. (2011) | University students (N = 30) | Multi-method measurement of evaluation criteria | Evaluation of SERPs and information sources | Relevance, i.e., topic-related and quality-related criteria

The experiments of Rouet
et al. (2011) focus on evaluating a (simulated) SERP. Gerjets
et al. (2011) zoom in on the evaluation of (simulated)
SERPs and Web pages, and also touch on the evaluation of
document collections. Kienhues et al. (2011) focus on the
evaluation of (authentic) Web pages and document collections.
Finally, Bråten et al. (2011) zoom in on the evaluation of
different (authentic) document types, which could have been
published on the Internet (but actually were presented off-line
to the students during the experiment). In sum, all relevant
types of evaluation processes are covered in this special
section.
Besides different types of evaluation processes, the contributing articles address different evaluation frameworks to
describe and measure the evaluation behavior of the participants
in their studies. Bråten et al. (2011) focus on evaluating
(judging) the trustworthiness of texts when reading multiple
documents on a particular issue. To measure the trustworthiness
of a text, the participating students rated whether they were
influenced in their judgment by (a) the author of the text, (b) the
text publisher, (c) the type of text, (d) the content of the text, (e)
their own opinion about the topic at issue, and (f) publishing date
of the text. Gerjets et al. (2011) derived their coding scheme for analyzing concurrent verbal protocols on evaluation behavior from information science research. This coding scheme consisted of two topic-related evaluation criteria (topicality and scope) and three quality-related criteria (credibility, up-to-dateness, and design). Rouet et al. (2011) researched students'
use of surface cues (typographical cues, like underlined and/or
capitalized keywords) and deep cues (semantic information in
title, URL address, and excerpt) for selecting sources from
SERPs. Although evaluation of information was central to the
experimental tasks of Kienhues et al.'s (2011) study, this was not
measured in depth, since the authors were interested in the effect
of consulting a set of conflicting information sources versus a set
of consistent information sources on epistemic beliefs and
decision making; it was sufficient for them to distinguish
information consistency. However, due to the "time-on-task" constraint in the experimental task (30 min for consulting 15 sources, that is, 120 s for scanning each source), it is likely that participants consulted only a selection of the available sources. This
might, for instance, have affected the results of the decision-making task, a possibility which is recognized by Kienhues et al. (2011). Therefore, it would be of interest for future research to
analyze task performance in depth to elicit source selection.
Cued-retrospective reporting could be an option (Van Gog, Paas,
Van Merriënboer, & Witte, 2005). In sum, the studies described
in this special section used different frameworks for describing
and measuring evaluation behavior. This is partly due to the
focus on different types of evaluation processes. Nevertheless,
a unified framework for assessing Web-based information is
important for describing all facets of information evaluation
(see, e.g., Hilligoss & Rieh, 2008). This framework should also
address task type or task complexity. The vignette presented in
the introduction of the present commentary, for instance, presents a problem which has to be tackled within time limits. Time
constraints, especially apparent in emergency management
tasks, will most likely influence evaluation behavior and should,
therefore, be regarded as a task complexity factor.
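As a small illustration of how these study-specific frameworks differ, the criteria mentioned above can be summarised as data. The grouping below is a hypothetical paraphrase made for this commentary; the wording may differ from the instruments actually used in the cited studies.

```python
# Hypothetical summary of the evaluation criteria discussed above; the
# grouping and wording are paraphrased for illustration only.

EVALUATION_FRAMEWORKS = {
    "Braten et al. (2011)": {        # trustworthiness judgments of documents
        "trustworthiness": ["author", "publisher", "text type", "content",
                            "own opinion", "publishing date"],
    },
    "Gerjets et al. (2011)": {       # coding scheme for concurrent verbal protocols
        "topic-related": ["topicality", "scope"],
        "quality-related": ["credibility", "up-to-dateness", "design"],
    },
    "Rouet et al. (2011)": {         # cue use when selecting items from SERPs
        "surface cues": ["underlined keywords", "capitalized keywords"],
        "deep cues": ["title semantics", "URL", "excerpt"],
    },
}

# A unified framework (cf. Hilligoss & Rieh, 2008) would map such
# study-specific criteria onto one shared set of evaluation facets.
for study, groups in EVALUATION_FRAMEWORKS.items():
    criteria = sorted(c for group in groups.values() for c in group)
    print(study, "->", criteria)
```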
3. Cognitive dimensions of IPS activity
According to Lazonder and Rouet (2008), a description of
IPS in terms of cognitive dimensions helps to build up
a comprehensive cognitive IPS model. They distinguish three
dimensions: (a) individual variables like prior knowledge and
personal epistemology, (b) contextual variables such as task
conditions (i.e., time constraints; individual versus collaborative, etc.), and (c) resource variables like amount and type of
information available. In the present commentary we will
elaborate on the first dimension. Individual variables include
the individual’s prior knowledge, general skills, and personal
epistemology. An extension of the aforementioned vignette
illustrates that these variables affect the quality of the IPS
process. Suppose the parent mentioned in the vignette is
a general practitioner. Prior knowledge on spreading diseases,
vaccination programs, and accompanying health risks will
probably help him/her to select (additional) information from
the Web to validate initial ideas on what to decide. In case the
parent is a sculptor, reliable sources with up-to-date information have to be found to compensate for the lack of a medical
knowledge base. In case a 5th-grade student is presented with
this vaccination problem, it is most likely that his/her decision
to vaccinate is based on information retrieved from the first
comprehensible document selected in the SERP. Naïve
knowledge regarding the trustworthiness of Web-based
information ("everything on the Web is true") and an absolutist stance toward knowledge will most likely determine his/her
document and information selection.
As Bråten et al. (2011) state, there is ample evidence that
experts outperform novices on IPS tasks. This is because
experts possess a large knowledge base and advanced general
skills (like reading skills) that help them to free up working
memory capacity for the execution of all sorts of IPS
processes (including the evaluation of SERPs and sources).
Further, since experts normally have a sophisticated view of knowledge and knowing, this will help them to assess information more accurately. Bråten et al. (2011) found that novices in a certain domain who are relatively knowledgeable about a domain-specific topic also evaluate information better.
The relatively knowledgeable novices mistrusted less trustworthy sources more frequently and were less influenced by
superficial text features than the "unknowledgeable" novices. Or, as the researchers put it eloquently, "the knowledge base of the readers may actually function as a bulwark against seduction."
Basic skills like reading also affect IPS in general and evaluation in particular. Rouet et al. (2011), for instance, found that reading skills are a prerequisite for the acquisition of
effective evaluation strategies (i.e., selecting sources based on
reading semantic instead of superficial cues in SERPs; see also
Mason, Boldrin, & Ariasi, 2010, in press). Another interesting
personal variable that affects IPS activity is personal epistemology (Hofer, 2001). Kienhues et al. (2011) focus on
epistemic beliefs, a constituent of personal epistemology.
Recent research on epistemic beliefs and IPS shows that
advanced beliefs about the nature of knowledge (i.e., certainty
and simplicity of knowledge) and the process of knowing (i.e.,
source of knowledge and justification for knowing) result in
more efficient and effective IPS activity (Hofer, 2004; Mason
et al., 2010, in press). Kienhues et al. (2011) found evidence that the relationship between epistemic beliefs and Web-based information search also runs in the other direction. Participants
in their study who dealt with conflicting information
(in multiple Web-based documents) showed evidence of
(more) advanced (topic-related) epistemic beliefs. These are
interesting results since they support, to a certain extent,
arguments for the use of the Internet as an epistemological tool for
learning (cf. Tsai, 2004).
4. Methodological issues
The (quasi-)experimental designs described in the present section show rigor. However, we would like to address two methodological issues which, in our view, jeopardize the findings of the studies, namely measurement of data and
authenticity of experimental tasks.
Both Bråten et al. (2011) and Kienhues et al. (2011) used "paper-and-pencil" posttests to measure the dependent variables. Further, Kienhues et al. (2011) and Rouet et al. (2011) analyzed task processing "products" (e.g., decisions or selections). The
focus on indirect measurement of evaluation behavior can be
criticized. Most of the aforementioned researchers acknowledge
that it would be good to capture the evaluation process in order to
elicit explanations for information and source selection.
Thinking-aloud, trace, and eye-tracking methods are mentioned
explicitly. Gerjets et al. (2011) concurrently used thinking-aloud
and eye-tracking as methods to capture information evaluation
processes. Moreover, they compared two thinking-aloud
versions, that is, a spontaneous version where individuals were
just asked to perform a task and to think aloud, and an instructed
version where individuals received instructions about the type of
task (frequently used in information science studies). Gerjets et al.
(2011) claim that the instructed version influences student
behavior. Therefore, the results inferred from such studies should be interpreted with this in mind. Gerjets et al. (2011) even question the standard thinking-aloud method, because it is "still not very close to a natural search situation." For capturing the evaluation
processes of searchers, it would probably be wise to triangulate
data and combine methods. An interesting suggestion put forward
by Gerjets et al. (2011) is the cued-retrospective reporting method
(see Van Gog et al., 2005).
Methods that elicit the evaluation process are time-consuming, especially when one wants to capture the evaluation
processes of authentic (complex) IPS tasks. Complex IPS tasks
that include solving ill-structured problems take time. In case
of the vignette, a non-knowledgeable parent (in medicine)
would probably take several hours to search, scan, and
examine documents. Not imitating a true-to-life task situation
in one's research method would probably lead to biased
results. Gerjets et al. (2011) acknowledged this pitfall. The
participants in their study only had 20 min to evaluate a SERP
and thirty documents. Time pressure most likely influenced
evaluation behavior. As mentioned earlier in this commentary,
the same authenticity problem came to light in Kienhues
et al.'s (2011) study. Bråten et al. (2011) and Rouet et al. (2011) also note some shortcomings regarding authenticity
or fidelity in their research. The simulated SERPs used in
Rouet et al.’s (2011) study were not prototypical and the way
Bråten et al. (2011) presented the documents to the students
does not match reality. As a future direction in IPS research we recommend aiming for research that addresses the problem of
ecological validity more seriously.
5. Instructional support
Although the studies in this special section did not
explicitly focus on instructional support for learning IPS (i.e.,
the evaluation skills in particular), some remarks on evaluation
skill acquisition were put forward by the researchers. These
remarks will be commented upon.
Probably most of us have acquired IPS evaluation skills "on the job". We learn to evaluate digital information by
performing search tasks in educational settings, at work, and
during leisure time. The success and
failure of these endeavors shape our knowledge and skills
regarding the evaluation of SERPs, sources, and information
within sources. This discovery-based "learning-by-doing"
approach might be complemented with goal-driven instructional activities (e.g., an on-line IPS course) or just-in-time
instructional support (e.g., consulting a colleague). As the
articles in the present special section show, advancing topic
knowledge, procedural knowledge, and personal epistemology
will also influence the effectiveness of information evaluation
(and the search in general). Explicit support facilitates evaluation skill acquisition. Gerjets et al. (2011) found that
encouraging learners to engage in quality-related evaluation
processes helps learners to improve their Web search performance. Rouet et al. (2011) found that pre-search elaboration of
content can have a positive effect on students’ IPS activity.
Although this effect was only significant for good readers,
performing a preparatory task might be a good instructional
strategy (cf. activating prior knowledge in the initial stages of
IPS). More extensive information on instructional support for
learning evaluation skills is provided by Bråten et al. (2011).
They point to special educational tools (Stadtler & Bromme,
2007) and units for learning evaluation skills (Graesser
et al., 2007).
In educational settings where IPS is an integral part of the
curriculum (e.g., resource-based learning curricula, or
problem-based learning curricula) the issue of task
complexity should be borne in mind. In the beginning of
a curriculum learning tasks should be authentic, but relatively
simple. At the end of a curriculum learning tasks should be
authentic, but relatively complex (for a comprehensive view
on instructional design for complex learning, see Van
Merriënboer & Kirschner, 2007). The IPS constituent of
learning tasks should also follow this simple-to-complex
sequence. When students in the beginning of a curriculum
are asked to search for information to solve a problem, task
complexity could be reduced by offering a predefined set of
Web-based resources (cf. Segers & Verhoeven, 2009). More
advanced learning tasks at the end of the curriculum could
include a full Web-based search with time constraints. For
instance, if the vaccination problem presented in the vignette were a learning task in a basic module in medical education (for prospective general practitioners), students could
be offered a predefined set of sources with conflicting
information.
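To make this simple-to-complex sequencing concrete, the sketch below encodes two such learning tasks as data. The fields and values are hypothetical illustrations of the design principle only; they are not prescribed by the cited instructional design literature.

```python
# Hypothetical illustration of simple-to-complex sequencing of IPS learning
# tasks; fields and values are invented for this sketch.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IPSLearningTask:
    description: str
    predefined_sources: List[str] = field(default_factory=list)  # empty = open Web search
    time_limit_minutes: Optional[int] = None                     # None = no time pressure

curriculum = [
    # Early in the curriculum: authentic but simple (pre-screened sources, no time pressure).
    IPSLearningTask(
        description="Advise a parent on swine flu vaccination",
        predefined_sources=["who.int/flu", "cdc.gov/h1n1", "patient-forum.example"],
    ),
    # Late in the curriculum: authentic and complex (open Web search under time pressure).
    IPSLearningTask(
        description="Advise a parent on swine flu vaccination",
        time_limit_minutes=60,
    ),
]

for task in curriculum:
    mode = "predefined source set" if task.predefined_sources else "open Web search"
    print(f"{task.description}: {mode}, time limit = {task.time_limit_minutes}")
```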
6. Conclusion
The four studies in the present special section contribute to
an all-inclusive cognitive model of IPS activity. Previously
underexposed issues regarding evaluation, personal epistemology, and research methodology were addressed in depth. The
participating students in the special section’s studies evaluated
text-based information to solve their information problems.
Since information on the Web is mainly text-based
(or document-based; cf. Rouet, 2009), this focus is justifiable. However, it should be borne in mind that audio, video, and multimedia sources are gaining ground on the Web and, as
a result, are increasingly used for (personal) knowledge
construction (Greenhow, Robelia, & Hughes, 2009). Future
research should consider the evolution of the Web towards
a predominantly multimedia-based information source.
References

Brand-Gruwel, S., & Gerjets, P. (Eds.). (2008). Instructional support for enhancing students' information problem solving ability. Computers in Human Behavior, 24(3). [Special issue].
Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: analysis of a complex cognitive skill. Computers in Human Behavior, 21, 487–508.
Brand-Gruwel, S., Wopereis, I., & Walraven, A. (2009). A descriptive model of information problem solving while using Internet. Computers & Education, 53, 1207–1217.
Bråten, I., Strømsø, H. I., & Salmerón, L. (2011). Trust and mistrust when students read multiple information sources about climate change. Learning and Instruction, 21(2), 180–192.
Eisenberg, M. B., & Berkowitz, R. E. (1990). Information problem-solving: The big six skills approach to library and information skills instruction. Norwood, NJ: Ablex.
Gerjets, P., Kammerer, Y., & Werner, B. (2011). Measuring spontaneous and instructed evaluation processes during Web search: integrating concurrent thinking-aloud protocols and eye-tracking data. Learning and Instruction, 21(2), 220–231.
Graesser, A. C., Wiley, J., Goldman, S. R., O'Reilly, T., Jeon, M., & McDaniel, B. (2007). SEEK Web tutor: fostering a critical stance while exploring the causes of volcanic eruption. Metacognition and Learning, 2, 89–105.
Greenhow, C., Robelia, B., & Hughes, J. E. (2009). Learning, teaching, and scholarship in a digital age. Web 2.0 and classroom research: what path should we take now? Educational Researcher, 38, 246–259.
Hill, J. R. (1999). A conceptual framework for understanding information seeking in open-ended information services. Educational Technology Research and Development, 47(1), 5–27.
Hill, J. R., & Hannafin, M. J. (1997). Cognitive strategies and learning from the World Wide Web. Educational Technology Research and Development, 45(4), 37–64.
Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: construct, heuristics, and interaction in context. Information Processing and Management, 44, 1467–1484.
Hofer, B. K. (2001). Personal epistemology research: implications for learning and teaching. Educational Psychology Review, 13, 353–383.
Hofer, B. K. (2004). Epistemological understanding as a metacognitive process: thinking aloud during online searching. Educational Psychologist, 39, 43–55.
Kienhues, D., Stadtler, M., & Bromme, R. (2011). Dealing with conflicting or consistent medical information on the Web: when expert information breeds laypersons' doubts about experts. Learning and Instruction, 21(2), 193–204.
Lazonder, A. W., & Rouet, J.-F. (2008). Information problem solving instruction: some cognitive and metacognitive issues. Computers in Human Behavior, 24, 753–765.
Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., Lynch, D. C., et al. (2009). A brief history of the Internet. ACM SIGCOMM Computer Communication Review, 39(5), 22–31.
Lemire, M., Paré, G., Sicotte, C., & Harvey, C. (2008). Determinants of Internet use as a preferred source of information on personal health. International Journal of Medical Informatics, 77, 723–734.
Mason, L., Boldrin, A., & Ariasi, N. (2010). Epistemic metacognition in context: evaluating and learning online information. Metacognition and Learning, 5, 67–90.
Mason, L., Boldrin, A., & Ariasi, N. (in press). Searching the Web to learn about a controversial topic: are students epistemically active? Instructional Science.
Moore, P. (1995). Information problem solving: a wider view of library skills. Contemporary Educational Psychology, 20, 1–31.
Rouet, J.-F. (2009). Managing cognitive load during document-based learning. Learning and Instruction, 19, 445–450.
Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of surface and deep cues on grade school students' assessment of relevance in Web menus. Learning and Instruction, 21(2), 205–219.
Segers, E., & Verhoeven, L. (2009). Learning in a sheltered Internet environment: the use of WebQuests. Learning and Instruction, 19, 423–432.
Stadtler, M., & Bromme, R. (2007). Dealing with multiple documents on the WWW: the role of metacognition in the formation of documents models. Computer Supported Collaborative Learning, 2, 191–210.
Tsai, C.-C. (2004). Beyond cognitive and metacognitive tools: the use of the Internet as an 'epistemological' tool for instruction. British Journal of Educational Technology, 35, 525–536.
Van Gog, T., Paas, F., Van Merriënboer, J. J. G., & Witte, P. (2005). Uncovering the problem solving process: cued retrospective reporting versus concurrent and retrospective reporting. Journal of Experimental Psychology: Applied, 11, 237–244.
Van Merriënboer, J. J. G., & Kirschner, P. A. (2007). Ten steps to complex learning. Mahwah, NJ: Erlbaum.