Writing-Skills Development in the Health Professions
Richard E. Rawson
Department of Biomedical Sciences
College of Veterinary Medicine
Cornell University
Ithaca, New York, USA
Kathleen M. Quinlan
Office of Educational Development
College of Veterinary Medicine
Cornell University
Ithaca, New York, USA
Barry J. Cooper
Department of Biomedical Sciences
College of Veterinary Medicine
Cornell University
Ithaca, New York, USA
Clare Fewtrell
Department of Molecular Medicine
College of Veterinary Medicine
Cornell University
Ithaca, New York, USA
Jennifer R. Matlow
Department of Biomedical Sciences
College of Veterinary Medicine
Cornell University
Ithaca, New York, USA
Background: Studies have found that students in the medical professions often lack
the writing skills required during their education and career. One contributing factor
to this deficiency is that writing tends to be discipline specific rather than a general skill acquired in undergraduate education.
Purpose: To determine the extent to which a rigorous writing exercise impacted the
quality of students’ medical writing based on a specified rubric.
Method: In the context of a basic science course, we developed 6 weekly writing exercises called Question of the Week, along with a rubric for scoring students' work. The
rubric evaluated 6 specific aspects of students’ writing including
Comprehensiveness/Thoroughness, Accuracy, Conciseness, Logical Organization,
Justification of Assertions, and Use of Appropriate Terminology.
Results: Except for Justification of Assertions and Accuracy, which did not change,
scores for all categories improved between Weeks 1 and 2. Use of Appropriate
Terminology was the only category for which scores increased after Week 2.
Conclusion: The clearest indication of writing development came from students' augmented ability to use medical terminology in appropriate ways. This is an important
observation, given that each Question of the Week covered a separate body system,
characterized by distinctly different terms and jargon. We concluded that students
need much more practice to attain the level of proficiency outlined by our rubric.
Teaching and Learning in Medicine, 17(3), 233–239
Copyright © 2005 by Lawrence Erlbaum Associates, Inc.
We acknowledge the assistance of Dr. Kathleen D. MacLeod for helping with the scoring of Question of the Week assignments and Dr. Yrjö Gröhn for statistical consultation.
Correspondence may be sent to Richard E. Rawson, DVM, PhD, Department of Biomedical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, NY 14853, USA. E-mail: rer1@cornell.edu
Clear, concise, accurate writing is required for medical record entries, referral letters, discharge statements, and other activities in which medical professionals participate.1 In health professions education,
writing has been used as a tool to promote a variety of
learning goals. For example, writing in health sciences
education has focused on promoting students’ reflections on clinical experiences and self-awareness of attitudes, on students’ critical appraisal skills including
critiques of published research or intervention plans,
and on writing for publication.2–11 Yet, a core skill required of health professionals and the focus of this
study is the ability to write about scientific content
concisely and accurately for their colleagues.
It is well established in undergraduate education
that many writing skills are domain specific or discipline specific and that all disciplines need to teach
writing in the context of learning the discipline.12,13
Kovac and Sherwood14 described the incorporation of
writing assignments into an undergraduate general
chemistry course in which writing served as a learning
tool and helped students develop specific scientific and
technical writing skills.
The same principle—that students need to learn to
communicate in ways that are specific to a profession
or discipline—can be applied to graduate health professions education. Students who wrote well in their
undergraduate classes may produce inadequate writing in the new domain of medicine. Indeed, we noted
writing shortcomings over several years of grading
students’ exams in a problem-based learning (PBL)
course at the Cornell College of Veterinary Medicine,
consistent with others’ observations.15 The third
course in our professional curriculum, Function &
Dysfunction, provides the basic science background
for internal medicine. To maintain consistency with
the way students learn in a PBL setting, examinations
in this course are case based, with questions that require students to explain and justify their answers in
short essays. We have found that students' answers tend to be verbose, poorly organized, and imprecise in their use of terminology, all of which makes it difficult for
the faculty to judge their adequacy. We regarded
these deficiencies as serious because writing is an important means by which medical professionals communicate with colleagues about technical, clinical,
and scientific matters.
We also suspected that poor writing might be related to poor content understanding. Obviously, students do not possess the knowledge base of experts or
the complex interrelationships and organization that are
characteristic of expert knowledge. Thus, medical education involves enhancing both the content and organization of students’ knowledge base.
Students’ writing of pathophysiological explanations of medical problems is integrally related to their
understanding of the subject matter and their clinical
reasoning. Hmelo16,17 has assessed the quality of students’ problem solving on the basis of the accuracy,
coherence, reasoning strategies, and use of science
concepts as evidenced in case-based written causal explanations of medical problems. Hmelo18 found that
these criteria had strong face validity with physicians.
Students whose writing does not display those characteristics may either lack the knowledge itself or hold content knowledge that is not organized in a way that matches experts' knowledge networks. Causal chains
of reasoning are closely tied to the way that knowledge
is organized. Didactic components of a course can deliver content, but writing affords students the opportunity to effectively organize that content.14 Although
our course provides ample opportunity for verbal elaboration of students’ understanding of course content,
students do not formally practice expressing their understanding of pathophysiological concepts in writing.
Unless the tutor is particularly rigorous, oral explanations during tutorial sessions tend to be rather untidy in
the interest of simply generating discussion. Furthermore, the paucity of lectures limits the exposure of students to teachers who can model the use of accurate
medical terminology and concepts.
Therefore, we developed a weekly writing exercise
called Question of the Week, along with a scoring rubric,
with the aim of improving the quality of students’
pathophysiological explanations. As this was a new assessment approach, we sought to evaluate its effect on
students and their learning. Specifically, was there an
improvement in students’ written essays over time, and
on what dimensions did improvement take place? We
hypothesized, based on the case-specificity effect,19 that
on some dimensions, such as accuracy, there would be
little or no improvement from week to week because
each week the focus of the Question of the Week was
new, reflecting the content of that week. On other criteria, though, such as conciseness and use of medical terminology, we anticipated that students would improve
with practice, as these skills may be more transferable
from one case to another. We also investigated the relation between students’ performance on the Question of
the Week and their final exam scores. Students who developed their ability to use terminology accurately, to
write concisely and logically, and to justify their assertions should perform better on exams.
Methods
Question of the Week Assignment
The Question of the Week project was implemented
in Function & Dysfunction in which all 83 first-year
students at the Cornell College of Veterinary Medicine
were enrolled. Each student earned 0.5% of the final
grade for each Question of the Week assignment completed, irrespective of the scores achieved.
Function & Dysfunction is a 7-week, systems-based course that addresses the physiology, pathology, pharmacology, and clinical pathology of the
neuromuscular, cardiovascular, respiratory, urinary,
and gastrointestinal systems. Each Question of the
Week addressed the content area specific to a given
week of the course. The questions required that students use higher order thinking skills including analysis, synthesis, and evaluation.20 These types of questions are posed routinely on our examinations.
Each Question of the Week consisted of a short case
scenario followed by one question (Table 1). A word
limit was supplied. Questions were posted on the
course Web site on Friday of each week with a midday
Monday deadline. Students were expected to complete
the assignments individually within the word limits
posted without the aid of books. A 30-min time estimate for completion was provided. No attempt was
made to determine if students abided by the time estimates or “closed book” guidelines.
Scoring Rubric
An important component of the Question of the
Week project involved providing students with timely
and critical feedback. Students handed in papers on
Monday, and evaluators marked them so they could be
returned before the next question was posted on Friday.
For evaluation purposes, the class was divided into
four groups of approximately 20 students. Every week,
each of four evaluators was assigned randomly to mark
one group of papers. Evaluators marked a different
group each week so that any student who wrote at least
four assignments received feedback from at least four
evaluators. All evaluators had scientific expertise in the
content matter of the course.
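To make the rotation concrete, the following minimal sketch (in Python, with hypothetical group and evaluator labels that are not taken from the study, and not the authors' actual procedure) shows one way a four-evaluator, four-group rotation can be scheduled so that, over four consecutive weeks, every group is marked by every evaluator exactly once.

    # Hedged sketch: a simple cyclic (Latin-square-style) rotation of four
    # evaluators across four student groups. All labels are illustrative only.
    evaluators = ["Evaluator A", "Evaluator B", "Evaluator C", "Evaluator D"]
    groups = ["Group 1", "Group 2", "Group 3", "Group 4"]

    def weekly_assignment(week: int) -> dict:
        """Map each group to one evaluator for a given week (1-indexed)."""
        return {
            group: evaluators[(g + week - 1) % len(evaluators)]
            for g, group in enumerate(groups)
        }

    for week in range(1, 5):
        print(f"Week {week}:", weekly_assignment(week))

Run over 4 weeks, such a cycle guarantees that each group is seen by each evaluator once; a 6-week course could simply continue the cycle.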
A scoring rubric (Table 2) was developed to formalize the criteria by which student work would be judged.
Because five faculty members were involved in marking
papers during the course, a rubric would improve consistency among evaluators. Further, we wanted to provide students with detailed and meaningful feedback
that could be used while writing subsequent assignments. The rubric applied a range of scores from 1 to 3
for each of six criteria: Comprehensiveness/Thoroughness, Accuracy, Conciseness, Logical Organization,
Justification of Assertions, and Terminology. For the
benefit of both the evaluator and the student, each of
these criteria was defined precisely. Doing so allowed us
to rate student performance consistently over the
6-week period.
We collaboratively constructed the rubric during a
90-min workshop. We began with Hmelo’s17 criteria
and our own observations of the qualities we sought in
good student essays. A framework was developed by
applying the emerging criteria to several examples of
student essays until agreement was reached on the criteria and their interpretation. Two of the authors (R.
Rawson and K. Quinlan) then completed the rubric by
describing the standards for each of the three score levels for each of the criteria. The final rubric was discussed, evaluated, and agreed on by all of us.
Two mechanisms were set in place to minimize
interrater variability. First, because all of the evaluators
participated fully in construction of the rubric, each
was well acquainted with its intent and meaning. Second, the same evaluators marked student work
throughout the course and were rotated such that each
student’s work was marked by each evaluator.
Statistics
We examined student scores on each of the six criteria over the 6 weeks. Data were summarized as mean ±
standard deviation. Repeated measures analysis of
variance was conducted using a general linear model to
test differences between means. Differences were considered significant at the p < .05 level.
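For readers who want to reproduce this type of analysis, the following minimal sketch (Python; the column names, the synthetic scores, and the use of the statsmodels AnovaRM helper are our assumptions, not the authors' actual code or data) illustrates a repeated-measures ANOVA across the 6 weekly scores with students as subjects.

    # Hedged sketch of a repeated-measures ANOVA on weekly rubric scores.
    # The data below are synthetic placeholders; real scores would replace them.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    n_students, n_weeks = 83, 6
    records = [
        {"student": s, "week": w, "score": int(rng.integers(1, 4))}  # rubric scores run 1-3
        for s in range(n_students)
        for w in range(1, n_weeks + 1)
    ]
    df = pd.DataFrame(records)

    # Test whether mean scores differ across the 6 weeks, treating students as
    # the repeated-measures subjects; differences judged at p < .05, as in the study.
    print(AnovaRM(df, depvar="score", subject="student", within=["week"]).fit())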
Results
A total of 493 of a possible 498 (83 students × 6
weeks) papers (99%) were submitted for evaluation.
Table 1. A Sample Question Used During the Question of the Week Project
Dawkin’s Own is a horse with protein-losing enteropathy, a
condition in which there is reduced absorptive capacity in the
small intestine and loss of protein into the intestinal lumen. He
has low plasma total protein and very low serum albumin,
resulting in edema and ascites. [You may look up the meaning of
these words, if necessary.] To treat this condition, he was given
certain drugs orally, but these did not produce the expected
effects.
Assume that the drugs used to treat this horse were a rational
choice to produce the desired effects. Taking into account the
facts related to this case (provided above) how might the
disposition of drugs have been altered in this animal, thus
affecting their efficacy? (200 words)
Focus: Consider those pharmacokinetic factors that are relevant to
this scenario that could reduce the efficacy of any orally
administered drug in this patient.
Table 2. Rubric Used for Scoring Essays
A. Comprehensiveness/Thoroughness
3 = All of the relevant information is included.
2 = Some relevant information was not included.
1 = Much of the relevant information is not included.
B. Accuracy
3 = All of the information stated is accurate.
2 = Some but not all of the information is accurate.
1 = There are gross inaccuracies.
C. Conciseness
3 = There is no irrelevant information included.
2 = Some of the information included is irrelevant to this question.
1 = Much of the information included is irrelevant to this question.
D. Logical Organization
3 = Logically connects related concepts appropriately and avoids repetition.
2 = Some evidence of logical organization but concepts are not related appropriately and/or some concepts are repeated.
1 = Little evidence of logical organization. Relationships among concepts are unclear.
E. Justification of Assertions
3 = Makes assertion(s) that are supported by the data and gives supporting pathophysiological explanations.
2 = Makes assertion(s) that are supported by data, but doesn't give adequate pathophysiological explanations.
1 = Makes assertions that are not supported by the data.
F. Use of Appropriate Medical Terminology
3 = Uses medical terminology correctly and whenever possible.
2 = Sometimes uses medical terminology incorrectly or misses opportunities to use it.
1 = Little use of medical terminology.
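As an illustration only (not the authors' implementation), the rubric above maps naturally onto a small data structure that a marker could use to attach the standard level descriptions to numeric scores; the feedback helper below is hypothetical.

    # Hedged sketch: the six-criterion, three-level rubric encoded as a dict.
    # Only two criteria are spelled out; the others follow the same 1-3 pattern.
    RUBRIC = {
        "Comprehensiveness/Thoroughness": {
            3: "All of the relevant information is included.",
            2: "Some relevant information was not included.",
            1: "Much of the relevant information is not included.",
        },
        "Accuracy": {
            3: "All of the information stated is accurate.",
            2: "Some but not all of the information is accurate.",
            1: "There are gross inaccuracies.",
        },
        # Conciseness, Logical Organization, Justification of Assertions, and
        # Use of Appropriate Medical Terminology are defined the same way.
    }

    def feedback(scores: dict) -> list:
        """Translate a marker's numeric scores into the rubric's level descriptions."""
        return [f"{criterion} ({level}): {RUBRIC[criterion][level]}"
                for criterion, level in scores.items()]

    print("\n".join(feedback({"Comprehensiveness/Thoroughness": 3, "Accuracy": 2})))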
Mean scores for each category are shown in Table 3.
For all but two categories (Accuracy and Justification
of Assertions), scores improved significantly (p < .05)
between Week 1 and Week 6. Scores for Justification of Assertions showed no change at all during the 6 weeks
of this project, whereas Accuracy tended to decrease
over time. After Week 2, Use of Appropriate
Terminology was the only category that showed any
further increase in mean score.
The correlation between scores on each of the six
criteria at Week 1 and performance on the final exam
was poor (p > .1). By Week 6, scores for four of the six
criteria were still only weakly correlated with final
exam scores (p > .2). There was a stronger correlation
between both Accuracy and Justification of Assertions
and scores on the final exam (p = .09), but neither
reached the selected level of significance (p = .05). Students who improved the most from Week 1 to Week 6 on Justification of Assertions performed better on the final exam (p = .03). Improvement in other areas did not
have a similar relation to performance on the final
exam.
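A minimal sketch of this kind of correlation check appears below (Python; the variable names, the synthetic values, and the choice of a Pearson test are assumptions on our part, since the article does not specify which correlation statistic was used).

    # Hedged sketch: correlate one Week-1 criterion score with final exam marks.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_students = 83
    week1_accuracy = rng.integers(1, 4, size=n_students)   # rubric scores, 1-3
    final_exam = rng.normal(80.0, 8.0, size=n_students)    # illustrative exam percentages

    r, p = pearsonr(week1_accuracy, final_exam)
    print(f"r = {r:.2f}, p = {p:.3f}")  # compared against the .05 threshold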
In an end-of-course survey, students described four
main ways in which the Question of the Week was useful to them. First, a substantial portion (27%) commented on how helpful it was to use and think about the
knowledge they had gained in a different way—to apply it to a problem different from the one used in their
PBL groups, to reason with it, synthesize it, or integrate it. Second, students (24%) used the Question of
the Week as a gauge of their content understanding.
Third, students (23%) valued practicing writing their
answers, with a focus on summarizing, being concise,
or using appropriate terminology. Fourth, students
(15%) referred to the value of the exercises in preparing them for the exam.
Analysis of scoring indicated that there was some
significant variation (p < .05) in the application of the
rubric by the four evaluators. This finding provides justification for our decision to rotate evaluators.
Table 3. Mean Scores for Each of Six Criteria Used in the Evaluation of Student Writing

Criteria                      Week 1       Week 2       Week 3       Week 4       Week 5        Week 6        d
Comprehensive & Thorough      1.7 ± 0.7    2.1 ± 0.7a   2.2 ± 0.6a   1.8 ± 0.6a   2.2 ± 0.7a    2.1 ± 0.7a    0.5
Accuracy                      2.4 ± 0.7    2.2 ± 0.5    2.4 ± 0.6    2.2 ± 0.6    2.1 ± 0.7a    2.2 ± 0.5     0.5
Conciseness                   1.5 ± 0.5    2.4 ± 0.7a   2.4 ± 0.6a   2.3 ± 0.7a   2.4 ± 0.6a    2.2 ± 0.7a    1.1
Logical Organization          2.0 ± 0.6    2.3 ± 0.6a   2.5 ± 0.6a   2.4 ± 0.7a   2.6 ± 0.5a    2.4 ± 0.6a    0.7
Justification of Assertions   1.9 ± 0.6    1.9 ± 0.7    2.2 ± 0.7    2.1 ± 0.6    2.1 ± 0.7     2.0 ± 0.7     —
Medical Terminology           2.0 ± 0.6b   2.2 ± 0.6    2.4 ± 0.6a   2.3 ± 0.6a   2.8 ± 0.5a,b  2.7 ± 0.5a,b  1.2

Note: N = 83. Values represent mean score ± standard deviation; d = effect size.
a Significantly different from value at Week 1. b Significantly different from value at Week 4.
Inspection of mean scores for each evaluator for each of the
six categories addressed by the rubric did not suggest
any patterns of bias over the 6 weeks of the exercise.
No single evaluator marked student work consistently
higher or lower than other evaluators.
All Question of the Week papers received numerical
scores based on each of the six criteria specified in the
rubric, with varying amounts of additional written
feedback provided by each evaluator. Despite the concerted effort to provide meaningful feedback, the second most common category of student concern (13%)
was about the feedback received. Students requested a
consistent grader (rather than the rotating system of
graders that we used), more comments, and more recognition of what they had done well.
Discussion
Students were apparently highly motivated to complete as many assignments as possible. The response
rate (99%) was very high, and students seemed to respond carefully and thoughtfully to the questions,
making a genuine effort on all assignments.
Each Question of the Week related to the content
area of that week’s tutorial case, reflecting new content
each week. Some criteria used to evaluate student papers, such as Accuracy and Justification of Assertions,
were expected to reflect this case specificity and thus
not improve significantly. In fact, these categories did
not improve between Weeks 1 and 6. All of the other
criteria, however, showed significant improvement between Weeks 1 and 6, with most of the improvement
coming between the 1st and 2nd weeks. The latter observation suggested that the bulk of the effect may have
been in simply clarifying expectations to students. Indeed, this would have been the first formal opportunity
to make the expectations of medical writing explicit to
these 1st-year veterinary students.
Use of Appropriate Terminology might be expected
to be case specific because each body system has a set
of unique terms with which it is associated (e.g., respiratory: tachypnea; cardiovascular: cardiomegaly; renal: countercurrent). That scores for this criterion improved at Week 3 and then even further at Week 5 calls for additional explanation. It is possible that although students may have been able to use medical terminology during, say, Week 1, they felt obligated not to, believing that it was important to assure the faculty that they understood the definitions of terms. For example, one student
began the essay for Week 4 by defining tachypnea
rather than simply using the term in an appropriate way
to address the question. Instructions to students clearly
stated “Write to a colleague who ‘speaks your language’ but who does not necessarily appreciate the particulars of the case you are working on or the question
you’re addressing.” Despite this explicit charge, students wrote as if performing for their teachers rather
than writing to a colleague. As teachers, we assumed that if a student could use a term correctly, she or he understood what that term meant, whereas being able to define a term does not necessarily demonstrate understanding. For students, it is easier to learn and provide
definitions and more difficult to incorporate accurate
terminology into their understanding of body systems.
Generally speaking, teachers are concerned that students not only know definitions but that they are able to
use terms correctly and appropriately, reflecting an understanding of the material that goes beyond rote memorization. The fact that students continued to improve
at Week 5 on this dimension suggests a more meaningful change in students’ approach to medical writing
than merely clarifying these teachers’ expectations.
We hypothesized that improved writing skills
should be reflected in students’ performance in writing short essays for the examination. Essays written
for the final examination were graded based primarily
on content, which would be reflected principally in
accuracy and justification of assertions. Other indicators of the quality of composition were graded only to
the extent that they affected the ability of the grader
to judge the quality of the content. Thus, the finding
that Question of the Week scores during the 6th week
for Accuracy and Justification of Assertions showed
the best correlation with outcomes on the examination was not surprising.
Mean scores for Comprehensive and Thorough
and Justification of Assertions were lowest among
scores in any given week, and mean scores for the latter did not improve over time. This suggests that
these two skills represent difficult areas for students.
Students, as a whole, improved in their ability to be
comprehensive and thorough in their writing. That
the students who improved the most on the
Justification of Assertions criteria during the course
tended to perform better on the final exam suggests
that the ability to justify or explain statements satisfactorily can be learned and that it is central to medical writing and understanding overall. Justification of
Assertions arguably represents a higher order skill,
and it is not surprising that, on average, students performed less well in this area and would take longer to
improve. Apparently, the Question of the Week exercise did not provide enough practice for most students to develop this skill sufficiently, suggesting an
avenue for future emphasis in instruction.
Students recognized that the Question of the Week
exercises were clear approximations to questions that
would appear on the end-of-course exam. Nevertheless, a large number of students reported that the Question of the Week had no effect on their studying. The
ability to use factual knowledge to solve problems and
explain solutions clearly and logically is a significant goal of our instruction. It would appear that for some
students, learning factual knowledge overshadowed
the opportunity to practice using that knowledge in
meaningful ways. This apparent disconnect between
factual knowledge and use of that knowledge to explain medical problems suggests a potential avenue for
further study.
We attributed improvement in the quality of student
writing to practice and feedback afforded by the Question of the Week. However, observed improvement
could have been due to normal course activities such as lecture attendance and note taking, reading textbooks, or practicing the use of medical terminology during tutorial sessions. This alternative explanation seems less likely in light
of empirical evidence. One study21 found that about
25% of medical students were deficient in reading and
writing skills and that the situation was not improved
by the 3rd year of medical training. More recently, a
survey of deans of veterinary colleges revealed that
students’ writing skills were deficient.15 For a significant number of students, then, writing skills do not improve substantially simply as a result of coursework.
Programs aimed explicitly at enhancing writing skills
are needed.
We concluded that Question of the Week had educational merit but needed improvement. Although scores
for most criteria increased, the clearest evidence of improvement came from students’ steadily increasing
proficiency in using medical terminology in their
pathophysiological explanations. This was a satisfying
result because the appropriate use of terminology is a
hallmark of medical writing and of communication
among medical professionals in general. We were encouraged that a significant number of students reported
using the exercises in ways that promoted higher order
thinking and learning activities that are associated with
improved transfer of learning.22
This report supports the conclusions of others that
there is a significant deficiency in writing among cohorts of arguably top students admitted to professional programs.15,21 Although medical and veterinary students may come with writing skills that were
satisfactory in their undergraduate classes, it is precisely because writing tends to be domain specific or
discipline specific12 that writing skills appropriate for
a given profession will only be learned by students
engaged in that profession. It is important, therefore,
that faculty in professional programs design instructional instruments that will provide ample opportunity for students to learn the scientific and technical
aspects of writing in a given profession.23 In this
study, students improved along some of our criteria,
but most of that gain occurred during the first 2
weeks despite continuation of the writing assignments and explicit feedback. We conclude that they
needed more practice and perhaps more explicit instruction and modeling to attain the proficiency
described by our rubric.24
References
1. Yanoff KL, Burg FD. Types of medical writing and teaching of
writing in U.S. medical schools. Journal of Medical Education
1988;63:30–7.
2. Deloney LA, Carey MJ, Beeman HG. Using electronic journal
writing to foster reflection and provide feedback in an introduction to clinical medicine course. Academic Medicine
1998;73:574–5.
3. Edwards R, White M, Gray J, Fischbacher C. Use of a journal
club and letter-writing exercise to teach critical appraisal to
medical undergraduates. Medical Education 2001;35:691–4.
4. Garland BK, Pearson RJ. Epidemiology course for medical students focuses on proposal writing. American Journal of Preventive Medicine 1989;5:240–3.
5. Guilford WH. Teaching peer review and the process of scientific writing. Advances in Physiology Education 2001;25:
167–75.
6. Hatem D, Ferrara E. Becoming a doctor: Fostering humane
caregivers through creative writing. Patient Education and
Counseling 2001;45:13–22.
7. Landeen J, Byrne C, Brown B. Journal keeping as an educational strategy in teaching psychiatric nursing. Journal of Advanced Nursing 1992;17:347–55.
8. Pee B, Woodman T, Fry H, Davenport ES. Appraising and assessing reflection in students’ writing on a structured
worksheet. Medical Education 2002;36:575–85.
9. Poirier S, Ahrens WR, Brauner DJ. Songs of innocence and experience: Students' poems about their medical education. Academic Medicine 1998;73:473–8.
10. Tollan A, Magnus JH. Writing a scientific paper as part of the
medical curriculum. Medical Education 1993;27:461–4.
11. Westwood B, Westwood G. The culture of criticism and argument in health education. Medical Teacher 2002;24:156–61.
12. Geisler C. Literacy and expertise in the academy. Language and Learning Across the Disciplines 1994;1:35–57.
13. Griffin CW. Programs for writing across the curriculum: A report. College Composition and Communication 1985;36:
398–403.
14. Kovac J, Sherwood DW. Writing in chemistry: An effective
learning tool. Journal of Chemical Education 1999;76:
1399–403.
15. Hendrix CM, Thompson IK, Mann CJ. A survey of reading,
writing, and oral communication skills in North American veterinary medical colleges. Journal of Veterinary Medical Education 2001;28:34–40.
16. Hmelo CE. Problem-based learning: Effects on the early acquisition of cognitive skill in medicine. Journal of the Learning
Sciences 1998;7:173–208.
17. Hmelo CE. Cognitive consequences of problem-based learning
for the early development of medical expertise. Teaching and
Learning in Medicine 1998;10:92–100.
18. Hmelo CE. Personal communication, June 2002.
19. Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press, 1978.
20. Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR
(Eds.). Taxonomy of educational objectives: The classification
of educational goals: Handbook I, cognitive domain. New
York: Longmans, Green, 1956.
21. Flaherty JA, Rezler A, McGuire C. Clinical reading and writing skills of junior medical students. Journal of Medical Education 1982;57:848–53.
22. Marini A, Genereux R. The challenge of teaching for transfer. In A. McKeough, J. Lupart, A. Marini (Eds.), Teaching for transfer: Fostering generalization in learning (pp. 1–19). Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 1995.
23. Law J. Notes from other programs: Learning from Harvard. Writing Across the Curriculum 1998;7:2–3.
24. Showalter E, Griffin A. Teaching medical students how to write well. Medical Education 2000;34:165.

Received 13 August 2003
Final revision received 4 December 2004