Contemporary Classroom Vocabulary Assessment for Content Areas
By: Katherine A. Dougherty Stahl, Marco A. Bravo
What are some ways that we can gauge vocabulary development in the content areas? In this article,
the authors explain how the intricacies of word knowledge make assessment difficult, particularly
with content area vocabulary. They suggest ways to improve assessments so that they more precisely
track the vocabulary growth of all students, including English-language learners, across the curriculum.
Osa (all names are pseudonyms) teaches third grade in a high-poverty urban setting with a diverse
population that includes a majority of children of color and a high percentage of English-language
learners (ELLs). During the most recent school year, she taught vocabulary deliberately during both
the literacy block and content area instruction. In light of the increased time and attention to
vocabulary instruction, she felt confident that her students had increased word knowledge and word
consciousness.
However, Osa was disappointed and discouraged by the outcomes of the yearly standardized
assessment used by her district, the Iowa Test of Basic Skills (ITBS). Her students' scores on the
vocabulary subtest showed no significant gains over their previous year's scores.
She knew that her students had increased knowledge about words, but she wanted quantitative
evidence of that increased knowledge. If the standardized test scores did not demonstrate growth, was
this instruction worth the time invested? What might be other evidence-based ways to document her
students' growth?
Paris (2005) distinguished between constrained and unconstrained reading skills. Skills such as
alphabet knowledge, phonics, and fluency are considered constrained because they are fairly linear
and students develop mastery levels (test ceilings) within a few years.
In contrast, vocabulary and comprehension are multidimensional, incremental, context dependent,
and develop across a lifetime. As a result, they simply do not lend themselves to simplistic, singular
measures (NICHD, 2000; Paris, 2005). Our discussion addresses the unconstrained nature of
vocabulary knowledge and describes some assessments that are suited to a complex theoretical
construct.
Dale (1965) conceptualized word knowledge as developing incrementally through four stages: (1)
never having seen the word, (2) having heard it but not knowing what it means, (3) recognizing it in
context as having something to do with a topic, and (4) knowing the word well. The final stage of
Dale's conceptualization can be further broken down into additional stages, including the ability to
name other words related to the word under study and distinguishing precise from general word
knowledge.
Instead of stages, Beck, McKeown, and Omanson (1987) referred to a person's word knowledge as
falling along a continuum. Its points include (a) no knowledge of the term; (b) general understanding;
(c) narrow but context-bound understanding, such as knowing that discriminate means to pay special
attention to subtle differences and to exercise judgment about people, but being unable to recognize
that the term could also refer to singling out sounds in phonemic awareness activities; (d) having
knowledge of a word but not being able to recall it readily enough to use it appropriately; and (e)
decontextualized knowledge of a word's meaning, its relationship to other words, and extensions to
metaphorical uses.
Bravo and Cervetti (2008) posited a similar continuum for content area vocabulary. Its points range
from no control of a word (where students have never seen or heard the word) through passive
control (where students can decode the term and provide a synonym or basic definition) to active
control (where students can decode the word, provide a definition,
situate it in connection to other words in the discipline, and use it in their oral and written
communications).
For example, some students may have never heard the term observe, while others may have a general
gist or passive control of the term and be able to mention its synonym see. Yet others may have active
control: they can recognize that to observe in science means to use any of the five senses to gather
information, and they can use the term correctly in both oral and written form. Such active control
exemplifies the kind of contextual and relational understanding that
characterizes conceptual understanding. Word knowledge is a matter of degree and can grow over
time. Incremental knowledge of a word occurs with multiple exposures in meaningful contexts.
"For each exposure, the child learns a little about the word, until the child develops a full and flexible
knowledge about the word's meaning. This will include definitional aspects, such as the category to which it
belongs and how it differs from other members of the category It will also contain information about the
various context in which the word was found, and how the meaning differed in the different contexts." (Stahl &
Stahl, 2004, p. 63)
Across the stages and continua put forth by Dale (1965), Beck et al. (1987), and Bravo and Cervetti
(2008), there are also qualitative dimensions of word knowledge. Multidimensional aspects of word
knowledge can include precise usage of the term, fluent access, and appreciation of the term's
metaphorical uses (Calfee & Drum, 1986).
Knowing that a term has more than one meaning, and understanding those meanings, is yet
another dimension of word knowledge. Multiple-meaning words abound in the English language.
Johnson, Moe, and Baumann (1983) found that among the identified 9,000 critical vocabulary words
for elementary-grade students, 70% were polysemous, or had more than one meaning.
Within content areas, polysemous words such as property, operation, and current often carry an
everyday meaning and a more specialized meaning within the discipline. Understanding the shades of
meanings of multimeaning words involves a certain depth of knowledge of that word.
Additional dimensions of word knowledge include lexical organization, which is the consideration of
the relationship a word might have with other words (Johnson & Pearson, 1984; Nagy & Scott, 2000;
Qian, 2002). Students' grasp of one word is linked to their knowledge of other words. In fact, learning
the vocabulary of a discipline should be thought of as learning about the interconnectedness of ideas
and concepts indexed by words. Cronbach (1942) encapsulated many of these dimensions, including
the following: generalization (the ability to define a word), application (the ability to apply the word
in appropriate situations), breadth (knowledge of the word's multiple meanings), precision (the
ability to recognize exactly when the word does and does not apply), and availability (the ability to
use the word productively).
Cronbach's (1942) final dimension leads us into the last facet of word knowledge, the
receptive/productive duality. Receptive vocabulary refers to words students understand when they
read or hear them. Productive vocabulary, on the other hand, refers to the words students can use
correctly when talking or writing. For many learners, lexical competence develops from receptive to
productive stages of vocabulary knowledge.
Vocabulary knowledge is multifaceted. Word knowledge is acquired incrementally. At each stage or
point on a continuum of word knowledge, students might be familiar with the term, know words
related to the term, or have flexibility with using it in both written and oral form. It is clear that to
know a word is more than to know its definition. Teaching and testing definitions of words look
much different from contemporary approaches to instruction and assessment that consider
incrementality, multidimensionality, and students' level of use.
Assessment Dimensions
As with any test, it is important to determine whether the vocabulary test's purpose is in alignment
with each stakeholder's purpose. It is likely that this is the reason that Osa felt frustrated. The primary
purpose of the ITBS is to look at group trends. Although it provides insights about students' receptive
vocabulary compared with a group norm, it cannot be used to assess students' depth of knowledge
about a specific disciplinary word corpus or to measure a student's ability to use vocabulary in
productive ways.
In other words, current standardized measures are not suited to teachers' purposes of planning
instruction, monitoring students' disciplinary vocabulary growth in both receptive and productive
ways, or capturing the multifaceted aspects of knowing a word (e.g., polysemy, interrelatedness,
categorization; NICHD, 2000).
Read (2000) developed three continua for designing and evaluating vocabulary assessments. His work
is based on an evaluation of vocabulary assessments for ELLs, but the three assessment dimensions
are relevant to all vocabulary assessments. These assessment dimensions can be helpful to teachers in
evaluating the purposes and usefulness of commercial assessments or in designing their own
measures.
Discrete–Embedded
At the discrete end of the continuum, we have vocabulary treated as a separate subtest or isolated set
of words distinct from each word's role within a larger construct of comprehension, composition, or
conceptual application. Alternatively, a purely embedded measure would look at how students
operationalize vocabulary in a holistic context, and a vocabulary scale might be one measure of the
larger construct.
Blachowicz and Fisher's (2006) description of anecdotal record keeping illustrates an embedded
measure. Throughout a content unit, a teacher keeps notes on vocabulary use by the
students. Those notes are then transferred to a checklist that documents whether students applied the
word in discussion, writing, or on a test. See Table 1 for a sample teacher checklist of geometry terms.
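As a rough sketch of how such a checklist might be represented, here is a minimal example in code (the student name and geometry terms are invented for illustration; the article itself prescribes no particular format beyond the discussion/writing/test columns):

```python
# Hypothetical sketch of Blachowicz and Fisher's (2006) checklist idea:
# anecdotal notes become a per-student record of whether each word was
# applied in discussion, in writing, or on a test. Names/terms are invented.
from dataclasses import dataclass

@dataclass
class WordRecord:
    word: str
    discussion: bool = False  # used the word in class discussion
    writing: bool = False     # used the word in written work
    test: bool = False        # used the word correctly on a test

checklist = {
    "Jamal": [
        WordRecord("polygon", discussion=True),
        WordRecord("vertex", writing=True, test=True),
    ],
}

print(checklist["Jamal"][0])
# WordRecord(word='polygon', discussion=True, writing=False, test=False)
```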
Even if words are presented in context, measures can be considered discrete if they do not use the
vocabulary as part of a larger disciplinary knowledge construct. The 2009 National
Assessment of Educational Progress (NAEP) framework assumes an embedded approach (National
Assessment Governing Board [NAGB], 2009). Vocabulary items are interspersed among the
comprehension items and viewed as part of the comprehension construct, but a vocabulary subtest
score is also reported.
Selective–Comprehensive
The smaller the set of words from which the test sample is drawn, the more selective the test. A test
of the vocabulary words from a single story sits at the selective end of the continuum.
However, tests such as the ITBS select from a larger corpus of general vocabulary and are considered
to be at the comprehensive end of this continuum.
In between, and closer to the selective end, would be a basal unit test or a disciplinary unit test.
Further along the continuum toward comprehensive would be the vocabulary component of a state
criterion-referenced test in a single discipline.
Context-Independent–Context-Dependent
In its extreme form, a context-independent test simply presents a word as an isolated element. However,
this dimension has more to do with the need to engage with context to derive a meaning than simply
how the word is presented. In multiple-choice measures that are context-dependent, all choices
represent a possible definition of the word. Students need to identify the correct definition reflecting
the word's use in a particular text passage.
Typically, embedded measures require the student to apply the word appropriately for the embedded
context. Test designers for the 2009 NAEP were deliberate in selecting polysemous items and
constructing distractors that reflect alternative meanings for each assessed word (NAGB, 2009).
Read's (2000) assessment dimensions applied to three classroom measures:

- Vocabulary Knowledge Scale (VKS): measures depth; discrete, selective, and context-independent.
- Vocabulary Recognition Task (VRT): measures size, depth, and lexical organization; discrete, selective, and context-dependent.
- Vocabulary Assessment Magazine (VAM): measures size, depth, and productive knowledge; embedded, comprehensive, and context-dependent.
It is possible to modify the VKS to assess the key vocabulary in content area units in elementary
classrooms, even for the youngest students. Blachowicz and Fisher (2006) applied the principles of
the VKS in a table format, making it possible to assess a larger number of words. Kay (first author)
used the Native American Home VKS (see Figure 1) as a pretest with her second-grade class. As a
posttest, she used the VKS in conjunction with Figure 2, which required students to specify the tribe
and resource materials used to build the home and to compose an illustration of the home.
Figure 1: Native American Home VKS Pretest

Vocabulary words: Wigwam, Apartment, Longhouse, Tipi/teepee, Brush lodge, Asi

For each word, students mark one of two columns: "I have never heard of this Native American
dwelling" or "I have heard of this kind of home, but I can't tell you much about it."
Figure 2: Native American Home VKS Posttest

Vocabulary words: Wigwam, Apartment, Longhouse, Tipi/teepee, Brush lodge, Asi

For each word, students complete three columns: "Name of the tribe who once lived in this home,"
"Resources used to make this home," and "Draw a quick picture of this home."
Webs received two scores: (1) the total number of words correctly sorted by category and (2) the
percentage of words correctly selected on the VRT that were correctly sorted by category.
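A worked example may help; the numbers below are invented for illustration, not taken from the study:

```python
# Hypothetical worked example of the two web scores described above.
# Suppose a student correctly selected 20 unit words on the VRT and then
# sorted 16 of those 20 words into the correct web categories.

correctly_selected = 20  # words correctly selected on the VRT
correctly_sorted = 16    # of those, words sorted into the correct category

score_1 = correctly_sorted                             # total correct: 16
score_2 = 100 * correctly_sorted / correctly_selected  # percentage: 80.0

print(score_1, score_2)  # 16 80.0
```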
The VRT requires teachers to select a bank of words that students are held accountable for in a
content unit, thus measuring breadth of vocabulary knowledge on a topic. Using correlations with
other vocabulary tests, Anderson and Freebody (1983) determined that the yes-no task is a reliable
and valid measure of vocabulary knowledge. They found that it provides a better measure of student
knowledge than a multiple-choice task, particularly for younger students.
Teachers of novice readers know how important it is for those readers to be able to independently
read words encountered in content units, something taken for granted with older students. This assessment is
more adaptable to a larger corpus of target words than the VKS. The web that is included as part of
the posttest provides a lens for depth of knowledge and lexical organization (Qian, 2002). Its
simplicity also makes it a user-friendly format for ELLs.
Kay also used the VRT regularly in her second-grade classroom. Because the social studies and
science units were more in depth than the mini-units in the research project (Stahl, 2008), the
classroom VRT typically contained a total of 33 words: 25 hits and 8 foils. When using the VRT in
the classroom, a simple scoring system was used: hits minus false alarms (H - FA) or the percentage
of correct responses. For classroom use, the web score was simply the total number of words placed
correctly in each category.
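To illustrate the classroom scoring, here is a minimal sketch; the 33-word breakdown (25 hits, 8 foils) comes from the article, but the student's responses are hypothetical:

```python
# Simple classroom VRT scoring: hit proportion minus false-alarm proportion.
# The classroom VRT described above has 25 real unit words and 8 foils.

TARGET_WORDS = 25  # real unit words on the VRT ("hits" available)
FOIL_WORDS = 8     # distractor words on the VRT ("false alarms" available)

def classroom_vrt_score(targets_circled: int, foils_circled: int) -> float:
    """Return H - FA: hit rate minus false-alarm rate."""
    hit_rate = targets_circled / TARGET_WORDS
    false_alarm_rate = foils_circled / FOIL_WORDS
    return hit_rate - false_alarm_rate

# A student who circles 20 of the 25 unit words but also 2 of the 8 foils:
print(classroom_vrt_score(20, 2))  # 0.80 - 0.25 = 0.55
```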
Using the VRT as a pretest allows teachers to determine which words are known and unknown. As a
result, less instructional time can be devoted to known words, and more intense instruction can be
provided for less familiar vocabulary.

In addition to revealing the students' vocabulary growth, the VRT posttest can assess our teaching.
An interesting first-year consequence was discovering weak pockets of instruction. For example, at
the conclusion of the state-mandated unit on Australia, the students did very well webbing animals
and geographic regions of Australia. However, most students had less success webbing people and
foods associated with Australia. This was a clear indication that the instruction and materials on
these subtopics needed bolstering.
References
Click the "References" link above to hide these references.
Anderson, R.C., & Freebody, P. (1981). Vocabulary knowledge. In J.T. Guthrie (Ed.), Comprehension
and teaching: Research reviews (pp. 77-117). Newark, DE: International Reading Association.
Anderson, R.C., & Freebody, P. (1983). Reading comprehension and the assessment and acquisition
of word knowledge. In B. Hutson (Ed.), Advances in reading/language research (pp. 231-256).
Greenwich, CT: JAI.
Anderson, R.C., & Freebody, P. (1985). Vocabulary knowledge. In H. Singer & R.B. Ruddell (Eds.),
Theoretical models and processes of reading (3rd ed., pp. 343-371). Newark, DE: International
Reading Association.
Beck, I.L., McKeown, M.G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary
instruction. New York: Guilford.
Beck, I.L., McKeown, M.G., & Omanson, R.C. (1987). The effects and uses of diverse vocabulary
instructional techniques. In M.G. McKeown & M.E. Curtis (Eds.), The nature of vocabulary
acquisition (pp. 147-163). Hillsdale, NJ: Erlbaum.
Blachowicz, C.L.Z., & Fisher, P.J. (2006). Teaching vocabulary in all classrooms (3rd ed.). Upper
Saddle River, NJ: Pearson Education.
Bravo, M.A., & Cervetti, G.N. (2008). Teaching vocabulary through text and experience in content
areas. In A.E. Farstrup & S.J. Samuels (Eds.), What research has to say about vocabulary instruction
(pp. 130-149). Newark, DE: International Reading Association.
Bravo, M.A., Cervetti, G.N., Hiebert, E.H., & Pearson, P.D. (2008). From passive to active control of
science vocabulary (56th yearbook of the National Reading Conference, pp. 122-135). Chicago:
National Reading Conference.
Calfee, R.C., & Drum, P. (1986). Research on teaching reading. In M. Wittrock (Ed.), Handbook of
research on teaching (pp. 804-849). New York: Macmillan.
Cronbach, L.J. (1942). Measuring knowledge of precise word meaning. The Journal of Educational
Research, 36(7), 528-534.
Dale, E. (1965). Vocabulary measurement: Techniques and major findings. Elementary English,
42(8), 895-901.
Johnson, D.D., Moe, A.J., & Baumann, J.F. (1983). The Ginn word book for teachers: A basic
lexicon. Boston: Ginn.
Johnson, D.D., & Pearson, P.D. (1984). Teaching reading vocabulary. New York: Holt, Rinehart and
Winston.
Nagy, W.E., & Scott, J.A. (2000). Vocabulary processes. In M.L. Kamil, P.B. Mosenthal, P.D.
Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 269-274). Mahwah, NJ:
Erlbaum.
National Assessment Governing Board. (2009). Reading framework for the 2009 National
Assessment of Educational Progress. Retrieved December 29, 2009, from
www.nagb.org/publications/frameworks/reading09.pdf
National Clearinghouse for English Language Acquisition. (2007). National Clearinghouse for
English Language Acquisition Report: NCELA frequently asked questions. Washington, DC: U.S.
Department of Education. Retrieved October 2, 2007, from www.ncela.gwu.edu/faqs
National Institute of Child Health and Human Development. (2000). Report of the National Reading
Panel. Teaching children to read: An evidence-based assessment of the scientific research literature
on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington,
DC: National Institute of Child Health and Human Development.
Paris, S.G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly,
40(2), 184-202. doi:10.1598/RRQ.40.2.3
Qian, D.D. (2002). Investigating the relationship between vocabulary knowledge and academic
reading performance: An assessment perspective. Language Learning, 52(3), 513-536.
doi:10.1111/1467-9922.00193
Read, J. (2000). Assessing vocabulary. Cambridge, England: Cambridge University Press.
Stahl, K.A.D. (2008). The effects of three instructional methods on the reading comprehension and
content acquisition of novice readers. Journal of Literacy Research, 40(3), 359-393.
doi:10.1080/10862960802520594
Stahl, S.A., & Nagy, W.E. (2006). Teaching word meanings. Mahwah, NJ: Erlbaum.
Stahl, S.A., & Stahl, K.A.D. (2004). Word wizards all! Teaching word meanings in preschool and
primary education. In J.F. Baumann & E.J. Kame'enui (Eds.), Vocabulary instruction: Research to
practice (pp. 59-78). New York: Guilford.
Wesche, M., & Paribakht, T.S. (1996). Assessing second language vocabulary knowledge: Depth
versus breadth. Canadian Modern Language Review, 53(1), 13-40.
Dougherty Stahl, K.A., & Bravo, M.A. (2010, April). Contemporary Classroom Vocabulary
Assessment for Content Areas. The Reading Teacher, 63(7), 566-578. doi:10.1598/RT.63.7.4
"The things I want to know are in books. My best friend is the man who'll get me a book I [haven't]
read." Abraham Lincoln