
INT. J. LANG. COMM. DIS., MAY–JUNE 2006, VOL. 41, NO. 3, 275–291

Research Report
Real-time language processing in school-age
children with specific language impairment

James W. Montgomery
School of Hearing, Speech and Language Sciences, Ohio University, Athens,
OH, USA
(Received 22 March 2005; accepted 14 June 2005)

Abstract
Background: School-age children with specific language impairment (SLI) exhibit
slower real-time (i.e. immediate) language processing relative to same-age peers
and younger, language-matched peers. Results of the few studies that have been
done seem to indicate that the slower language processing of children with SLI
is due to inefficient higher-order linguistic processing and not to difficulties with
more basic acoustic–phonetic processing. However, this claim requires further
experimental verification.
Aims: It was investigated whether the real-time language processing deficit of
children with SLI arises from inferior acoustic–phonetic processing, inefficient
linguistic processing, or both poor sensory processing and linguistic processing.
If these children’s impaired online language processing is due to inferior
acoustic–phonetic processing, then their reaction time (RT) for recognizing
words presented in list fashion should be significantly longer relative to control
children’s RT. If, however, their impaired language processing relates to
inefficient linguistic processing, then, relative to control children, their RT for
word-list-presented words should be comparable and their sentence-embedded
word-recognition RT should be significantly longer.
Methods & Procedures: Sixteen school-age children with SLI, 16 age-matched (CA)
typically developing children, and 16 receptive-syntax matched (RS) children
completed two word-recognition RT tasks. In one task, children monitored
word lists for the occurrence of a target word (isolated lexical processing task).
In the second task, children monitored simple sentences for a target word
(sentence-embedded lexical processing task). In both tasks, children made a
timed response immediately upon recognizing the target.
Outcomes & Results: Children with SLI and CA children showed comparable RT
in the isolated lexical processing task and both were faster than RS children. In
the sentence-processing task, children with SLI were slower at lexical processing
than CA and RS children, with CA children demonstrating the fastest processing.
Conclusions: The slower real-time language processing of children with SLI appears
to be attributable to inefficient higher-order linguistic processing operations and
not to inferior acoustic–phonetic processing. The slower language processing of
children with SLI relative to younger, language-matched children suggests that
the language mechanism of children with SLI operates more slowly than what
might otherwise be predicted by their linguistic competence.

Keywords: specific language impairment, school-age children, language processing.

Address correspondence to: James W. Montgomery, School of Hearing, Speech and Language Sciences,
Grover Center W235, Ohio University, Athens, OH 45701-2979, USA; e-mail: montgoj1@ohio.edu

International Journal of Language & Communication Disorders
ISSN 1368-2822 print/ISSN 1460-6984 online © 2006 Royal College of Speech & Language Therapists
http://www.tandf.co.uk/journals
DOI: 10.1080/13682820500227987

‘What this paper adds’


School-age children with specific language impairment (SLI) process spoken
language more slowly than age-matched and younger, language-matched peers.
A developing hypothesis is that these children are slower to deploy and/or
complete various higher-order linguistic processes as opposed to being poorer
at processing the acoustic-phonetic character of the input (Montgomery,
2002a). This hypothesis, however, has not been systematically assessed and the
aim of the present study was to do so.
The children with SLI in the present study were shown to be no slower to
process isolated words than control children but were slower to process
similar words appearing in sentences. The results were interpreted to suggest
that the slower language processing of children with SLI primarily relates to
inefficient higher-order linguistic processing and not inferior acoustic-
phonetic (phonological) processing.

Introduction
Real-time language processing requires listeners to convert immediately a rapidly
disappearing acoustic signal into a meaningful linguistic representation. Young
school-age children (i.e. ages 6–11 years) with specific language impairment (SLI)
have shown themselves to be slower language processors than both their age peers
(Stark and Montgomery 1995, Montgomery and Leonard 1998, Montgomery 2000,
2002a, 2005) and younger, language-matched peers (Montgomery and Leonard 1998,
Montgomery 2000, 2002a). From the perspective of current word-recognition
models such as the revised cohort theory (Gaskell and Marslen-Wilson 1997), there
are three possible reasons why children with SLI are slow to process language. First,
they may be slower at acoustic–phonetic processing, i.e. slower to convert the
acoustic signal into a recognizable word, independent of linguistic analysis. Second,
they may be inefficient at linguistic processing, slower at retrieving and integrating the
linguistic properties of incoming words into a developing and coherent sentence-
level representation. This account assumes intact acoustic–phonetic mapping
processes. Third, they may be poor at both acoustic–phonetic and linguistic
processing. Of the three possibilities, Montgomery (2002a) suggested that children
with SLI possess inefficient linguistic processing operations, arguing that they are
slower to access and/or integrate the linguistic properties of incoming words into an
evolving sentence meaning. This interpretation is based on a pattern of results across
several studies differing in focus and experimental design. The study reported here
was designed to explore this hypothesis in a more direct manner by asking children to
perform two lexical processing tasks that varied in linguistic processing requirements.
More recent models of spoken language comprehension (Marslen-Wilson and
Zwitserlood 1989, McQueen et al. 1995, Pitt and Samuel 1995, Gaskell and Marslen-
Wilson 1997), especially those pertaining to auditory word recognition, appear to be
well suited to help us characterize the language processing of children with SLI.
Most current models consider word recognition as an automatic, data-driven process
facilitated by context. An important feature of many of these models is that they
explicitly provide for the interaction of acoustic–phonetic processing and higher-
order linguistic processing. It is at the level of word recognition where this
interaction has been studied most extensively because it is here that a listener’s
linguistic knowledge (i.e. phonological, syntactic, morphological, semantic)
presumably interfaces with acoustic–phonetic input (Marslen-Wilson and Welsh
1978, Marslen-Wilson and Tyler 1980, Pisoni and Luce 1987).
One influential model has been the revised cohort theory (Frauenfelder and
Tyler 1987, Marslen-Wilson and Zwitserlood 1989), which is regarded as an
‘interactive’ staged model of word recognition. The theory details several
hypothesized stages and their relationship to one another. According to this model,
lexical recognition comprises various interactive stages, including lexical contact,
activation, retrieval/access, selection and recognition/integration. Lexical contact/
activation is assumed to operate as an autonomous pre-lexical stage of processing in
that the sensory form of the input word undergoes an acoustic–phonetic analysis
before a linguistic analysis. During lexical contact, listeners synthesize several form-
based representations from the sensory input and then match them to lexical items
stored in long-term memory. Those words matching the contact representations
then become activated. This initial set of activated words is referred to as the word
initial cohort. For this study, we refer collectively to these stages as the acoustic–
phonetic processing stage. Our primary focus at this stage, however, is the sensory
processing of the signal, not the creation of the word initial cohort per se. Figure 1
shows a schematic of the putative interactive stages of the revised cohort model.
During acoustic–phonetic processing listeners continually and rapidly process
the intake of acoustic–phonetic information of a word. Experimental evidence
reveals that listeners can use sub-phonetic cues (i.e. temporal and/or spectral) of
prior speech segments to help predict and identify an upcoming phonetic segment
and facilitate word recognition (Warren and Marslen-Wilson 1988, Lahiri and

Figure 1. Schematic representation of the putative stages of word recognition in the revised
cohort model.
Marslen-Wilson 1990). Further, it has been argued that word-onset information, in
particular, holds special status and priority in the word recognition system. This is
because it is on the basis of word-initial information (i.e. about the first 150 ms) that
listeners presumably can align the stimulus with possible word candidates in the
mental lexicon and activate a word initial cohort.
During lexical access/retrieval, the linguistic properties (e.g. syntactic,
morphological, semantic) functionally associated with the activated word set are
retrieved and made available to the rest of the language-processing system. The
linguistic properties of the competing word candidates are then assessed for
syntactic and semantic appropriateness relative to the developing sentence meaning.
Activated words inconsistent with subsequent sensory information and sentence
meaning become inhibited. The remaining lexical entry best matching both sources
of constraint is finally selected, recognized, and integrated into context. The
interactivity of the model is reflected in the fact that listeners often can recognize
words in context before their acoustic offset (however, for findings related to
recognition after word offset, see Wingfield et al. 1997). While acoustic–phonetic
processing initiates the word recognition process, ‘later’ stages may influence the
sensory processing of the signal to the extent that word recognition may be derived
from less than full acoustic specification of the word.
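The staged, interactive process described above can be sketched as a toy simulation. A word-initial cohort is activated from the sensory input, then pruned both by further segments (sensory constraint) and by fit with the sentence context (linguistic constraint). The four-word lexicon and its feature labels are invented for illustration; this is not the model's actual computational implementation.

```python
# Toy sketch of the revised cohort model's stages (illustrative only;
# the lexicon and feature labels are invented examples).
LEXICON = {
    "cat": {"syntax": "noun", "semantics": "animal"},
    "cap": {"syntax": "noun", "semantics": "clothing"},
    "can": {"syntax": "noun", "semantics": "container"},
    "dog": {"syntax": "noun", "semantics": "animal"},
}

def recognize(segments, fits_context):
    """Consume the word segment by segment; prune the cohort with each new
    segment (sensory constraint) and with the context predicate (linguistic
    constraint). Recognition may occur before the word's acoustic offset."""
    heard = ""
    cohort = set(LEXICON)
    viable = list(cohort)
    for seg in segments:
        heard += seg
        cohort = {w for w in cohort if w.startswith(heard)}   # sensory pruning
        viable = [w for w in cohort if fits_context(LEXICON[w])]  # linguistic pruning
        if len(viable) == 1:
            return viable[0], heard   # recognized before full specification
    return (viable[0], heard) if viable else (None, heard)

# In a context favouring animal nouns, "cat" wins before its final segment:
word, consumed = recognize(["c", "a", "t"],
                           lambda props: props["semantics"] == "animal")
```

The early return illustrates the model's key interactive property: because context has already eliminated the competitors, the word is recognized from less than its full acoustic form.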

Real-time language processing in children with SLI


Several studies have now examined the language-processing abilities of children with
SLI. Most of these studies have used a word-recognition reaction time (RT)/word-
monitoring task within the context of a sentence-processing paradigm (Montgomery
et al. 1990, Stark and Montgomery 1995, Montgomery and Leonard 1998,
Montgomery 2000, 2005). In this paradigm, children are asked to listen to
sentences for the occurrence of a target word (usually a noun) and to make a timed
response as soon as they hear the word. Such a paradigm permits investigators to
examine the unconscious mental representations and operations that are
automatically engaged during comprehension (Tyler 1992), as well as the interaction
of acoustic–phonetic and linguistic processing operations (Pisoni and Luce 1987).
The paradigm has proved to be sensitive to both the lexical processing
(Montgomery et al. 1990, Stark and Montgomery 1995, Montgomery 2000, 2005)
and the processing of bound grammatical morphemes by children with SLI and
typically developing children (Montgomery and Leonard 1998).
Results of word-recognition RT studies to date indicate that children with SLI
are slower to process spoken language, as evidenced by slower processing/
recognition of sentence-embedded words compared with age peers (Stark and
Montgomery 1995, Montgomery and Leonard 1998, Montgomery 2000, 2002a) and
younger, syntax-matched children (Montgomery 2000, 2002a). Relative to control
children, children with SLI are slower to begin to generate a sentence meaning (as
indicated by slower word-recognition RT at the beginning and middle of sentences)
but by the end of a sentence show comparable processing speed (Montgomery et al.
1990, Montgomery 2000). Only the studies by Stark and Montgomery (1995) and
Montgomery (2002a) were designed with the aim of addressing the potential locus
of children’s slower language processing. The results of Stark and Montgomery show
that the processing of acoustic–phonetic content (i.e. temporal and spectral cues)
interacts in complex ways with semantic and syntactic processes. These researchers
examined the effects of time compression and low-pass filtering on the lexical
processing of children with SLI and an age-matched group of children. Relative to
both a filtered condition and an unaltered condition (i.e. sentences that were not
manipulated by low-pass filtering or time compression), all the children showed
faster word-recognition RT in a time-compression condition. By contrast, compared
with the time-compression sentences, all the children demonstrated slower word-
recognition RT in the filtered condition. These investigators suggested that reduced
access to spectral cues, not the temporal character of the input, leads to greater
difficulty in lexical processing for both children with SLI and control children. Also,
regardless of condition, the SLI group demonstrated slower RT than their age peers.
Overall, the results were interpreted to suggest that the slower lexical processing of
the children with SLI was likely attributable to inefficient linguistic processing, not
to acoustic–phonetic processing difficulties. Supporting this interpretation were data
revealing that the children with SLI and the CA children demonstrated comparable
word-recognition RT for words appearing randomly in different word lists. This
condition was not the focus of the study. But the results are interesting and pertinent
to us here because they suggest that in the absence of linguistic context children
with SLI and CA children demonstrate comparable lexical processing.
In a follow-up study, Montgomery (2002a) manipulated the phonetic content of
the sentences children monitored as a way of presumably disrupting the acoustic–
phonetic mapping component of lexical processing. Children listened to sentences
containing a high proportion of stop consonants (i.e. sentences that presumably
should require greater acoustic–phonetic processing, especially temporal processing) and
sentences containing a high proportion of non-stop consonants. It was reasoned
that children with SLI would demonstrate poorer processing of the stop-loaded
sentences because of the characteristic temporal processing deficit many of these
children exhibit (Tallal et al. 1985a, b). The children with SLI should show signi-
ficantly slower word-recognition RT for stop-loaded sentences because of a disrup-
tion to the initial acoustic–phonetic mapping stage of word recognition given the pre-
sumed greater temporal processing requirements of these sentences. Results showed
that both the children with SLI and the control children were unaffected by the mani-
pulation, i.e. the children with SLI and control children showed no RT difference
between the stop- and non-stop-loaded sentences. The children with SLI, however,
were significantly slower at lexical processing overall relative to control children. The
results were interpreted to suggest that the slower language processing of the children
with SLI was not due to poor acoustic–phonetic processing but rather to inefficient
linguistic processing. Together, the results of studies to date have been taken to sug-
gest that the slower language processing of children with SLI arises not from inferior
sensory processing of lexical material but from inefficient linguistic processing. This
claim, however, requires additional and more direct experimental verification.

Aim
The aim was to examine more directly whether the real-time language processing
limitation of children with SLI is attributable to inefficient acoustic–phonetic
processing, slower higher-order linguistic processing, or to both poor sensory and
linguistic processing. A seemingly direct and straightforward way to differentiate
among these possibilities is to compare these children’s lexical processing under
conditions that should invite similar acoustic–phonetic mapping but which differ in
linguistic processing requirements. If these children have difficulty with acoustic–
phonetic mapping, then one would expect them to be slower to process familiar
words when they appear randomly in a word list, a condition that requires no
higher-order linguistic processing. This isolated lexical processing task represents a
replication and extension of one of the conditions in the Stark and Montgomery
(1995) study because we include (1) a greater number of word lists and trials, thereby
yielding a considerably larger/more stable RT data set on which to make group
comparisons; and (2) a group of younger, language-matched children (see the
Methods for the rationale for including this group). By contrast, if their inferior
language processing primarily arises from slower linguistic processing, one would
expect them to demonstrate, relative to control children, comparable isolated word
recognition but slower recognition of words embedded in meaningful sentences.
Finally, if their inferior language processing is the result of a combination of sensory
and linguistic processing limitations they should perform more poorly on both
lexical processing tasks relative to control children.

Methods
Participants
Recruited for this study were 16 children with SLI (mean age = 8;7 [years;months]), 16
chronologically age-matched (CA) children (mean age = 8;4), and 16 younger typically
developing children matched for receptive syntax knowledge (RS; mean age = 6;6).
All the children with SLI and control children demonstrated normal-range non-
verbal IQ (85–120) on the Test of Nonverbal Intelligence (TONI; Brown et al. 1990),
normal range hearing sensitivity as determined by audiometric puretone screening at
20 dBHL (American National Standards Institute 1989), and no oral structural or
motor impairments affecting speech or non-speech movements of the articulators
(Robbins and Klee 1987). None of the children were judged to have any difficulty
sustaining different vowels or rapidly producing various syllables in isolation or in
combination (e.g. [ma], [ba], [ta], [ka]). They also demonstrated normal or corrected
vision and articulation abilities that fell at or above the 67th percentile on the
Goldman–Fristoe Test of Articulation (Goldman and Fristoe 1986). Also, all the
children were judged to produce intelligible conversational speech, demonstrating
few, if any, phonological problems. The groups did not differ in non-verbal IQ
(F(2, 45) = 0.76, p = 0.47). Children had no history of frank neurological impairment or
psychological/emotional disturbance or attention deficit disorder (from parent
report). For each child, the number of years of education attained by the mother was
also obtained. No significant differences emerged across groups with respect to the
number of mothers who attained a college education (χ²(2) = 1.69, p = 0.48).

Language criteria
The children with SLI scored more than 1.2 SD below the mean on at least two of
the three receptive sub-tests and more than 1.2 SD below the mean on two of three
expressive sub-tests on the Clinical Evaluation of Language Fundamentals — Revised
(CELF-R; Semel et al. 1987). In addition, they attained an overall receptive language
score and an overall expressive language score falling more than 1.2 SD below the
mean on the CELF-R. They also performed more than 1 SD below the mean on
the Test for Reception of Grammar (TROG; Bishop 1989). The CA and RS children
performed at or above −1 SD from the mean on the same language measures.
Children’s single word receptive vocabulary knowledge was also assessed using the
Peabody Picture Vocabulary Test — Revised (PPVT-R; Dunn and Dunn 1981),
although no performance criterion was set to be eligible for the study.
Each child with SLI was matched with a same-gender CA child within ±3 months of
age and with a same-gender RS child on the number of blocks passed on the TROG.
A significant group difference was found for the number of blocks passed
(F(2, 45) = 42.44, p = 0.00), with the CA children passing more blocks than both RS
and SLI groups (Tukey HSD, p < 0.05), who did not differ from each other. The inclusion of an RS
group is important because they will allow us (1) to determine whether children with
SLI are comparable with younger children in lexical processing independent of
linguistic processing (as might be predicted by the findings of Stark and
Montgomery 1995); and (2) to replicate previous findings showing that children
with SLI are slower to process spoken language than younger, language-matched
children. Such a pattern of results would be very informative regarding the nature of
the slower language processing of children with SLI.
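The pairwise matching scheme described above can be sketched as follows. The dictionary keys and example records are invented, and the TROG match is treated as exact equality of blocks passed for simplicity; the study itself only required statistically comparable group performance.

```python
def match_controls(sli_child, ca_pool, rs_pool):
    """Sketch of the matching scheme (field names are hypothetical):
    a same-gender CA child within 3 months of age, and a same-gender
    RS child passing the same number of TROG blocks."""
    ca = next(c for c in ca_pool
              if c["gender"] == sli_child["gender"]
              and abs(c["age_months"] - sli_child["age_months"]) <= 3)
    rs = next(c for c in rs_pool
              if c["gender"] == sli_child["gender"]
              and c["trog_blocks"] == sli_child["trog_blocks"])
    return ca, rs

# Hypothetical example records:
sli = {"gender": "M", "age_months": 103, "trog_blocks": 12}
ca, rs = match_controls(
    sli,
    ca_pool=[{"gender": "M", "age_months": 101, "trog_blocks": 17}],
    rs_pool=[{"gender": "M", "age_months": 79, "trog_blocks": 12}],
)
```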

Isolated lexical processing task


To assess children’s lexical processing separate from any linguistic processing
demands, an isolated word-recognition RT task was created in which children were
asked to monitor word lists for the occurrence of a specific target word. Children’s
timed responses to the targets thus derived from just an acoustic–phonetic analysis of
the signal.

Table 1. Chronological age (months) and score on each of the language measures and the IQ
test for each subject group

                          CELF-R   CELF-R
Group   Age (mos)  PPVT   RLS      ELS      TROG        IQ
SLI
 Mean   103.6      86.5   77.8     70.9     12.3 (82)   103.5
 SD     12.8       15.7   4.9      7.1      1.5         10.9
 Range  82–125     61–106 70–82    62–81    9–15        89–120
CA
 Mean   100.3      111.2  110.1    106.0    17.1 (106)  107.1
 SD     11.9       9.3    9.8      8.0      1.6         6.4
 Range  85–125     95–128 89–122   95–118   13–19       95–120
RS
 Mean   78.7       106.4  100.5    99.6     12.1 (96)   106.0
 SD     9.4        8.7    6.9      8.0      1.5         7.3
 Range  66–96      92–124 91–115   88–118   9–15        94–116

PPVT, Peabody Picture Vocabulary Test-R standard score; RLS, Receptive Language Score on the
CELF-R; ELS, Expressive Language Score on the CELF-R; TROG, number of blocks passed on the
Test for Reception of Grammar (numbers in parentheses are a standard score); IQ, intelligence
quotient on the Test of Nonverbal Intelligence (TONI).

Similar to Stark and Montgomery (1995), the task included 12 highly
familiar monosyllabic consonant–vowel–consonant (CVC) nouns (Moe et al. 1982).
We restricted this investigation to the study of monosyllabic nouns for two reasons.
First, they provided a means of replicating the findings of our earlier work (Stark
and Montgomery 1995). Second, they offered greater experimental control with
respect to phonological and prosodic processing demands (i.e. no demand to process
phonologically complex words or bisyllabic/polysyllabic items varying in stress
pattern), an important consideration given that the study of language processing in
children with SLI is in its infancy. For phonetic variety, six of the targets began and
ended with a stop consonant (i.e. book, boat, cup, cake, duck, pig) and six with a non-
stop consonant (i.e. cheese, fish, knife, juice, man, sun). Twelve word lists were
created, each corresponding to one target noun. Each list included 100 words. The
target noun appeared randomly 20 times, with the constraint that it could not
appear twice consecutively. Each list contained nine to ten foil words that were also
highly familiar monosyllabic CVC stop or non-stop nouns. While each foil in a list
started with a stop or non-stop consonant (depending on the phonetic character of the target word),
no foil contained the same initial phoneme as the target, thereby circumventing acous-
tically based false alarms. Foils were repeated approximately equally often in a list.

Target word list generation procedures


Each target and foil word was read aloud in list fashion by an adult male native speaker
of American English. High-quality recordings of all the words were made in a sound-
treated booth. Before waveform editing, each recorded word was low-pass filtered
(4.5 kHz) and digitized (10 kHz) and stored on disk. Each digitized audio waveform
was edited interactively using the ASYST software package (1993) to identify the
acoustic onsets and offsets of each target word. The edited stimuli were stored on
disk. The twelve 100-word list files were created using ASYST. A 1-s interstimulus
interval separated each word in a list. A 500-Hz timing pulse was synchronized to
begin at the onset of each word in the list. The timing tone, inaudible to the subject,
was used during the experiment to trigger an external clock that was used to capture
subjects’ word-recognition RT. Each target word list was played out (10 kHz) and low-
pass filtered (4.5 kHz) to the subject. Two counterbalanced orders of the word lists
were created and presented across equal numbers of children within each group.
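The timing arrangement described above — a word-onset-locked 500 Hz pulse and a fixed 1 s inter-stimulus interval — can be sketched as follows. The 50 ms pulse duration is an assumed value (the paper does not state one), and the word durations in the usage example are invented.

```python
import math

SAMPLE_RATE = 10_000  # Hz; the digitization rate reported above

def timing_pulse(duration_s=0.05, freq_hz=500.0):
    """A 500 Hz timing tone of the kind synchronized to each word onset
    (50 ms duration is an assumption, not a value from the paper)."""
    n = int(duration_s * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def word_onsets(word_durations_s, isi_s=1.0):
    """Onset time (s) of each word in a list separated by the 1 s
    inter-stimulus interval; the clock trigger fires at each onset."""
    onsets, t = [], 0.0
    for d in word_durations_s:
        onsets.append(t)
        t += d + isi_s
    return onsets

onsets = word_onsets([0.40, 0.35, 0.38])  # hypothetical word durations
```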

Sentence-embedded lexical processing task


To evaluate the role of sentential context on children’s lexical processing children
completed a conventional word-monitoring task in which they listened to sentences
for the occurrence of a target word. The sentences used in this study were the same
ones used in a prior study (Montgomery 2002a). The task comprised 84
experimental sentence pairs. All the sentence pairs were syntactically simple
constructions that contained a similar range of vocabulary appropriate for 5–6-year-
olds’ comprehension and production abilities (i.e. Miller 1981, Moe et al. 1982).
Example stimulus sentences, with the target word underlined include, ‘Doug got
hurt today. He fell off his bike and cut both his hands and legs’; ‘Ted likes eating
weird foods. Yesterday morning he ate cheese on his breakfast cereal’. The rationale
for using familiar syntactic structures was to control for potential group differences
in syntax knowledge, thereby isolating potential linguistic processing differences
between the children with SLI and the control children. The first sentence in each
pair ranged from four to 12 words and served as a topic sentence. The second
sentence ranged from 11 to 14 words. As in the isolated word-recognition task, the
target words in the sentence task included 42 highly familiar monosyllabic CVC
words, 12 of which were the same words used in the isolated lexical processing task
(e.g. dad, duck, bike, fish, knife, juice). Relative to the mean duration of the isolated
words (mean = 368 ms, SD = 47 ms), the mean duration of the sentence-embedded
words (mean = 259 ms, SD = 78 ms) was significantly shorter (t(83) = −3.98, p < 0.01).
Each of the 42 target nouns appeared twice across the 84 sentences but in a
different word position. Target words appeared in the fifth, seventh or tenth word
position of the second sentence. Given that, relative to control children, children
with SLI are slower at lexical processing in the early and middle parts of a sentence
but comparable by the end of a sentence (Montgomery et al. 1990, Montgomery
2000), we did not treat word position as an independent variable in the present
study. Varying word position, however, did prevent children from guessing the
location of the target. All words preceding the target were acoustically dissimilar to
the target thereby preventing acoustically based false alarms. Twenty-four catch trials
were constructed in which a target word (also highly familiar monosyllabic CVC
nouns) did not appear in the second sentence. Catch trials were included to identify
children with an impulsive response style.
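The trial constraints above can be expressed as a small validity check: in an experimental trial the target occupies the fifth, seventh or tenth word of the second sentence, and in a catch trial it must not appear at all. The function name is a hypothetical helper; the example sentence is taken from the text.

```python
def valid_trial(second_sentence, target, position=None):
    """Sketch of the trial constraints described above: experimental trials
    place the target at word position 5, 7 or 10 of the second sentence;
    catch trials (position=None) contain no occurrence of the target."""
    words = second_sentence.lower().split()
    if position is None:                      # catch trial
        return target not in words
    return (position in (5, 7, 10)
            and len(words) >= position
            and words[position - 1] == target)

# The example sentence from the text, with "bike" as the fifth word:
ok = valid_trial("He fell off his bike and cut both his hands and legs", "bike", 5)
```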

Target word and sentence generation procedures


A list of individual target words (targets corresponding to the words appearing in the
sentences) and the stimulus sentences were read aloud by an adult male native
speaker of American English. High-quality recordings of all materials were made in a
sound-treated booth. Individual target words were read in list fashion. The
sentences were read at a normal rate (about 4.4 syllables/s; Ellis Weismer and
Hesketh 1996) and with normal prosodic variation. As was the case with the isolated
target words, before waveform editing each recorded sentence was low-pass filtered
(4.5 kHz) and digitized (10 kHz) and stored on disk. Each digitized audio waveform
was edited interactively using the ASYST software package to identify the acoustic
onsets and offsets of each sentence-embedded target word. The edited stimuli were
stored on disk and later played out (10 kHz) and low-pass filtered (4.5 kHz) from a
PC laboratory computer to the subject. A 500 Hz timing pulse was synchronized to
begin at sentence-one onset and was used to gate on the external clock. Each
sentence’s corresponding isolated target word also was low-pass filtered, digitized
and stored on disk. Each target was later played out and low-pass filtered to the
subject 1 s before its corresponding sentence. Table 2 presents descriptive data about
the target words. Two counterbalanced orders of the isolated lexical processing and
sentence-embedded lexical processing tasks were created and presented across equal
numbers of children in each group.

General procedures
Each child was tested individually in an acoustically isolated booth over three testing
sessions. Each session lasted 60–75 min, with two or three short rest breaks.

Table 2. Accuracies (collapsing missed responses and false alarms) of the children with SLI,
CA children and RS children in each experimental task

                        Isolated Word   Sentence-Embedded
Group   Tone Detection  Recognition     Word Recognition
SLI
 Mean   97              97              96
 SD     1.8             1.7             2.1
 Range  92–100          93–100          92–98
CA
 Mean   98              98              97
 SD     1.3             1.6             1.3
 Range  96–100          95–100          94–98
RS
 Mean   97              96              96
 SD     1.4             1.5             1.5
 Range  93–100          94–100          92–97

Accuracies are expressed as percentages.

A trained graduate student (second experimenter) sat with the child in the testing
booth while the experimenter (J. M.) sat in an adjoining control room delivering the
stimuli. All word lists and sentence stimuli were delivered to the child binaurally via
headphones at a comfortable listening level.
For each task, four live-voice practice items and 12 computer practice items
preceded the experimental items. Children were told to listen to some lists of words
or ‘short stories’ and to push a response pad as quickly as they could as soon as they
heard the target word in the list or the ‘story.’ Again, both response accuracy and
speed were stressed to the children in the instructions and practice trials, as well as
during the experimental task. Before stimulus presentation, children were shown a
picture of the target word by the second experimenter and heard the target word
presented in isolation via the computer.
Throughout testing, children were provided constant encouragement and praise
and, as needed, reminders to ‘stay focused.’ These forms of non-specific feedback
were intended to maximize both motivation and RT performance. Children’s
responses (pad press) stopped the clock. Responses occurring before target word
onset (i.e. negative RT) were scored as false alarms and failures to respond were
scored as misses. RT, which was calculated by a custom-written program as the
difference between target word onset and the time the child pressed the response
pad, was automatically stored on the computer.
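The scoring rule described above can be sketched as a small function: RT is the press time minus target-word onset, a press before onset counts as a false alarm (negative RT), and no press counts as a miss. The function name is a hypothetical helper, not the custom-written program used in the study.

```python
def score_response(target_onset_ms, press_ms):
    """Sketch of the scoring rule described above: RT = press time minus
    target onset; negative RT is a false alarm, no press is a miss."""
    if press_ms is None:
        return "miss", None
    rt = press_ms - target_onset_ms
    if rt < 0:
        return "false_alarm", rt
    return "hit", rt
```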

Auditory detection reaction time task


A simple auditory detection RT task was also included to serve as an index of
motor response and basic auditory reception/sensation time independent of
linguistic processing, given that some children with SLI show slower RT than age
controls on various tasks having little to do with language (e.g. Hughes and
Sussman 1983, Nichols et al. 1995). Children were instructed to press the response
pad as quickly as possible immediately upon hearing an imperative stimulus (1 s,
2 kHz pure tone) following a warning tone (500 ms, 500 Hz tone). As in the word-
recognition tasks, both response accuracy and speed were stressed to the child.
Language processing in children with SLI 285

Following five live-voice practice trials (i.e. examiner producing a ‘low beep’
followed by a ‘high beep’ and demonstrating the button press on the ‘high beep’)
and 15 computer-delivered practice trials, 36 experimental RT trials (inter-stimulus
interval between warning and test tones varying randomly between 1.5 and 3 s)
were presented. Children received constant encouragement and praise, as well as
necessary reminders ‘to stay alert/focused’ during the task. For each child a mean
auditory detection RT was calculated. This task also provided children with practice and familiarity with the word-recognition RT tasks and served to enhance intra-subject RT stability. This task always occurred before the word-recognition tasks. Each RT was automatically calculated by the computer as the difference
between the onset of the imperative stimulus and when the child pushed the
response pad.

Data analysis
Before the analysis of each RT task, all outlying responses were eliminated from each child's data set (e.g. Kail 1991). For each task, an outlier was defined as any RT falling ±2 SD from a child's mean RT. In the word-recognition
task, outliers were identified in each rate condition according to word position. For
any child with missing RTs (i.e. false alarms, no responses, outliers), an appropriate
mean RT was calculated and then inserted into his/her data set and a new mean RT
was then derived (Fazio 1990) in order to yield a complete data set for each child for
each task.
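The trimming and imputation steps can be sketched as follows. Note that the study identified outliers within each rate condition and word position, a detail this illustrative stdlib sketch collapses:

```python
from statistics import mean, stdev

def trim_and_impute(rts):
    """Remove outliers (RTs beyond +/-2 SD of the child's mean) and fill
    them, along with missing trials (None: false alarms, no-responses),
    with the mean of the remaining valid RTs; return the completed data
    set and its new mean (cf. Kail 1991; Fazio 1990)."""
    valid = [r for r in rts if r is not None]
    m, s = mean(valid), stdev(valid)

    def keep(r):
        # a response is kept if it exists and lies within +/-2 SD
        return r is not None and abs(r - m) <= 2 * s

    fill = mean([r for r in valid if keep(r)])   # imputed replacement value
    completed = [r if keep(r) else fill for r in rts]
    return completed, mean(completed)
```

Applied to a hypothetical child's RTs containing one extreme response and one missed trial, both slots are replaced by the trimmed mean, yielding a complete data set.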

Results
Auditory detection RT task
Fewer than 3% of each group's responses were eliminated as outliers. Pooling across missed responses and false alarms, no group difference emerged with respect to overall hit rate (F(2, 45) = 1.91, p = 0.16). The children with SLI achieved a hit rate of 97%, the CA children a hit rate of 98%, and the RS children 97%. The groups' accuracy data are presented in table 2. The groups also yielded comparable mean RT (F(2, 45) = 1.47, p = 0.29). Because the groups were comparable in simple
RT, this variable was not used as a covariate in the following analyses. Table 3
displays subject groups’ RT data.
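The group comparisons reported throughout this section (e.g. F(2, 45) = 1.47) are one-way ANOVAs. For readers who want the computation behind the reported F statistics, a minimal standard-library sketch follows; the data in the usage example are made up for illustration, not the study's RTs:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over independent groups:
    between-groups mean square divided by within-groups mean square."""
    grand = mean(x for g in groups for x in g)
    n = sum(len(g) for g in groups)
    k = len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, (df_between, df_within)
```

With three groups of 16 children each, the degrees of freedom come out as (2, 45), matching the values reported in the text.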

Isolated lexical processing task


Each group was highly accurate, with fewer than 2% of responses eliminated as outliers. As indicated by table 2, the groups also yielded comparable overall accuracies (collapsing missed responses and false alarms) (F(2, 45) = 1.15, p = 0.99). The RT data displayed
in table 3 suggest that the children with SLI and the CA children yielded comparable
word-recognition RT and that the RS children were slower than both of these
groups. Results of a one-way ANOVA supported this impression by revealing a
significant group effect (F(2, 45) = 5.14, p = 0.01). Post-hoc Tukey HSD analysis (p < 0.05) indeed indicated that there was no difference between the SLI and CA groups and that the RS children were significantly slower than both the SLI and CA
groups.

Table 3. Mean reaction time (RT, ms) for the children with SLI, CA children and RS children in each experimental task

         Tone detection   Isolated word          Sentence-embedded word
         RT (ms)          recognition RT (ms)    recognition RT (ms)
SLI
  Mean   396.7            622.5                  412.9
  SD     132.1            109.4                   76.9
  Range  238–600          455–854                189–540
CA
  Mean   330.1            594.9                  301.4
  SD      78.9             95.5                   44.9
  Range  185–472          407–719                239–396
RS
  Mean   372.9            709.5                  358.3
  SD      99.5            111.1                   54.2
  Range  263–572          515–932                287–507

Inspection of individual subject data revealed that 13 of the 16 children with
SLI followed this pattern, with two SLI children appearing to show ‘comparable’
lexical processing to their RS matches and one who was faster than his RS match. In
addition, 14 of the CA children and 13 of the RS children followed the overall
ANOVA response pattern.

Sentence-embedded lexical processing task


Similar to the isolated word-recognition condition, each group was highly accurate, with fewer than 3% of responses eliminated as outliers. In addition, no significant differences emerged with respect to overall accuracies (collapsing missed responses and false alarms) (F(2, 45) = 0.76, p = 0.99). The groups also did not differ with respect to the number of responses to catch trials (F(2, 45) = 1.33, p = 0.88). As suggested by the
data presented in table 3, the groups yielded significantly different word-recognition
RT, with the children with SLI producing the slowest RT. This impression was
confirmed by the results of a one-way ANOVA, which indicated a significant group
effect (F(2, 45) = 13.74, p = 0.00). Post-hoc Tukey HSD analysis (p < 0.05) revealed
the CA children were faster than both the RS children and the children with SLI,
and the RS children were significantly faster than the children with SLI. Again,
inspection of individual subject data revealed that the majority (14/16) of the
children with SLI appeared to follow this pattern, with one SLI child showing
‘faster’ lexical processing than his CA match and one child showing ‘comparable’
processing to her RS match. Similarly, 15 of the CA children and 14 of the RS
children followed the overall ANOVA pattern.
To illuminate the beneficial effect of context on children’s word-recognition RT,
each group’s isolated word-recognition RT was compared informally with word-
recognition RT in the sentence-embedded condition relative to each condition’s
word duration. As clearly indicated in table 3, compared with the isolated word-
recognition task, each group responded considerably more quickly to sentence-
embedded targets. On average, the children with SLI responded 153 ms after word
offset for sentence-embedded words versus 254 ms for isolated words, CA children
42 versus 277 ms, and RS children 99 versus 342 ms. It is especially noteworthy that
despite the fact that context facilitated lexical processing in the children with SLI,
these children were significantly slower at lexical processing than the control
children, even the RS children who were slower at isolated lexical processing than
the children with SLI.
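The offset-relative figures above are obtained by simple subtraction: the onset-locked RT minus the word's duration gives the latency after word offset. A sketch using the SLI group's means from table 3 follows; the word durations are back-computed from the reported offset latencies and are purely illustrative, since the study's actual stimulus durations are not reproduced here:

```python
def latency_after_offset(onset_rt_ms, word_duration_ms):
    """Re-express an RT clocked from word ONSET as time after word OFFSET."""
    return onset_rt_ms - word_duration_ms

# SLI group means (table 3); durations back-computed for illustration only.
isolated = latency_after_offset(622.5, 368.5)   # about 254 ms after offset
embedded = latency_after_offset(412.9, 259.9)   # about 153 ms after offset
```

The same subtraction, applied per group, underlies the informal comparison reported in the text.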

Discussion
This study addressed the issue of whether the real-time language processing
impairment of children with SLI is related to slower acoustic–phonetic processing, slower higher-order linguistic processing, or difficulty with both processes.
Children with SLI and two control groups of children, one matched on age, the
other on receptive syntax knowledge, participated in two lexical-processing
tasks varying in linguistic processing requirements, an isolated word-recognition
RT task (involving no linguistic processing) and a sentence-embedded word-
recognition RT task. The results were clear. Children with SLI are just as fast to
recognize highly familiar CVC words as their CA and language-matched counterparts so long as there are no linguistic processing requirements. However, when
similar highly familiar words occur in sentences, children with SLI are significantly
slower at lexical processing than both their CA and younger, language-matched
peers.
The isolated word-recognition RT task was designed to evaluate children’s lexical
processing separate from linguistic processing. Because this task did not invite
children to use any linguistic operations (e.g. retrieving semantic–syntactic properties of words, constructing a sentence meaning, integrating the target word into a
developing sentence interpretation), children’s word recognition derived solely from
an acoustic–phonetic/phonological analysis of the target words. The results were
clear in showing that the children with SLI were comparable with their age-matched
peers in isolated lexical processing. These results are consistent with those reported
by Stark and Montgomery (1995). The children with SLI even proved faster than
their language-matched cohorts. Importantly, inspection of the response patterns of individual children with SLI revealed that the large majority of these children showed RT comparable to that of the CA children and faster RT than the RS children.
Thus, it cannot be argued that the good isolated lexical processing of the children
with SLI was attributable to just a few fast processors. Viewed within the context of
the revised cohort model, the children with SLI had no trouble processing the
acoustic–phonetic (phonological) characteristics of incoming lexical material. They
appeared to have at least comparable abilities with their typically developing peers in
reliably and accurately mapping the acoustic–phonetic characteristics of incoming
words onto lexical representations stored in the long-term lexicon, at least for highly
familiar monosyllabic words.
Relative to the word-list task, children's word-recognition RT in the sentence-embedded task was clearly faster, as indicated by an informal comparison of RT between the two tasks. This was the case even for the children with SLI. These
findings also agree well with the findings of Stark and Montgomery (1995). Such
results show that lexical processing is facilitated by sentential information, indicating
that the acoustic–phonetic processing of lexical input interacts with a listener’s
linguistic knowledge (Pisoni and Luce 1987).
Even though children with SLI demonstrate faster lexical processing in context,
they are still significantly slower processors than CA and RS children, findings that accord well with those of several other studies (Stark and Montgomery 1995, Montgomery and Leonard 1998, Montgomery 2000, 2002a). Again, inspection of
individual SLI subject data revealed that the large majority of these children were
slower processors than both the CA and RS children. The slower processing of
children with SLI cannot be attributed to inferior basic auditory sensory reception
and/or motor execution. If this were the case, these children should have been
slower in the simple tone detection task, which was not the case. Nor can their
slowness be attributed to inferior acoustic–phonetic processing, as indicated by the
preceding discussion. In the context of the present experimental design, the slower
processing of the children with SLI is likely attributable to less efficient linguistic
processing. During sentence processing the children not only must map the acoustic
signal onto stored lexical representations (i.e. sensory processing) but also invoke a
variety of higher-order linguistic processes, including retrieval of the semantic–
syntactic features associated with each input word, construction of an ongoing sentence-level representation of the input, and integration of each incoming word into the developing sentence representation. It thus seems reasonable to argue that, relative to typically developing children, children with SLI are less efficient at deploying and/or completing one or more of these linguistic operations in real time.
Because these processes presumably operate simultaneously, are temporally bound,
and require some finite amount of attentional resources to perform, these children’s
slower language processing can profitably be viewed as yet another example of their
limited speed of processing capacity (Miller et al. 2001, Montgomery 2002a, b, 2005,
Windsor and Hwang 1999), in particular slower linguistic processing speed. As
has been argued elsewhere (Montgomery 2000, 2002a, Montgomery and Leonard
1998), children with SLI appear to have difficulty allocating sufficient attentional
resources to a variety of linguistic processing operations that occur virtually simultaneously.
In summary, results of the present study indicate that children with SLI are no
slower than their CA peers when processing highly familiar monosyllabic words
when there are no demands for linguistic processing. By contrast, children with SLI
are significantly slower than both CA and younger, language-matched peers at lexical
processing when words appear in sentences. These findings thus suggest that the
slower language processing of these children primarily relates to inefficient higher-
order linguistic processing as opposed to inferior acoustic–phonetic (phonological)
processing. In fact, a clear pattern now appears to be emerging across a number of
studies. First, regardless of how lexical material is presented (e.g. word lists versus
sentences; Stark and Montgomery 1995, present study) or manipulated (i.e. input
rate; Stark and Montgomery 1995, Montgomery 2005; phonetic content,
Montgomery 2002a), children with SLI and typically developing children respond
similarly to the presentations and manipulations. That is, children with SLI do not
show a unique response pattern to the different presentations or manipulations
designed to disrupt the acoustic–phonetic processing of lexical input. Results of an
offline gating experiment (Montgomery 1999) provide converging support for the
developing hypothesis that inefficient linguistic processing is the primary trouble
spot underlying the slower language processing of children with SLI. Montgomery
(1999) showed that children with SLI required no more acoustic–phonetic information to reliably recognize familiar phonologically simple/monosyllabic
words than typically developing children. Second, and especially striking, is that
children with SLI are slower to process spoken language relative to their younger,
language-matched cohorts. These findings suggest that the language-processing
apparatus of children with SLI operates at an even slower rate than what might be
predicted by their level of linguistic competence (Montgomery 2005). Even for
sentence material (i.e. simple grammatical constructions) that should be well within their linguistic grasp, children with SLI are slow to construct a sentence-level
representation.
Finally, it is important to point out that the interpretation being advanced here
must be considered tentative for now. Studies to date, including the present one,
have only examined these children’s processing of phonologically simple (CVC)
monosyllabic words. It is possible that children with SLI are less efficient at
processing longer, bisyllabic and polysyllabic, words that include variable stress
patterns, i.e. words containing stressed and unstressed syllables, either in isolation
or in running speech. Indeed, these children have been shown to have difficulty
processing unstressed syllables in running speech, although this difficulty has been
examined in the context of processing bound grammatical morphemes
(Montgomery and Leonard 1998). In addition, because the sentence-embedded
words in the present study were of shorter duration than the words in
the isolated condition, it is possible that the children with SLI demonstrated
slower lexical processing in sentences, at least in part, because the words were
shorter in overall duration. It might be that reduced temporal and/or spectral
information of lexical input hinders these children’s processing of connected
speech (e.g. Stark and Montgomery 1995). It is also possible that these children
were not able to take advantage of the sub-phonetic cues associated with the
immediately preceding word (Lahiri and Marslen-Wilson 1990). Perhaps these
children are less sensitive to the potentially facilitating effects of coarticulation
during sentence processing. The present study did not assess this possibility.
It thus will be important for future studies investigating the interaction of
acoustic–phonetic and linguistic processing in children with SLI to incorporate
longer, phonologically and prosodically more complex words to advance our
understanding of the nature and locus of these children’s slower language
processing.

Clinical implications
Recent findings suggest that the language processing of children with SLI can be
facilitated by a slower input rate (Montgomery 2005). The locus of this effect,
whether it occurs at the level of linguistic processing or at the acoustic–phonetic
level (Merzenich et al. 1996), is unclear. However, given the apparent converging
evidence of a linguistic processing impairment in these children, including the
present findings, the effect may well reside at the linguistic level. Regardless of the
locus of the effect, it is encouraging that something as simple as manipulating rate of
speech seems to have a beneficial effect on the real-time language processing of
these children. A slower input rate evidently offers these children additional time to
allocate sufficient attentional resources to complete the various and concurrent
sensory and linguistic processing operations subserving real-time language
processing.
Acknowledgements
The study was supported by a research grant (DC 02535) from the National
Institute on Deafness and Other Communication Disorders. The author expresses his gratitude to the children and their parents who participated in the study.

References
AMERICAN NATIONAL STANDARDS INSTITUTE, 1989, Specifications of Audiometers. ANSI S3.6-1989 (New York:
ANSI).
ASYST, 1993, ASYST. Software (Cleveland, OH: Keithley Instruments, Inc.).
BISHOP, D. V., 1989, Test for Reception of Grammar, 2nd edn (Manchester: Department of Psychology,
University of Manchester).
BROWN, L., SHERBENOU, R. and JOHNSEN, S., 1990, Test of Nonverbal Intelligence, 2nd edn (Austin, TX: Pro-
Ed).
DUNN, L. and DUNN, L., 1981, Peabody Picture Vocabulary Test — Revised (Circle Pines, MN: American
Guidance Service).
ELLIS WEISMER, S. and HESKETH, L., 1996, Lexical learning by children with specific language impairment:
effects of linguistic input presented at varying speaking rates. Journal of Speech and Hearing Research,
39, 177–190.
FRAUENFELDER, U. and TYLER, L., 1987, The process of spoken word recognition: an introduction.
Cognition, 25, 1–20.
GASKELL, M. and MARSLEN-WILSON, W., 1997, Integrating form and meaning: a distributed model of
speech perception. In G. T. M. Altman (ed.), Cognitive Models of Speech Processing: Psycholinguistic and
Computational Perspectives on the Lexicon (Hove: Psychology Press).
GOLDMAN, R. and FRISTOE, M., 1986, Goldman–Fristoe Test of Articulation (Circle Pines, MN: American
Guidance Service).
HUGHES, M. and SUSSMAN, H., 1983, An assessment of cerebral dominance in language-disordered
children via a time-sharing paradigm. Brain and Language, 19, 48–64.
KAIL, R., 1991, Processing time declines exponentially during adolescence. Developmental Psychology, 27,
259–266.
LAHIRI, A. and MARSLEN-WILSON, W., 1990, The mental representation of lexical form: a phonological
approach to the recognition lexicon. Cognition, 38, 245–294.
MARSLEN-WILSON, W. and TYLER, L., 1980, The temporal structure of spoken language understanding.
Cognition, 8, 1–71.
MARSLEN-WILSON, W. and WELSH, A., 1978, Processing interactions and lexical access during word
recognition in continuous speech. Cognitive Psychology, 10, 29–63.
MARSLEN-WILSON, W. and ZWITSERLOOD, P., 1989, Accessing spoken words: the importance of word
onsets. Journal of Experimental Psychology: Human Perception and Performance, 15, 576–585.
MCQUEEN, J., CUTLER, A., BRISCOE, T. and NORRIS, D., 1995, Models of continuous speech recognition and
the contents of the vocabulary. Language and Cognitive Processes, 10, 309–331.
MERZENICH, M., JENKINS, W., JOHNSTON, P., SCHREINER, C., MILLER, S. and TALLAL, P., 1996, Temporal
processing deficits of language-learning impaired children ameliorated by training. Science, 271,
77–84.
MILLER, C., KAIL, R., LEONARD, L. and TOMBLIN, J. B., 2001, Speed of processing in children with specific
language impairment. Journal of Speech, Language, and Hearing Research, 44, 416–433.
MILLER, J., 1981, Assessing Language Production in Children (Baltimore, MD: University Park Press).
MOE, A., HOPKINS, C. and RUSH, R., 1982, The Vocabulary of First-grade Children (Springfield, IL: Thomas).
MONTGOMERY, J., 1999, Recognition of gated words by children with specific language impairment: an
examination of lexical mapping. Journal of Speech, Language, and Hearing Research, 42, 735–743.
MONTGOMERY, J., 2000, Relation of working memory to off-line and real-time sentence processing in
children with specific language impairment. Applied Psycholinguistics, 21, 117–148.
MONTGOMERY, J., 2002a, Examining the nature of lexical processing in children with specific language
impairment: temporal processing or processing capacity deficit? Applied Psycholinguistics, 23,
447–470.
MONTGOMERY, J., 2002b, Information processing and language comprehension in children with specific
language impairments. Topics in Language Disorders: Information processing: Implications for Assessment
and Intervention, 22, 64–87.
MONTGOMERY, J., 2005, Effects of input rate and age on the real-time lexical processing of children with
specific language impairment. International Journal of Language and Communication Disorders, 40,
171–188.
MONTGOMERY, J. and LEONARD, L., 1998, Real-time inflectional processing by children with specific
language impairment: effects of phonetic substance. Journal of Speech, Language, and Hearing
Research, 41, 1432–1443.
MONTGOMERY, J., SCUDDER, R. and MOORE, C., 1990, Language-impaired children’s real-time comprehen-
sion of spoken language. Applied Psycholinguistics, 11, 273–290.
NICHOLS, S., TOWNSEND, J. and WULFECK, B., 1995, Covert Visual Attention in Language-impaired Children.
Technical Report CND-9502 (San Diego, CA: Center for Research in Language, University of
California).
PISONI, D. and LUCE, P., 1987, Acoustic–phonetic representations in word recognition. Cognition, 25,
21–52.
PITT, M. and SAMUEL, A., 1995, Lexical and sublexical feedback in auditory word recognition. Cognitive
Psychology, 29, 149–188.
ROBBINS, J. and KLEE, T., 1987, Clinical assessment of oropharyngeal motor development in young
children. Journal of Speech and Hearing Disorders, 52, 271–277.
SEMEL, E., WIIG, E. and SECORD, W., 1987, Clinical Evaluation of Language Fundamentals — Revised (San
Antonio, TX: Psychological Corporation).
STARK, R. and MONTGOMERY, J., 1995, Sentence processing in language-impaired children under
conditions of filtering and time compression. Applied Psycholinguistics, 16, 137–154.
TALLAL, P., STARK, R. and MELLITS, D., 1985a, The relationship between auditory temporal analysis and
receptive language development: evidence from studies of developmental language disorder.
Neuropsychologia, 23, 527–534.
TALLAL, P., STARK, R. and MELLITS, D., 1985b, Identification of language-impaired children on the basis of
rapid perception and production skills. Brain and Language, 25, 314–322.
TYLER, L., 1992, Spoken Language Comprehension: An Experimental Approach to Disordered and Normal
Processing (Cambridge, MA: MIT Press).
WARREN, P. and MARSLEN-WILSON, W., 1988, Cues to lexical choice: discriminating place and voice.
Perception and Psychophysics, 43, 21–30.
WINDSOR, J. and HWANG, M., 1999, Children’s auditory lexical decisions: a limited processing capacity
account of language impairment. Journal of Speech, Language and Hearing Research, 42, 990–1002.
WINGFIELD, A., GOODGLASS, H. and LINDFIELD, K., 1997, Word recognition from acoustic onsets and
acoustic offsets: effects of cohort size and syllabic stress. Applied Psycholinguistics, 18, 85–100.
