
It Still Isn’t Over: Event Boundaries in Language and Perception

Evie Malaia
Center for Mind, Brain, and Education, University of Texas at Arlington

Language and Linguistics Compass 8/3 (2014): 89–98, 10.1111/lnc3.12071

Abstract

The interaction between language and perception networks in the brain can hold the key to the biological bases of language evolution. In language, every sentence is built around a verb, which describes an event. During perception, humans constantly and automatically segment reality into individualized event (verb) units. The cognitive connection between the perceptual segmentation of events and their grammatical expression in language is a novel direction for research into the neurobiology of language, but it may be a key example of the emergence of linguistic structure – grammatical category – from individualized meaning. This article summarizes the current state of this research, and is divided into three parts. The initial overview of cross-linguistic typology of events shows that events and their boundaries are of vital importance to linguistic communication. We then summarize what is known about perceiving and identifying event boundaries in the most well-studied domain, that of visual perception. Here, sign languages provide the link between visual perception and linguistic expression, showing a complex mapping from perceptual and cognitive event segmentation to linguistic structures at the phonology– syntax interface. The last section reviews current evidence for neural bases of event processing, showing that identification of an event boundary (linguistic or perceptual) is used for memory updates, and suggesting the possible role of an event-type universal in the syntactic structure of human languages.

Overview

Since about 2000, event perception has been considered an independent scientific problem in perceptual psychology, leading to the development of processing models such as Event Segmentation Theory (EST; Zacks, Speer, Swallow, Braver, and Reynolds 2007; see also early seminal work, e.g. Newtson 1973). Perceptual event segmentation models attempt to tie together visual processing of ambient surroundings and retrieval of schemas – abstract representations of situation models – from semantic memory. Recent research in the psycholinguistics of sign language processing has helped elucidate the relationship between linguistic verb types, motion processing, and event schema retrieval.
This article summarizes the body of research that led to our current understanding of event processing in language and cognition, and considers the wider implications of these findings for our understanding of linguistic processing and language development.

As an exercise, think back to your last major trip. The things you think about might include ‘deciding to go, getting tickets, driving/flying, attending an event, returning home’. In other words, you will have described a large chunk of your past as if it consisted of individual, discrete pieces (‘getting tickets’, ‘driving’, etc.), with each discrete activity having a perceived internal structure. Speakers of most known languages describe their reality as a collection of separate, structured events (typically described by verbs), since our mental accounting of continuous reality consists of individuated bursts of processing: we pay attention to individual pieces of our surroundings, and then use those for further mental processing – memorization, planning, ordering, and narrating. Human discourse is built around such descriptions: sentences center on verbs or verb-like units, each of which can be analyzed in terms of how it represents events.

There is a striking similarity in the representation of basic event types – event calculus – across different language families (see Folli and Harley 2006; Van Valin 2006, for reviews). Such cross-linguistic universals are important for identifying links between language, cognition, and perception, so we will consider them in some detail. Verbs describing events that contain a defined time-point of transition from an onset state or toward an end-state (such as drop, marry, or discover) are termed telic. Envisioning any of those events, one can easily decide on the instant of change that defines the event itself. Verbs that describe events not tied to a specific transition in time (sleep, run, and wait) are termed atelic. The basic dichotomy between events with a built-in transition time-point (telic) and those without one (atelic) has been found in most of the world’s languages. Some linguistic systems stop at the basic dichotomy between the presence and absence of a reference time-point in the verb; others (such as Russian and Bengali) make a further distinction between changes that occur at the onset and at the end of an event (Malaia 2004; Basu and Wilbur 2010). Various languages rely on different linguistic sub-systems to convey the meaning of telicity, which identifies event boundaries. Some (such as American Sign Language and Japanese) use phonological distinctions (Wilbur 2003; Fujimori 2012), while others (such as Indonesian or Russian) rely on morphology (Son and Cole 2008; Malaia and Basu 2013). Many Germanic languages have grammaticalized the distinction between telic and atelic verb types in the choice between haben and sein in the formation of perfective aspect for specific verbs (van Hout 2001), or in the use of determiners and quantification of the object (Ogiela et al. 2013).

The minimal units of event comprehension found in most of the world’s languages relate closely to the current understanding of how the brain processes reality: in individuated, temporally bounded chunks. Consequently, the question of how those chunks are selected becomes important for linguistic analysis.

What Cues do People Use for Event Segmentation?

The most common and most widely studied avenue through which humans perceive the events around them is, of course, visual. Since we know that humans parse the visual world into distinct events, it is natural to ask: what visual information does one use to decide when one event ends and another begins? The original formulation of the question was complicated, because initially the level of individual variability in event segmentation was not clear. Although people may use the same verb to describe an event, that does not necessarily mean that they concur on the boundaries of that event. Thus, the visual cues denoting perceptual event boundaries, and individual variability in their use, had to be studied first.
Breakthrough findings came from a participant-driven paradigm in neuroimaging studies (Zacks et al. 2001; Zacks et al. 2009), in which participants were shown videos of human movement that had also been motion-captured, and were asked to make their own decisions about boundaries between the events they were seeing. Both experiments used a post-hoc analysis approach, relying on the boundaries reported by each participant individually to trace the neural activations and the kinematic features of the stimuli that triggered segmentation decisions. Behavioral data in these studies indicated that participants, in fact, did not differ significantly in their segmentation decisions. Fine-grained segmentation (responses to the task of breaking the video down into short events, less than 3 seconds in duration) showed especially high temporal correlation among participants (Zacks et al. 2001). Because there was so much agreement among participants about the short events they were seeing, it was possible to identify the intervals within which most participants observed a boundary between events. This allowed further investigation into the visual cues behind the segmentation decisions.

The analysis of motion-capture data from the stimulus videos (Zacks et al. 2009) showed that, among the analyzed kinematic features (which included position, velocity, and acceleration of limb and head motions in absolute coordinates, as well as with respect to each other), it was the velocity and acceleration of individual body parts that correlated robustly with participants’ event segmentation decisions. Most participants identified time intervals with high limb acceleration or deceleration and high motion velocity as event boundaries. In general, it appeared, humans rely on visual cues of velocity and acceleration to separate events in their surroundings, and do so with remarkable similarity.
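To make the kinematic analysis concrete, the sketch below illustrates the kind of feature extraction these findings imply: compute speed and acceleration from a motion-capture trajectory and flag sharp acceleration/deceleration peaks as candidate event boundaries. It is a minimal illustration, not the published analysis pipeline; the sampling rate, data layout, and prominence threshold are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def kinematic_boundary_candidates(positions, fps=60.0, prominence=2.0):
    """Flag frames with sharp acceleration peaks as candidate event boundaries.

    positions  : (n_frames, 3) array, e.g. wrist coordinates in meters (assumed layout)
    fps        : sampling rate of the motion-capture recording (assumed)
    prominence : peak-prominence threshold in z-score units (assumed)
    """
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)   # per-coordinate velocity, m/s
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed profile
    accel = np.abs(np.gradient(speed, dt))          # magnitude of speed change, m/s^2

    # Standardize so one prominence threshold covers recordings of different scale.
    z_accel = (accel - accel.mean()) / accel.std()

    # Frames with pronounced acceleration/deceleration peaks mirror the kinematic
    # maxima that participants' segmentation decisions were found to track.
    peaks, _ = find_peaks(z_accel, prominence=prominence)
    return peaks * dt                               # candidate boundary times, seconds

# Toy example: a reach that stops abruptly around t = 1 s yields one candidate boundary.
t = np.linspace(0.0, 2.0, 120)
trajectory = np.stack([np.minimum(t, 1.0), np.zeros_like(t), np.zeros_like(t)], axis=1)
print(kinematic_boundary_candidates(trajectory))
```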
In addition to the investigation of event segmentation in coarse, everyday motion, the relationship between perceptual event boundaries and their expression in language can be analyzed with more fidelity through the study of sign languages. Sign languages use the visual modality for both perception and expression, employing hand movements of high complexity (Petitto et al. 2001; Malaia, Borneman, and Wilbur, submitted). The visual abilities of signers differ significantly from those of non-signers owing to lifelong experience with sign language input: the periphery of signers’ visual field is more sensitive to motion, and the attended visual field is larger and better regulated by selective attention (Bavelier, Dye, and Hauser 2006; Bosworth, Bartlett, and Dobkins 2006). An investigation of visual perception using moving point-light ‘writing’ of pseudo-hieroglyphs (Klima et al. 1999) found a remarkable difference between how signers and non-signers process visual motion. Non-signers, regardless of their familiarity with hieroglyphic writing systems, could not reproduce the pseudo-hieroglyphic targets: their perception was simply that of a light on the screen, moving somewhat chaotically. Signers looking at the same moving point-light were able to identify the underlying targets – discrete structural elements of the stimuli – based solely on differences in motion: for them, the transitions between structural elements looked different from the elements themselves. American Sign Language and Chinese Sign Language signers performed similarly in this task. Clearly, sign language experience gave signers high sensitivity to some parameters of biological motion. But which parameters were signers using – and how could one begin to look for similarities in motion across unrelated sign languages? Understanding the kinematic features that helped signers identify structural components of motion could be a breakthrough in understanding how hominids learned to make sense of their surroundings, and which perceptual abilities formed the foundation of evolutionary language development.

Based on theoretical work in sign language phonology1 and investigations of verb types, Wilbur (2003) put forth the Event Visibility Hypothesis (EVH), suggesting that telic and atelic types of signed verbs are distinguished by different kinematic profiles. Subsequent experimental studies both confirmed the hypothesis and raised new questions with regard to the kinematic cues used for event segmentation. For example, motion-capture analyses of verb production in American Sign Language and Croatian Sign Language demonstrated that both languages mark the time-referenced (telic) verbs, which specifically denote a boundary (onset or offset) in the event. The markers included higher peak speed of hand motion during the sign, higher peak deceleration of the signing hand toward the final location, and a shorter time between peak speed and the end of the sign (Malaia and Wilbur 2012a, 2012b, 2012c). The first two marker types, reliant on limb velocity and acceleration, replicated the perceptual-motion cues observed for everyday event segmentation by Zacks et al. (2009).
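These markers lend themselves to straightforward per-sign feature extraction. The sketch below computes the three quantities named above (peak speed, peak deceleration after the speed maximum, and the interval from peak speed to sign offset) from a dominant-hand trajectory; the function name, data format, and sampling rate are illustrative assumptions, not the procedure used in the cited studies.

```python
import numpy as np

def sign_telicity_features(hand_positions, fps=100.0):
    """Kinematic features reported to separate telic from atelic verb signs.

    hand_positions : (n_frames, 3) dominant-hand trajectory over a single sign
                     (assumed data layout); fps is the assumed sampling rate.
    Returns peak speed, the largest deceleration after the speed maximum, and the
    time from peak speed to the end of the sign; telic signs are reported to show
    higher peak speed, sharper final deceleration, and a shorter peak-to-offset lag.
    """
    dt = 1.0 / fps
    speed = np.linalg.norm(np.gradient(hand_positions, dt, axis=0), axis=1)
    accel = np.gradient(speed, dt)

    i_peak = int(np.argmax(speed))
    peak_speed = float(speed[i_peak])
    # Deceleration is negative acceleration; take its largest magnitude after the peak.
    peak_deceleration = float(max(0.0, -accel[i_peak:].min()))
    peak_to_offset = (len(speed) - 1 - i_peak) * dt

    return {"peak_speed_m_s": peak_speed,
            "peak_deceleration_m_s2": peak_deceleration,
            "peak_to_offset_s": peak_to_offset}
```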
An fMRI study of sign perception in signers and non-signers (Malaia et al. 2012) led to further interesting findings. First, both signers’ and non-signers’ brains showed activation of the region responsible for processing biological motion, the cortical visual motion area MT+. The region was less focally active in non-signers (with a more extensive area of activation and a lower peak), suggesting that while signers processed the motion features that marked the differences in the stimuli, non-signers at least attempted to process the same features. Second, the brain network differentially active during processing of telic vs. atelic ASL signs in Deaf signers included the posterior cingulate/precuneus – the region best known for processing episodic memory or, more generally, schemas of past, familiar events. Telic verb signs (those referencing a specific time-point of state change, and marked with higher acceleration in American Sign Language) appeared to trigger access to long-term memory. This was the first finding to suggest a model of interaction among the components of the processing model in Event Segmentation Theory: features of motion, memory access, and language processing.

Neural Bases of Event Boundary Processing

Now that we understand how events are segmented and identified in perception, the logical question is: why are event types so important for language? Or, in other words, what is the benefit of continuously keeping track of event boundaries? Recently observed interactions among the neural components of perceptual and linguistic networks suggest that event boundaries trigger a memory reference.

Once it was established that individuals rely on similar perceptual cues for event boundaries, an analysis of linguistic event boundaries – those realized by means of verbs in spoken languages – could commence. It turned out that visual (perceptual) and linguistic (conceptual) event segmentations were entirely congruent. As Magliano et al. (2012) demonstrated, segmentation of visual (movie) and text-based narratives yields strikingly similar results with regard to the identification of transition points and the number of events. Even more remarkably, both visual event segmentation, based on predictive processing, and event segmentation in a linguistic narrative activated the neural regions linked to retrieval of event schemata from semantic memory. Yarkoni, Speer, and Zacks (2008) showed that neural activation during reading of a coherent narrative, as compared to reading of unrelated sentences, involved higher activation of a posterior parietal network, including the posterior cingulate/precuneus (BA 23/31). Strikingly, activation in a network of regions including BA 23/31 was predictive of participants’ subsequent recognition memory for story-level events in a multiple-choice sentence recognition test. This finding suggested that mental updating of situation models for separate events is important for retaining the content of a narrative.

The relevance of timing for memory updates with regard to event content is supported by the finding that an individual’s strategy for understanding complex sentences varies with the reader’s verbal working memory capacity (Malaia, Wilbur, and Weber-Fox 2009). This EEG study showed that participants with high working memory capacity use a more syntax-focused strategy for sentence processing, which leads to earlier matching of event schemas with incoming linguistic information than in participants with average working memory capacity. Participants with above-average working memory capacity try to understand complex sentences (those containing multiple clauses and describing multiple events, such as The toddler carried by the mother waved her hands excitedly) as soon as some information about the event becomes available. So the beginning of the sentence, The toddler carried…, would trigger retrieval of an event schema and prompt the expectation of a noun, such as “rattle” or “toy” (the subsequent violation of this expectation is then evident in the EEG). The group using this strategy may understand sentences faster as they read them, but they also end up doing the “extra” work of retrieving event schemas they do not use. Participants with memory capacity at or below average adopt a more ‘economic’ strategy of trying to figure out ‘who did what’ by waiting until all arguments of the event are known. Their event schema retrieval avoids event model revision in so-called ‘garden-path2’ conditions.
A recent meta-analysis of fMRI data investigating the role of memory in sentence processing has shown that functional connectivity between Brodmann areas 45/44 in the inferior frontal gyrus and the posterior cingulate cortex (PCC), as well as between the inferior parietal cortex and the PCC, is significantly modulated by working memory capacity (Newman, Malaia, Seo, and Hu 2013). In other words, activation of brain regions responsible for syntactic and semantic processing co-occurs with declarative memory referencing more often in participants with higher working memory capacity. The higher activation of the PCC observed during processing of telic as compared to atelic verb signs in the study mentioned in the previous section (Malaia et al. 2012) also supported the link between the perceptual features of a signed verb and its role in memory management. The activations seen in the contrasts between telic and atelic verb signs indicated links between perceptual processing (for example, after observation of the high deceleration of a signing hand, marking an event boundary), linguistic processing (in the case of American Sign Language, phonological processing of syllable type), and retrieval of event schemas from semantic memory. The neural activation observed during verb processing in ASL thus combines motion processing, seen in studies of visual event segmentation, with the linguistic processing and memory referencing observed in language processing research (Figure 1).

Figure 1. The model of neural processing in sign language: visual (motion) input (such as area MT+) is routed for linguistic analysis to the superior temporal gyrus (STG), and triggers a memory reference in the posterior cingulate cortex (PCC).

Further details on the mechanism of event-triggered memory referencing are emerging from perceptual studies of bottom-up and top-down regulation of attention and memory3. Huff, Papenmeier, and Zacks (2012) have shown that event boundaries draw attention at the expense of other stimuli. In their experiment, participants watched video clips of soccer games, during which an attention probe would appear on the screen at various times. Visual detection of probes appearing on screen at the times when the ball changed possession (identified, in a separate study, as event boundaries) was less accurate than probe detection at other time points. These results showed that bottom-up (visual feature-dependent) regulation of attention over time is correlated with segmentation of ongoing activity into events. A neuroimaging study by Schubotz et al. (2012) further showed that motion features also trigger top-down modulation of attentional focus. In this study, participants viewed everyday actions (such as folding laundry) and tai chi movements (which could not be processed in a top-down manner based on action knowledge). Behaviorally, motion features triggered boundary detection in all conditions. However, only motion boundaries in everyday, predictable actions elicited activation of the superior frontal sulcus, parietal angular gyrus, and parahippocampal cortex, indicative of top-down modulation of attention and retrieval of long-term memory associations. Another neuroimaging study (Swallow et al. 2010) showed that memory updating is also closely linked to event segmentation. In this study, participants watched a movie and occasionally answered questions about objects that had appeared in it.
The questions always followed 5 seconds after object presentation; in half the cases, an event boundary occurred between object presentation and the question. fMRI data indicated that when the event changed during the delay between object presentation and question, the episodic memory system, including the hippocampus, showed higher activation. In other words, event boundaries marked updates of active memory for the current event, making it more difficult to retrieve objects from past events. The latter feature of event segmentation – its relation to the control processes of memory updating – also appears to hold true for linguistic presentation of events (Kurby and Zacks 2012). This was a behavioral investigation of memory updating during the reading of a narrative. The participants were asked to type what they were thinking, and then to segment the narrative into separate meaningful events. The typed responses were coded for the components of the event, or situational dimensions (characters, objects, space, time, etc.), mentioned as they were typed. It turned out that readers mentioned situational dimensions when those dimensions changed, or at event boundaries. These results showed that memory updates during reading are also closely related to perceived event boundaries.

Let us now recapitulate the general, stimulus-independent model of event processing, encompassing both perceptual and linguistic evidence. Generalized event schemas (or templates) are stored in declarative memory (specifically, semantic memory) as abstract patterns to be matched. These overlap with the distributed meaning-generating semantic network, in areas including the ventral precuneus and posterior cingulate cortex. During processing of incoming information, the brain continually activates this network, matching the meaning-based incoming pattern to a remembered schema of an event. This serves the purpose of consolidating detailed information about an event into a single reference to an event schema, freeing up working memory for further information processing. The neural evidence of event processing confirms the function of events as building blocks of an individual’s reality. Sensory information is continuously gathered and matched against known patterns representing past events in order to monitor the surroundings for discontinuities, which signal that mental representations of ‘what is going on’ must be updated. While sensory features characteristic of event changes drive bottom-up attention, additional top-down attentional regulation ensures that perceived events are used for memory updates.
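To summarize the logic of the model (rather than any published implementation), the toy sketch below caricatures the processing loop described above: incoming feature vectors are matched against an active event schema, and when the mismatch exceeds a threshold, a boundary is declared, a better-fitting schema is retrieved from the stored set standing in for semantic memory, and the working event model is updated. All names, the feature representation, and the threshold are illustrative assumptions.

```python
import numpy as np

def segment_stream(features, schemas, mismatch_threshold=1.0):
    """Toy event-segmentation loop: match input to schemas, update at boundaries.

    features : list of feature vectors describing the ongoing situation
    schemas  : dict mapping schema names to prototype feature vectors
    Returns the frames at which a boundary was declared and the schema selected
    for each successive event. This illustrates the model's logic only; it is
    not an implementation from any of the studies reviewed here.
    """
    def nearest_schema(x):
        name = min(schemas, key=lambda k: np.linalg.norm(x - schemas[k]))
        return name, schemas[name]

    boundaries, history = [], []
    current_name, working_model = nearest_schema(np.asarray(features[0]))
    history.append(current_name)

    for i, x in enumerate(np.asarray(features)[1:], start=1):
        error = np.linalg.norm(x - working_model)   # prediction error vs. active event model
        if error > mismatch_threshold:              # discontinuity = perceived event boundary
            boundaries.append(i)
            current_name, working_model = nearest_schema(x)  # retrieve a new schema
            history.append(current_name)            # memory update for the new event
    return boundaries, history

# Example: a 'walking' stretch followed by a 'sitting' stretch triggers one boundary.
schemas = {"walking": np.array([1.0, 0.0]), "sitting": np.array([0.0, 1.0])}
stream = [np.array([1.0, 0.1])] * 5 + [np.array([0.1, 1.0])] * 5
print(segment_stream(stream, schemas))
```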
Event Perception, Description, and Cognition as a New Research Avenue

The hypothesis that sensory and linguistic systems jointly create one’s neurological ‘reality’ dates back to the work of the Russian neurophysiologist and Nobel Prize winner Ivan Pavlov, who termed the neural processing streams of the two inputs the first (sensory) and second (linguistic) signal systems (Pavlov 1928). To date, the relevance of event-based accounts for our understanding of perception and language use has been clearly established by work in multiple domains. Recent neuroimaging evidence demonstrates that humans use overlapping brain networks to process visual and linguistic cues for event segmentation and schema recognition.

Motion processing in sign language also appears intricately tied to linguistic processing. For example, the motion of the dominant hand in signs generates features used as grammatical markers of telic event structure in American Sign Language and in the unrelated Croatian Sign Language (Malaia, Wilbur, and Milković 2013; Milković and Malaia 2010). Interestingly, the two sign languages appear to use slightly different derivative features of motion: American Sign Language utilizes deceleration and/or the overall slope of deceleration from peak velocity to the end of the sign, while Croatian Sign Language uses peak velocity to distinguish telic verbs from atelic ones. This difference likely derives from systemic differences between the two linguistic systems: in American Sign Language, for example, peak velocity of the dominant hand is used to mark phonological stress, and therefore cannot be employed to mark semantic verb classes (telic vs. atelic). However, in both languages the kinematic marker of verbal telicity is robust to suprasegmental features, such as the phonological end-of-sentence lengthening common to sign and spoken languages – the perceptual cue serves as a resilient anchor for both sign language perception and production. Further quantitative and comparative investigations of multiple sign language systems are likely to yield a better picture of the features of visible motion that can be processed by humans and used to extract and construct meaning.

The role of linguistic telicity, then, and a possible reason for its status as a linguistic universal, is the expression of conceptual event boundaries during communication. Just as a visually observed event boundary triggers neural referencing to a suitable event schema during perception (‘the first signal system’, in Pavlovian terminology), the linguistic expression of an event boundary does the same for communication (or ‘the second signal system’). Neuroimaging evidence for the direct use of motion-based visual cues in linguistic sign language processing emphasizes that linguistic event structure and the perceptual segmentation of events converge in a single cognitive step – the referencing of declarative memory. Two important consequences of an event-based understanding of cognition deserve explicit mention.

HUMAN MEMORY APPEARS TO BE EVENT-BASED

As boundaries between meaningful events serve as key moments for action comprehension, attention regulation in time corresponds with the segmentation of activity into events (Huff, Papenmeier, and Zacks 2012). First, event-focused attention leads to impaired visual detection of ambient (non-task-related) motion.
Further, the presence of informative motion features (such as velocity and acceleration of movement) is conveyed to the prefrontal cortex, which controls the attentional focus in a top-down manner (Schubotz et al. 2012). Finally, the segmentation of ongoing activity into events is a control process that regulates when memory for events can be updated (Swallow et al. 2010; Kurby and Zacks 2012).

LANGUAGE MODULATES EVENT PERCEPTION AND MEMORY

Readers treat temporal changes in a narrative (or discourse) as event boundaries, using them as a means of controlling the contents of memory (Speer and Zacks 2005)4. EEG and behavioral data also show that the construction of situation models during narrative and sentence comprehension can be influenced by verb features – telicity, aspect, and tense (Malaia et al. 2008; Ferretti et al. 2009; Yap et al. 2009). On the other hand, individual processing resources – such as working memory capacity and the richness of semantic memory – also contribute to linguistic processing, notably affecting comprehension, that is, the binding of incoming linguistic information with an event schema from semantic memory (Malaia et al. 2009).

Event-based accounts of perception and language processing raise multiple questions about the evolutionary forces that shaped human cognition, as well as about the malleability of event-based processing in modern humans. Sign language research has shown that velocity and acceleration – the perceptual features that non-signers use for everyday event segmentation – can be recruited by a linguistic system to express semantic event types, or even grammatical distinctions between verbs. What is remarkable about these studies is that, although American and Croatian sign languages both use the same perceptual features, they build them into the linguistic system in different ways. This raises questions about how individual perceptual and linguistic systems interact during language acquisition. For example, to what extent can language shape event perception? Does perceptual segmentation of events ‘scaffold’ language acquisition? How do bilingual individuals reconcile linguistically disparate methods of event boundary signaling (telicity expression) across languages? As such questions open avenues for the development and testing of event processing models, the exploration of what event-based processing means for our understanding of language and human cognition is only beginning.

Short Biography

Evie Malaia investigates the effect of linguistic experience on other neural systems, such as visual processing, memory, and executive control networks. Her research employs a combination of motion capture, electrophysiology, and neuroimaging to investigate the neurobiological bases of language processing. She earned her PhD at Purdue University, working on sign language comprehension, and subsequently trained at the Cognitive Neuroimaging Laboratory at Indiana University, conducting investigations of network-level characterizations of the linguistic system. She is currently an Assistant Professor at the University of Texas at Arlington Southwest Center for Mind, Brain, and Education.
Notes

* Correspondence address: Evie Malaia, Southwest Center for Mind, Brain, and Education, Department of Curriculum and Instruction, Suite 416, Hammond Hall, 701 Planetarium Place, University of Texas at Arlington, Arlington, TX 76010, USA. E-mail: malaia@uta.edu

1 Sign language phonology investigates the minimal elements of signs that can change their meaning – handshape, hand orientation, hand placement – as well as other linguistically meaningful kinematic parameters of signing, such as signing space, hand velocity, and sign duration.

2 ‘Garden-path’ sentences are grammatically correct sentences which, when presented without the accompanying intonation (i.e., in reading), are likely to be initially parsed in a way that does not allow a meaningful interpretation, e.g. ‘The old man the boat’ (cf. ‘The old man the boat carried finally stepped ashore’).

3 With regard to event segmentation, attention can be defined as up-regulation, or increased activation, of the neural networks involved in each stage of event processing, from perceptual segmentation to retrieval from memory.

4 Storytellers use the same device to capture attention: ‘A long time ago, in a galaxy far, far away…’

Works Cited

Basu, Debarchana, and Ronnie Wilbur. 2010. Complex predicates in Bangla: an event-based analysis. Rice Working Papers in Linguistics 2. 1–19.
Bavelier, Daphne, Dye, Matthew W., and Peter C. Hauser. 2006. Do deaf individuals see better? Trends in Cognitive Sciences 10(11). 512–518.
Bosworth, Rain G., Bartlett, Marian Stewart, and Karen R. Dobkins. 2006. Image statistics of American Sign Language: comparison with faces and natural scenes. Journal of the Optical Society of America A 23(9). 2085–2096.
Ferretti, Todd R., Rohde, Hannah, Kehler, Andrew, and Melanie Crutchley. 2009. Verb aspect, event structure, and coreferential processing. Journal of Memory and Language 61(2). 191–205.
Folli, Raffaella, and Heidi Harley. 2006. What language says about the psychology of events. Trends in Cognitive Sciences 10(3). 91–92.
Fujimori, Atsushi. 2012. The association of sound with meaning. Towards a Biolinguistic Understanding of Grammar: Essays on Interfaces, ed. by A. M. Di Sciullo, 141–166. Amsterdam: John Benjamins Publishing Company.
Huff, Markus, Papenmeier, Frank, and Jeffrey M. Zacks. 2012. Visual target detection is impaired at event boundaries. Visual Cognition 20(7). 848–864.
Klima, Edward S., Tzeng, Ovid J. L., Fok, A., Bellugi, Ursula, Corina, David, and Jeffrey G. Bettger. 1999. From sign to script: effects of linguistic experience on perceptual categorization. Journal of Chinese Linguistics 13. 96–129.
Kurby, Christopher A., and Jeffrey M. Zacks. 2012. Starting from scratch and building brick by brick in comprehension. Memory & Cognition 40(5). 812–826.
Magliano, Joseph, Kopp, Kristopher, McNerney, M. Windy, Radvansky, Gabriel A., and Jeffrey M. Zacks. 2012. Aging and perceived event structure as a function of modality. Aging, Neuropsychology, and Cognition 19(1–2). 264–282.
Malaia, Evie. 2004. Event structure and telicity in Russian: an event-based analysis of the telicity puzzle in Slavic languages. Ohio State University Working Papers in Slavic Studies 4. 87–98.
Malaia, Evie, and Debarchana Basu. 2013. Verb-verb predicates in Bangla and Russian: morpho-semantic event structure analysis. NINJAL International Conference on V-V Complexes in Asian Languages, Tokyo, Japan.
Malaia, Evie, and Ronnie B. Wilbur. 2012a. Motion capture signatures of telic and atelic events in ASL predicates. Language and Speech 55(3). 407–421.
——. 2012b. Telicity expression in visual modality. Telicity, change, and state: a cross-categorial view of event structure, ed. by L. McNally and V. Demonte, 122–138. Oxford: Oxford University Press.
——. 2012c. What sign languages show: neurobiological bases of visual phonology. Towards a biolinguistic understanding of grammar: essays on interfaces, ed. by A. M. Di Sciullo, 265–275. Amsterdam: John Benjamins Publishing Company.
Malaia, Evie, Borneman, Joshua D., and Ronnie B. Wilbur. (submitted). Bioinformatic properties of sign language motion indicated by fractal complexity.
Malaia, Evie, Ranaweera, Ruwan, Wilbur, Ronnie B., and Thomas M. Talavage. 2012. Event segmentation in a visual language: neural bases of processing American Sign Language predicates. Neuroimage 59(4). 4094–4101.
Malaia, Evie, Wilbur, Ronnie B., and Marina Milković. 2013. Kinematic parameters of signed verbs. Journal of Speech, Language and Hearing Research 56(5). 1677–1688.
Malaia, Evie, Wilbur, Ronnie B., and Christina Weber-Fox. 2009. ERP evidence for telicity effects on syntactic processing in garden-path sentences. Brain and Language 108(3). 145–158.
Malaia, Evie, Wilbur, Ronnie B., and Christina Weber-Fox. 2012. Down the garden path in EEG: telicity effects on thematic role re-assignment in relative clauses with transitive verbs. Journal of Psycholinguistic Research 41(5). 323–345.
Milković, Marina, and Evie Malaia. 2010. Event visibility in Croatian Sign Language: separating Aspect and Aktionsart. Poster, Theoretical Issues in Sign Language Research 10, Purdue University, IN, USA.
Newman, Sharlene D., Malaia, Evie, Seo, Roy, and Cheng Hu. 2013. The effect of individual differences in working memory capacity on sentence comprehension: an fMRI study. Brain Topography 26(3). 458–467.
Newtson, Darren. 1973. Attribution and the unit of perception of ongoing behavior. Journal of Personality and Social Psychology 28(1). 28–38.
Ogiela, D. A., Schmitt, C., and M. W. Casby. 2013. Interpretation of verb phrase telicity: sensitivity to verb-type and determiner-type. Journal of Speech, Language and Hearing Research.
Pavlov, I. P. 1928. Lectures on conditioned reflexes. Trans. and ed. by W. H. Gantt. New York: International Publishers Co.
Petitto, Laura Ann, Holowka, Siobhan, Sergio, Lauren E., and David Ostry. 2001. Language rhythms in baby hand movements. Nature 413(6851). 35–36.
Schubotz, Ricarda I., Korb, Franziska M., Schiffer, Anne-Marike, Stadler, Waltraud, and D. Yves von Cramon. 2012. The fraction of an action is more than a movement: neural signatures of event segmentation in fMRI. Neuroimage 61(4). 1195–1205.
Son, Ming-Jeong, and Peter Cole. 2008. An event-based account of -kan constructions in Standard Indonesian. Language 84(1). 120–160.
Speer, Nicole K., and Jeffrey M. Zacks. 2005. Temporal changes as event boundaries: processing and memory consequences of narrative time shifts. Journal of Memory and Language 53. 125–140.
Swallow, Khena M., Barch, Deanna M., Head, Denise, Maley, Corey J., Holder, Derek, and Jeffrey M. Zacks. 2010. Changes in events alter how people remember recent information. Journal of Cognitive Neuroscience 23. 1052–1064.
van Hout, Angelique. 2001. Event semantics at the lexicon-syntax interface. Events as grammatical objects, ed. by C. Tenny and J. Pustejovsky, 239–282. CSLI Publications.
Van Valin, Robert D., Jr. 2006. Some universals of verb semantics. Linguistic universals, ed. by R. Mairal and J. Gil, 155–178.
Wilbur, Ronnie B. 2003. Representations of telicity in ASL. Proceedings from the Annual Meeting of the Chicago Linguistic Society 39(1). 354–368. Chicago Linguistic Society.
Yap, Foong Ha, Chu, Patrick Chun Kau, Yiu, Emily Sze Man, Wong, Stella Fay, Kwan, Stella Wing Man, Matthews, Stephen, Tan, Li Hai, Li, Ping, and Yasuhiro Shirai. 2009. Aspectual asymmetries in the mental representation of events: role of lexical and grammatical aspect. Memory & Cognition 37(5). 587–595.
Yarkoni, Tal, Speer, Nicole K., and Jeffrey M. Zacks. 2008. Neural substrates of narrative comprehension and memory. Neuroimage 41. 1408–1425.
Zacks, Jeffrey M., Braver, Todd S., Sheridan, Margaret A., Donaldson, David I., Snyder, Abraham Z., Ollinger, John M., Buckner, Randy L., and Marcus E. Raichle. 2001. Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience 4. 651–655.
Zacks, Jeffrey M., Kumar, Shawn, Abrams, Richard A., and Ritesh Mehta. 2009. Using movement and intentions to understand human activity. Cognition 112. 201–216.
Zacks, Jeffrey M., Speer, Nicole K., Swallow, Khena M., Braver, Todd S., and Jeremy R. Reynolds. 2007. Event perception: a mind/brain perspective. Psychological Bulletin 133. 273–293.