Papers by Evguenia Malaia
Language, Cognition and Neuroscience, Sep 22, 2019
The preference of the human parser for interpreting syntactically ambiguous sentence-initial arguments as the subject of a clause (i.e. subject preference) has been documented for spoken and sign languages. Recent research (He, 2016) suggests that the subject preference can be eliminated by manipulating information structure (topicalisation). To investigate the effects of interaction between syntax and information structure on language processing, we tested the role of topic marking in sentence processing in Austrian Sign Language (ÖGS). We examined whether non-manual topic marking on the sentence-initial argument eliminates the subject preference using event-related brain potentials. We replicated the finding of the subject preference in ÖGS by identifying an N400-family response to object-first sentences. Further, topic marking in ÖGS influenced the processing of the topic argument itself and later processing stages. This suggests that interpretation of topic marking imposes additional processing costs, relative to syntactic reanalysis.
Wiley Interdisciplinary Reviews: Cognitive Science, Sep 10, 2019
To understand human language, whether spoken or signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4-5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language.
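As a toy illustration of the syllable-rate claim, a signal envelope modulated at ~4.5 Hz shows a spectral peak in the 4-5 Hz band that an entrainment analysis would lock onto. This is a minimal sketch with invented numbers, not data or code from the review:

```python
import numpy as np
from scipy.signal import welch

fs = 100                      # envelope sampling rate (Hz), chosen for illustration
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)

# Hypothetical amplitude envelope: syllable-rate (4.5 Hz) modulation plus noise
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4.5 * t) + 0.2 * rng.standard_normal(t.size)

# Power spectral density of the envelope; the syllable rate appears as a peak
freqs, psd = welch(envelope, fs=fs, nperseg=1024)
mask = freqs >= 1.0           # ignore DC / very-low-frequency components
peak_freq = freqs[mask][np.argmax(psd[mask])]
print(f"envelope modulation peak: {peak_freq:.1f} Hz")
```

The same spectral check applies to an acoustic envelope or to a motion envelope extracted from signing video, which is what makes the comparison modality-independent.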
Wiley Interdisciplinary Reviews: Cognitive Science, Nov 12, 2018
Brain Research, Jul 1, 2018
Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that the subject-first preference is universal and not dependent on language modality (spoken vs. signed). Deaf signers of Austrian Sign Language (ÖGS) were shown videos of locally ambiguous signed sentences in SOV and OSV word orders. Electroencephalogram (EEG) data indicated higher cognitive load in response to OSV stimuli (i.e. a negativity for OSV compared to SOV), indicative of syntactic re-analysis cost. A finding that is specific to the visual modality is that the ERP (event-related potential) effect reflecting linguistic reanalysis occurred earlier than might have been expected, that is, before the time point when the path movement of the disambiguating sign was visible. We suggest that in the visual modality, transitional movement of the articulators prior to the disambiguating verb position, or co-occurring non-manual (face/body) markings, were used in resolving the local ambiguity in ÖGS. Thus, whereas the processing strategy of "subject preference" is cross-modal at the linguistic level, the cues that enable the processor to apply that strategy differ in signing as compared to speech.
When people listen to speech, neural activity is cued by fluctuations in the acoustic envelope (Peelle, Gross, & Davis, 2012). For spoken languages, this cue-based entrainment is the basis of signal parsing and predictive processing (Ding et al., 2016). A growing body of research also indicates that humans are highly sensitive to motion differences in the visual signal (Strickland et al., 2015), and signers make neurolinguistic distinctions based on motion profiles of signs (Malaia et al., 2013). The temporal mechanism of predictive parsing in the visual modality, however, is not yet clear. We tested the hypothesis that in signers, as in speakers, syllable-driven fluctuations (~4 syllables per second, or 4 Hz) in the envelope of the signal are cues to predictive entrainment. EEG data were collected from signers viewing signed sentences (meaningful stimuli) and the same sentences played in reverse (meaningless stimuli with rich spectrotemporal structure). We then assessed the optical flow in the visual stimuli, which is a validated measure of dynamic entropy in the overall signal (Borneman et al., 2018), as well as a measure of overall motion in time that marks sign-syllable boundaries in continuous signed discourse. Finally, we computed the cross-correlation between variations in the optical flow of the visual signal and the neural activity of the participants.
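The final analysis step, cross-correlating the stimulus optical flow with neural activity, can be sketched with synthetic signals. The sampling rate, lag, and noise levels below are invented for illustration; no real optical-flow or EEG data are used:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 100                       # common sampling rate after resampling (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Hypothetical optical-flow magnitude: ~4 Hz fluctuation at the sign-syllable rate
optical_flow = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)

# Hypothetical neural signal entrained to the stimulus with a 50 ms lag
lag_s = 0.05
neural = np.sin(2 * np.pi * 4 * (t - lag_s)) + 0.1 * rng.standard_normal(t.size)

# Normalized cross-correlation between stimulus and brain signal
of_z = (optical_flow - optical_flow.mean()) / optical_flow.std()
eeg_z = (neural - neural.mean()) / neural.std()
xcorr = correlate(eeg_z, of_z, mode="full") / t.size
lags = correlation_lags(eeg_z.size, of_z.size, mode="full") / fs

best_lag = lags[np.argmax(xcorr)]
print(f"peak cross-correlation at lag {best_lag * 1000:.0f} ms")
```

A positive peak lag indicates that the neural signal follows the stimulus, which is the signature an entrainment account predicts.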
Brain and Language, 2020
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
Cortex, Mar 1, 2019
- Short-term memory capacity differs between spoken and signed languages.
- The discrepancy stems from different recruitment of spatial processing resources.
- Sign language processing requires processing of 'what' and 'where' parameters.
- 'Phonological loop'-based rehearsal for speech recruits spatial processing resources.
- STM capacity in speech is enhanced by an available rehearsal-based strategy.
The Rural Special Education Quarterly, May 2, 2023
This position paper explores the needs of rural families of children, adolescents, and adults with autism spectrum disorder (ASD) during the COVID-19 pandemic. Prior to COVID-19, the literature portrayed elevated stress in families of individuals with ASD, and health and socioeconomic disparities for rural and underserved populations. These disparities were exacerbated by COVID-19 and the subsequent lockdowns and economic turmoil. Academic and adaptive skills training were particularly impacted by school closures, with parents tasked with taking some responsibility for training these skills. Our goals for this article focus on special considerations for rural families regarding (a) neurobiological and developmental impacts of stressful experiences like COVID-19, (b) delineation of the impacts on individuals with ASD and other comorbid and related conditions, and (c) education and intervention needs during these times. Finally, we offer suggestions for future care during pandemic events, including recommendations for improving service delivery under such conditions.
Hrvatska revija za rehabilitacijska istraživanja, Oct 12, 2022
This paper reviews best practices for experimental design and analysis in sign language research using neurophysiological methods with high temporal resolution, such as electroencephalography (EEG), and identifies methodological challenges in neurophysiological research on natural sign language processing. In particular, we outline considerations for generating linguistically and physically well-controlled stimuli, accounting for 1) the layering of manual and non-manual information at different timescales, 2) possible unknown linguistic and non-linguistic visual cues that can affect processing, 3) variability across linguistic stimuli, and 4) predictive processing. Two specific concerns regarding the analysis and interpretation of observed event-related potential (ERP) effects for dynamic stimuli are discussed in detail. First, we discuss the "trigger/effect assignment problem", which describes the difficulty of determining the time point for calculating ERPs. This issue is related to the problem of determining the onset of a critical sign (i.e., stimulus onset time), and to the lack of clarity as to how the border between lexical (sign) and transitional movement (the motion trajectory between individual signs) should be defined. Second, we discuss possible differences in the dynamics within signing that might influence ERP patterns and should be controlled for when creating natural sign language material for ERP studies. In addition, we outline alternative approaches to EEG data analysis for natural signing stimuli, such as timestamping the continuous EEG with trigger markers for each potentially relevant cue in dynamic stimuli.
Throughout the discussion, we present empirical evidence for the need to account for the dynamic, multi-channel, and multi-timescale visual signal that characterizes sign languages, in order to ensure the ecological validity of neurophysiological research on sign languages.
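The timestamping approach discussed above, with one trigger marker per potentially relevant cue in the continuous EEG, implies epoching the recording around those markers. Below is a minimal sketch of that step; the helper name, sampling rate, and array shapes are hypothetical, and dedicated toolboxes such as MNE-Python provide production-quality equivalents:

```python
import numpy as np

def epochs_around_triggers(eeg, triggers, fs, tmin=-0.2, tmax=0.8):
    """Cut fixed windows from continuous EEG around trigger samples (hypothetical helper).

    eeg: (n_channels, n_samples) array; triggers: sample indices of cue markers.
    """
    start, stop = int(tmin * fs), int(tmax * fs)
    out = []
    for trig in triggers:
        seg = eeg[:, trig + start: trig + stop]
        if seg.shape[1] == stop - start:          # skip markers too close to the edges
            baseline = seg[:, :-start].mean(axis=1, keepdims=True)
            out.append(seg - baseline)            # baseline-correct on the pre-cue interval
    return np.stack(out)                          # (n_epochs, n_channels, n_samples)

fs = 500                                          # Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, fs * 10))          # 10 s of 32-channel noise as stand-in data
triggers = [2 * fs, 5 * fs, 8 * fs]               # one marker per relevant cue in the stimulus
print(epochs_around_triggers(eeg, triggers, fs).shape)  # (3, 32, 500)
```

The "trigger/effect assignment problem" shows up here directly: shifting a trigger (e.g., from sign onset to transition onset) shifts every epoch, and with it the apparent latency of any ERP effect.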
The MIT Press eBooks, Nov 21, 2014
International Journal of Semantic Computing, Mar 1, 2008
This paper considers neurological, formational and functional similarities between gestures and signed verb predicates. From analysis of verb sign movement, we offer suggestions for analyzing gestural movement (motion capture, kinematic analysis, trajectory internal structure). From analysis of verb sign distinctions, we offer suggestions for analyzing co-speech gesture functions.
Language, Cognition and Neuroscience, Jun 29, 2023
International Journal of Behavioral Development, Sep 23, 2020
Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue, because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn sign language later, a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with chronological age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range: 28-58 years) with early (0-3 years) or later (4-7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject-object-verb vs. object-subject-verb) were examined in sentences that included: 1) simple sentences, 2) topicalized sentences, and 3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
Autism Research, Nov 8, 2019
Autism spectrum disorder is increasingly understood to be based on atypical signal transfer among multiple interconnected networks in the brain. Relative temporal patterns of neural activity have been shown to underlie both the altered neurophysiology and the altered behaviors in a variety of neurogenic disorders. We assessed brain network dynamics variability in autism spectrum disorder (ASD) using measures of synchronization (phase-locking) strength and of the timing of synchronization and desynchronization of neural activity (desynchronization ratio) across frequency bands of resting-state EEG. Our analysis indicated that fronto-parietal synchronization is higher in ASD, but with more short periods of desynchronization. It also indicated that the relationship between the properties of neural synchronization and behavior differs between ASD and typically developing populations. Recent theoretical studies suggest that neural networks with a high desynchronization ratio have increased sensitivity to inputs. Our results point to the potential significance of this phenomenon for the autistic brain. This sensitivity may disrupt the production of appropriate neural and behavioral responses to external stimuli. Cognitive processes dependent on integration of activity from multiple networks may, as a result, be particularly vulnerable to disruption. Lay Summary: Parts of the brain can work together by synchronizing activity of the neurons. We recorded electrical activity of the brain in adolescents with autism spectrum disorder, and then compared the recordings to those of their peers without the diagnosis. We found that in participants with autism there were many very short periods of non-synchronized activity between frontal and parietal parts of the brain. Mathematical models show that a brain system with this kind of activity is very sensitive to external events.
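The two synchronization measures can be illustrated on synthetic two-channel data. This is a sketch under invented parameters; in particular, the threshold defining a "desynchronized" sample is our illustrative choice, not the study's exact definition:

```python
import numpy as np
from scipy.signal import hilbert

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(42)

# Two partially synchronized channels: a shared 10 Hz component plus independent noise
shared = np.sin(2 * np.pi * 10 * t)
frontal = shared + 0.5 * rng.standard_normal(t.size)
parietal = shared + 0.5 * rng.standard_normal(t.size)

# Instantaneous phases via the analytic (Hilbert) signal
phase_diff = np.angle(hilbert(frontal)) - np.angle(hilbert(parietal))

# Phase-locking value: length of the mean unit vector of the phase difference (0..1)
plv = np.abs(np.mean(np.exp(1j * phase_diff)))

# Desynchronization episodes: runs of samples where the wrapped phase difference is large
desync = np.abs(np.angle(np.exp(1j * phase_diff))) > np.pi / 2
n_episodes = int(np.sum(np.diff(desync.astype(int)) == 1))
mean_episode_s = desync.sum() / fs / max(n_episodes, 1)
print(f"PLV={plv:.2f}, {n_episodes} desynchronized episodes, "
      f"mean length {mean_episode_s * 1000:.0f} ms")
```

A high PLV combined with many brief desynchronization episodes is exactly the fronto-parietal pattern the abstract reports for the ASD group.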
We make a first attempt at distinguishing the information-carrying visual signal by comparing visual characteristics of American Sign Language and everyday human motion, to identify what cues might be available in one but not the other. The comparison indicated significantly higher fractal complexity in sign language across the tested frequency bands (0.01-15 Hz), as compared to everyday motion. A comparison of our results with prior work showing similarly high fractal complexity in the speech signal allows us to suggest underlying properties of linguistic signals that allow babies to 'tune to' a specific channel, or modality, during language acquisition.
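One standard way to quantify the fractal complexity of a one-dimensional motion signal is Higuchi's fractal-dimension estimator. The sketch below is our illustration, not the paper's exact pipeline: a smooth, periodic trajectory scores near 1, while a rough, noise-like signal scores near 2:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi (1988) fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_inv_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            # Normalized curve length of the subsampled series at scale k
            raw = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)
            lengths.append(raw * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    # Slope of log L(k) against log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(log_inv_k, log_l, 1)
    return slope

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
smooth = np.sin(2 * np.pi * 3 * t)     # low-complexity, everyday-motion-like signal
rough = rng.standard_normal(t.size)    # high-complexity, noise-like signal
print(higuchi_fd(smooth), higuchi_fd(rough))  # approximately 1 vs approximately 2
```

In the abstract's framing, consistently higher scores for signing trajectories than for everyday motion would be the kind of complexity difference an infant's perceptual system could exploit.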