2015, HAL (Le Centre pour la Communication Scientifique Directe)
Listening abilities in humans developed in rural environments, which were the dominant setting for the vast majority of human evolution. The natural acoustic constraints present in such ecological soundscapes are therefore important to take into account when studying human speech. Here, we measured the impact of basic properties of a typical 'natural quiet', non-reverberant soundscape on speech recognition. A behavioural experiment was implemented to analyse the intelligibility loss in spoken word lists under variations of signal-to-noise ratio corresponding to different speaker-to-listener distances in a typical low-level natural background noise recorded in an open dirt field. To clearly highlight the impact of such noise on recognition despite its low level, we contrasted a 'noise + distance' condition with a 'distance only' condition. Recognition performance for vowels and consonants, and for different classes of consonants, is also analysed.
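The two listening conditions contrasted above can be illustrated with a minimal sketch (not the authors' actual stimulus pipeline): the 'distance only' stimuli apply only spherical spreading loss to the word recording, while the 'noise + distance' stimuli add the recorded natural background noise at its original level. The function names and the 20·log10 spreading model are assumptions for illustration.

```python
# Minimal sketch of the two conditions, assuming mono float arrays.
import numpy as np

def attenuate_for_distance(signal, distance_m, ref_m=1.0):
    """Apply spherical spreading loss: -20*log10(d/d_ref) dB relative to d_ref."""
    gain_db = -20.0 * np.log10(distance_m / ref_m)
    return signal * 10.0 ** (gain_db / 20.0)

def make_conditions(word, noise, distance_m):
    """Return the ('distance only', 'noise + distance') stimuli for one word."""
    attenuated = attenuate_for_distance(word, distance_m)
    n = noise[: len(attenuated)]      # background noise stays at its recorded level
    return attenuated, attenuated + n
```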
In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility increases as the listener moves away from the speaker. The present behavioural experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, which essentially changes the signal-to-noise ratio (SNR). Our study therefore draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 m) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (−8.8 dB to −18.4 dB). Our results showed that in such conditions vowel identity is largely preserved, with a striking absence of confusions between vowels. The results also confirmed the functional role of consonants in lexical identification. An extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether, these analyses allowed us to extract a resistance scale from the consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on position in the word (onset vs. coda). Finally, our data suggest that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
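The SNR-versus-distance arithmetic implied by this abstract can be sketched as follows. The speech level (65.3 dB(A) at 1 m) and the reported SNR endpoints come from the text; the constant background-noise level used here is an assumed value chosen only so the computation can be shown end to end.

```python
# Sketch of SNR under pure spherical spreading; L_noise is an assumption,
# not a value reported by the authors.
import math

L_speech_1m = 65.3   # dB(A), reference at 1 m (from the abstract)
L_noise = 53.3       # dB(A), assumed constant natural-noise floor (illustrative)

def snr_at(distance_m):
    """SNR(d) = L_speech(1 m) - 20*log10(d) - L_noise."""
    return L_speech_1m - 20.0 * math.log10(distance_m) - L_noise

print(round(snr_at(11), 1))   # ~ -8.8 dB
print(round(snr_at(33), 1))   # ~ -18.4 dB
# The 11 m -> 33 m span alone accounts for 20*log10(33/11) ~ 9.5 dB of SNR loss.
```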
Interspeech 2018, 2018
To extend the range of modal speech in natural ambient noise, individuals increase their vocal effort and may pass into the 'shouted speech' register. To date, most studies of the influence of distance on spoken communication in outdoor natural environments have focused on the 'production side' of the human ability to tacitly adjust vocal output to compensate for acoustic losses due to sound propagation. Our study takes a slightly different path, as it is based on an adaptive speech production/perception experiment. The setting was an outdoor natural soundscape (a plane forest at altitude). The stimuli were produced live during the interaction: each speaker adapted their speech to transmit isolated French disyllabic words to an interlocutor/listener located at variable distances over the course of the experiment (30 m, 60 m, 90 m). Speech recognition was explored by evaluating the ability of 16 normal-hearing French listeners to recognize these words and their constituent vowels and consonants. Results showed that in such conditions speech adaptation was rather efficient, as word recognition remained around 95% at 30 m, 85% at 60 m and 75% at 90 m. We also observed striking differences in response patterns across distances, speech registers, vowels and consonants.
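A back-of-the-envelope sketch of the propagation loss the speakers had to compensate for, assuming pure spherical spreading with no ground, vegetation or atmospheric effects: holding the level at the listener constant while moving from 30 m to 60 m or 90 m requires roughly +6 dB and +9.5 dB of extra vocal output, respectively.

```python
# Extra source level needed to offset spreading loss beyond the 30 m reference
# (spherical spreading only; an idealizing assumption).
import math

def extra_effort_db(d_new, d_ref=30.0):
    return 20.0 * math.log10(d_new / d_ref)

for d in (60.0, 90.0):
    print(f"{d:.0f} m: +{extra_effort_db(d):.1f} dB")   # +6.0 dB and +9.5 dB
```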
Glob J Otolaryngol, 2017
Background: Speech signals are known to be altered to a significant degree by background noise, reverberation, and less-than-desirable frequency and temporal responses of the communication channel. The everyday noises in an individual's own acoustic environment form a heterogeneous and dynamic group, changing throughout the day as well as over days and months. The impact of these everyday noises on speech perception in individuals with normal hearing or hearing impairment is therefore not easy to predict from clinical measures of word recognition scores. The effect of noise on the word recognition score (WRS) is routinely measured using a speech-in-noise (SPIN) test, in which the competing signal is the audiometric noise. Hence, the present study aims to assess the speech perception of individuals with hearing loss using recorded noises representative of the typical noises these individuals encounter in their daily routine. Material and Method: The participants were 15 adults with bilateral mild to moderate hearing loss, aged 30-50 years. A detailed audiological evaluation was carried out on all participants prior to the study to ensure compliance with the selection criteria. Real-world noises were recorded at locations frequented by the subjects in their daily life; these locations had been identified in a prior study (Sreelekha, 2014) using survey and questionnaire tools, in which all daily acoustic environments were tabulated by frequency of occurrence. The reported real-world noises were recorded using a Sound Level Meter (SLM) with a condenser microphone connected to a laptop computer running audio-recording software. Speech perception was measured using a standard phoneme perception test in Kannada (Devaki..), mixing these noises with the test materials at predetermined SNRs, i.e. 0 dB and +10 dB signal-to-noise ratio (SNR). Results: Certain noises, such as traffic noise, reduced speech perception ability more than other noises. The effect was greater at 0 dB SNR than at +10 dB SNR. The results suggest that not only the overall amplitude of the noise spectrum but also its spectro-temporal distribution of energy plays a key role in masking the cues for consonant perception in hearing-impaired individuals. Conclusion: The results of the current study reflect the importance of auditory ecology in understanding the acoustical world to which an individual with hearing impairment is exposed, and show how intervention strategies must be adapted accordingly.
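A minimal sketch of the noise-mixing step described above, assuming mono float signals at a common sampling rate; the RMS-scaling approach and helper names are illustrative rather than the exact procedure used in the study.

```python
# Mix a recorded everyday noise with test material at a target SNR (e.g. 0 or +10 dB).
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that 20*log10(rms(speech)/rms(noise)) equals snr_db."""
    noise = np.resize(noise, speech.shape)                 # loop/trim noise to length
    gain = rms(speech) / (rms(noise) * 10.0 ** (snr_db / 20.0))
    return speech + gain * noise

# e.g. stimulus_0dB  = mix_at_snr(word, traffic_noise, 0.0)
#      stimulus_10dB = mix_at_snr(word, traffic_noise, 10.0)
```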
Archives of Acoustics
This study is concerned with the influence of the spatial separation of disturbing noise sources on speech intelligibility. Spatial separation of the speech and disturbing sources, without changing their acoustic power, may contribute to a significant improvement in speech intelligibility. This problem has recently been analysed in many papers [1-5], which have confirmed the important role of the spatial configuration of sources. However, no work has investigated this problem for nonsense words (logatoms), which may provide more rigorous tests of the phenomenon. Moreover, the problem has not been analysed for Polish speech. It is important to emphasize that the acoustic and phonetic properties of Polish speech are somewhat different from those of English. Therefore, this study attempts to investigate the influence of the spatial separation of sound sources. In situations with more than one spatially separated disturbance, a so-called spatial suppression phenomenon may occur, that is, a "mutual suppression" of disturbing sounds in the auditory system that brings about an increase in speech intelligibility. This phenomenon is also called the spatial unmasking of speech [5, 6]. The research consisted in determining speech intelligibility in the presence of one or two statistically independent speech-shaped noise sources in varying configurations. Only two pairs of spatial configurations were investigated. The character of the dependences obtained in the study implies that spatial suppression occurs only in certain configurations of sources. This effect brings about an increase in speech intelligibility and can be explained on the basis of the binaural masking level difference (BMLD). It seems, then, that the BMLD may be a more general phenomenon, encompassing not only a difference in the detection threshold of a pure tone masked by noise but also an improvement in speech intelligibility when speech is presented against a background of disturbing signals.
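The 'spatial suppression' (spatial unmasking) effect discussed above is commonly quantified as the difference between speech reception thresholds measured with co-located and spatially separated maskers; the sketch below uses placeholder SRT values, not data from the study.

```python
# Spatial release from masking, expressed in dB of SNR benefit.
def spatial_release(srt_colocated_db, srt_separated_db):
    """Positive values mean the separated configuration is easier."""
    return srt_colocated_db - srt_separated_db

print(spatial_release(-3.0, -9.0))   # -> 6.0 dB of unmasking (illustrative values)
```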
Normal-hearing adults were tested with the New Zealand recordings of the Bamford-Kowal-Bench (BKB) sentences and the Consonant-Nucleus-Consonant (CNC) monosyllabic words in order to establish list equivalence and obtain normative data for sound-field presentation. CNC words were presented at a fixed level (65 dB SPL) at a +5 dB signal-to-noise ratio (SNR). An adaptive task was used to measure speech recognition thresholds (dB SNR) for BKB sentences, with a fixed noise level of 60 dB SPL. The noise consisted of 100-talker babble. Pairs of BKB lists were used for the adaptive task. After removal of the word and sentence lists with the greatest differences, lists were equivalent, but linguistic background continued to have a significant effect on scores, with moderately large effect sizes. English monolingual speakers performed better than bilingual speakers for both CNC words and BKB sentences. Normative speech scores are presented for lateralized speech-in-noise recognition (speech and noise at ±45° azimuth, with right- and left-side presentations for CNCs and BKBs) and for speech and noise in front at 0° azimuth for BKBs. Combined results for all participants and for monolingual versus bilingual participants are presented.
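A minimal sketch of an adaptive SRT track of the kind described above (fixed noise level, variable speech level); the 1-up/1-down rule, step size and scoring callback are assumptions, not the exact BKB procedure used here.

```python
# Simple adaptive track converging on the SNR giving ~50% sentence correct.
def adaptive_srt(present_sentence, snr_start=0.0, step_db=2.0, n_trials=20):
    """present_sentence(snr_db) -> True if the sentence was scored correct.
    Returns the mean SNR of the second half of the track as the SRT estimate."""
    snr, track = snr_start, []
    for _ in range(n_trials):
        correct = present_sentence(snr)
        track.append(snr)
        snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
    half = len(track) // 2
    return sum(track[half:]) / (len(track) - half)
```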
Frontiers in Built Environment, 2021
Masking noise and reverberation strongly influence speech intelligibility and decrease listening comfort. To optimize acoustics for a comfortable environment, it is crucial to understand the respective contributions of bottom-up signal-driven cues and top-down linguistic-semantic cues to speech recognition in noise and reverberation. Since the relevance of these cues differs across speech test materials and the training status of the listeners, we investigate the influence of speech material type on speech recognition in noise, reverberation, and combinations of noise and reverberation. We also examine the influence of training on performance for a subset of measurement conditions. Speech recognition is measured with an open-set, everyday Plomp-type sentence test and compared to the recognition scores for a closed-set Matrix-type test consisting of syntactically fixed and semantically unpredictable sentences (cf. data by Rennies et al., J. Acoust. Soc. America, 2014, 136, 26...
Journal of the American Academy of Audiology, 1995
A prevailing complaint among individuals with sensorineural hearing loss (SNHL) is difficulty understanding speech, particularly under adverse listening conditions. The present investigation compared the speech-recognition abilities of listeners with mild to moderate degrees of SNHL to those of normal-hearing individuals with simulated hearing impairments, accomplished using spectrally shaped masking noise. Speech-perception ability was assessed using the high-predictability sentences from the Speech Perception in Noise test. Results revealed significant differences between groups in sentence-recognition ability, with the hearing-impaired subjects performing more poorly than the masked normal-hearing listeners. These findings suggest the presence of a secondary distortion degrading sentence-recognition ability in the hearing impaired. Implications of these data are discussed with respect to the mechanism(s) responsible for speech perception in the hearing impaired.
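One way to generate the kind of spectrally shaped masking noise mentioned above is to filter white noise into bands and weight each band so that the noise spectrum follows the hearing loss to be simulated; the sketch below is an assumed approach with placeholder band edges and gains, not the authors' method.

```python
# Spectrally shaped noise: band-pass filtered white noise with per-band gains.
import numpy as np
from scipy.signal import butter, sosfilt

def shaped_noise(duration_s, fs, band_edges_hz, band_gains_db, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(int(duration_s * fs))
    out = np.zeros_like(white)
    for (lo, hi), gain_db in zip(band_edges_hz, band_gains_db):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += sosfilt(sos, white) * 10.0 ** (gain_db / 20.0)
    return out

# e.g. noise = shaped_noise(3.0, 16000,
#                           band_edges_hz=[(250, 500), (500, 1000), (1000, 2000)],
#                           band_gains_db=[0.0, -5.0, -10.0])   # placeholder shaping
```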
Based on people's daily activities and interrelations within organizations, and given that human life depends on the proper use of the capacity to communicate, all the more so in an increasingly globalized working world with new information tools (ICTs), and because of the interpersonal relations inside the organization, there is a need to offer a course that reflects on the different ways of communicating clearly at the organizational or social level and that encourages assertive communicative behaviour. Here lies the true key to success in the workplace climate: assertive communication, which implies respect when transmitting a message. That is, it is not only a matter of valuing one's own talent but also that of others. In this way it is possible to learn from others, taking their opinions into account in a responsible, conscious and mature way, for optimal performance in the organizational context. The technological line addressed is the client. 3. FORMULATION OF LEARNING ACTIVITIES: