When people listen to speech, they are faced with various levels of uncertainty concerning the content of the message they receive.
One source of uncertainty concerns the acoustics of the signal transmitted to the ear. There is a very large amount of variation in how the sounds of a language are produced across speakers. A given “phonological category” can therefore be realised with diverse articulatory and acoustic properties, making the correspondence between acoustics and the linguistic category difficult to establish.
On the other side, when listeners process a linguistic message, they try to reach an understanding of the meaning that matches the initial intention of the speaker. It is generally assumed that listeners may choose an interpretation that is influenced by other words in the sentence. For example, someone who hears “Someone has forgotten a b?g in the train” may be biased towards recognising “bag” rather than “bug”.
Another aspect of acoustic uncertainty is that manipulating the pronunciation of some words in a sentence (for example, pronouncing the first words with the mouth more constricted than usual) will also affect the way other sounds in the sentence are recognised. For example, if a speaker produces “Please say this word again” with a constricted vocal tract, a final sequence containing a vowel that would usually be perceived as /ɪ/ (as in “bit”) may then be perceived as /ɛ/ (as in “bet”).
The aim of the A_LEA project is to investigate how these sources of uncertainty interact in the perception of speech. A_LEA stands for “Articulation: Linguistics / Entropy / Acoustics” (“entropy” being an information-theoretic measure of uncertainty), and our project focuses on studying the articulation between linguistic entropy and acoustic entropy in speech perception.
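As a brief illustration (not part of the project materials), the linguistic uncertainty described above can be quantified with Shannon entropy over the probabilities a listener assigns to candidate words; the probability values below are hypothetical:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical listener expectations for the ambiguous "b?g" example:
# a biasing sentence context versus a neutral one.
biased = [0.9, 0.1]    # "bag" judged much more likely than "bug"
neutral = [0.5, 0.5]   # both candidates equally likely

print(shannon_entropy(biased))   # ≈ 0.47 bits: low uncertainty
print(shannon_entropy(neutral))  # = 1.0 bit: maximal uncertainty for 2 options
```

In this framing, a constraining sentence context lowers linguistic entropy, while an ambiguous acoustic signal raises acoustic entropy; the project asks how the two combine during perception.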
This is the result of a collaboration with Étienne Gaudrain (Centre de Recherches en Neurosciences de Lyon, CNRS / Université Lyon 1 / INSERM).
This project has received funding from:
- CNRS (MITI: Mission pour les Initiatives Transverses et l’Interdisciplinarité, Institut des Sciences Humaines – Institut des Sciences Biologiques, May–December 2018);
- This is still ongoing work…
The acoustic properties of speech signals are known to be highly variable, owing both to structural differences in the body parts involved (length of the vocal tract, volume of the nasal resonators, length and width of the vocal folds…) and to the mobile configuration of the organs involved in speech production (lip protrusion, vocal tract aperture, tongue configuration…).
Some aspects of this variability are mainly associated with so-called “voice” properties that provide information about a speaker’s gender, identity, mood… For example, male speakers tend (on average) to have thicker vocal folds and longer vocal tracts, which gives rise to both a lower fundamental frequency and lower vocal tract resonances. But speakers are also able to manipulate their vocal fold tension and/or their vocal tract configuration (aperture, tongue position) to control their fundamental frequency and/or resonance frequencies in order to produce specific linguistic categories (two different vowels, for example).
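The inverse relationship between vocal tract length and resonance frequencies can be sketched with the classic uniform-tube (“quarter-wavelength”) approximation from acoustic phonetics; the tube lengths below are illustrative values, not project data:

```python
# Illustrative sketch: a uniform tube closed at one end (glottis) and open
# at the other (lips) resonates at odd multiples of c / (4 * L), so formant
# frequencies scale inversely with vocal tract length L.
SPEED_OF_SOUND = 35000.0  # cm/s, approximate speed of sound in warm, moist air

def formants(vocal_tract_length_cm, n_formants=3):
    """Resonance frequencies (Hz) of a uniform tube of the given length."""
    return [(2 * n - 1) * SPEED_OF_SOUND / (4 * vocal_tract_length_cm)
            for n in range(1, n_formants + 1)]

# A longer (typically male) vocal tract yields uniformly lower resonances
# than a shorter (typically female) one:
print(formants(17.5))  # [500.0, 1500.0, 2500.0] Hz
print(formants(14.5))  # ≈ [603, 1810, 3017] Hz
```

The listener’s problem is that a given formant pattern can reflect either a structural property (tube length) or a deliberate articulatory configuration, which is precisely the ambiguity this project investigates.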
This project investigates how listeners process such information in situations where both sources of information (speakers’ structural properties and vocal tract configuration) vary.
This is the result of a collaboration with Prof. Deniz Başkent (University Medical Center Groningen-UMCG, ENT Department / Rijksuniversiteit Groningen) and Étienne Gaudrain (Centre de Recherches en Neurosciences de Lyon, CNRS / Université Lyon 1 / INSERM — University Medical Center Groningen-UMCG, ENT Department).
Communications & Publications
- Crouzet, Olivier, Gaudrain, Etienne & Başkent, Deniz (2019). Perceptual adaptation to formant changes associated with either vowel categories or vocal tract length: Implications for speech perception in cochlear implanted patients. 2019 Conference on Implantable Auditory Prostheses (CIAP), July 14-19, Lake Tahoe, CA, USA. [Download the poster]
This project has received funding from: