When people listen to speech, they face several sources of uncertainty about the content of the message they receive.
One source of uncertainty concerns the acoustics of the signal transmitted to the ear. There is a great deal of variation in how the sounds of a language are produced by its speakers. As a result, a given "phonological category" can be realized with a wide range of articulatory and acoustic properties, which makes the correspondence between the acoustics and the linguistic category difficult to establish.
On the other hand, when listeners process a linguistic message, they try to reach an understanding of its meaning that matches the original intention of the speaker. It is generally assumed that listeners may choose an interpretation that is influenced by the other words in the sentence. For example, on hearing "Someone has forgotten a b?g in the train", one is more likely to recognize "bag" than "bug".
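As a toy illustration of how this kind of lexical knowledge can resolve an ambiguous acoustic signal, word recognition can be framed in Bayesian terms: the listener weighs the acoustic evidence for each candidate word by its contextual plausibility. The sketch below is a minimal, hypothetical model with invented probabilities, not a description of the project's actual methods.

```python
# Toy Bayesian sketch of lexical context resolving an ambiguous vowel.
# All probability values are invented for illustration.

# Acoustic likelihoods: how well the ambiguous "b?g" token matches each word.
likelihood = {"bag": 0.5, "bug": 0.5}  # the acoustics alone are fully ambiguous

# Lexical/contextual priors: "forgotten a ___ in the train" favours "bag".
prior = {"bag": 0.9, "bug": 0.1}

# Posterior: Bayes' rule, normalised over the two candidate words.
unnormalised = {w: likelihood[w] * prior[w] for w in likelihood}
total = sum(unnormalised.values())
posterior = {w: p / total for w, p in unnormalised.items()}

print(posterior)  # {'bag': 0.9, 'bug': 0.1} -> context tips the balance
```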
Another aspect of acoustic uncertainty is that manipulating the pronunciation of some words in a sentence (for example, pronouncing the first words with the mouth more constricted than usual) also affects how the other sounds in the sentence are recognized. For instance, if a speaker produces "Please say this word again" with a constricted vocal tract, a final sequence containing a vowel that would usually be perceived as /ɪ/ (as in "bit") may then be perceived as /ɛ/ (as in "bet").
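One common way to think about this effect is extrinsic normalization: listeners interpret a vowel's formant frequencies relative to the formant range of the surrounding speech, rather than in absolute terms. The sketch below (with invented formant values and a hypothetical decision threshold) illustrates this general idea, not the project's actual model: the same physical vowel is categorized differently depending on the carrier sentence.

```python
# Toy sketch of extrinsic vowel normalisation (all Hz values invented).
# A vowel is categorised by its first formant (F1) *relative* to the
# mean F1 of the carrier sentence, not by its absolute value.

def categorise(vowel_f1_hz, carrier_mean_f1_hz, threshold=1.15):
    """Label the vowel /ɪ/ (as in "bit") if its F1 is low relative to
    the carrier, /ɛ/ (as in "bet") if it is relatively high."""
    ratio = vowel_f1_hz / carrier_mean_f1_hz
    return "/ɛ/ (bet)" if ratio > threshold else "/ɪ/ (bit)"

target_f1 = 450  # the same physical vowel in both conditions

print(categorise(target_f1, carrier_mean_f1_hz=450))  # normal carrier -> /ɪ/
print(categorise(target_f1, carrier_mean_f1_hz=350))  # constricted carrier
# A constricted vocal tract lowers the carrier's formants, so the same
# 450 Hz vowel is now relatively high in F1 and is heard as /ɛ/.
```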
The aim of the A_LEA project is to investigate how these sources of uncertainty interact in the perception of speech. A_LEA stands for "Articulation: Linguistics / Entropy / Acoustics" ("entropy" is another name for uncertainty), and the project focuses on studying the articulation between linguistic entropy and acoustic entropy in speech perception.
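In information theory, this uncertainty is quantified by Shannon entropy: for a set of alternatives x with probabilities p(x),

$$H(X) = -\sum_{x} p(x) \log_2 p(x)$$

Entropy is maximal when all alternatives are equally likely (a fully ambiguous signal, like the "b?g" token above) and zero when one alternative is certain.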
This work takes place in collaboration with Étienne Gaudrain (Centre de Recherches en Neurosciences de Lyon, CNRS / Université Lyon 1 / INSERM).