
Accuracy • Independence • Integrity

July 22, 2017   |   Ithaca, NY

Opinion

Faculty research: Bilingual children separate sound systems

At first glance, the process of learning a language can seem like an incredibly daunting task. Environmental input presented at a fairly rapid rate must be mapped onto detailed representations in the brain. Specifically, a word’s meaning, sounds and grammatical functions all must be extracted from the incoming speech stream. And yet, this potentially arduous task is typically executed with little effort by children barely a year of age. For example, Carey and Bartlett reported in 1978 that children can learn a word with as little as one exposure — as anyone who spends time around young children can attest, often when we wish they weren’t listening.

Skott Freedman, assistant professor of speech-language pathology and audiology, recently published research on how bilingual children learn multiple languages and keep sound systems separate.

But what happens when a child is learning not one language but several? Bilingual children must learn not only the sounds and words of one language but also a whole other set of sounds and words in another. They face the difficult task of keeping the two languages separate to avoid code-switching, or switching between languages while speaking.

I recently published a study in the International Journal of Bilingualism titled “Using whole-word production measures to determine the influence of phonotactic probability and neighborhood density on bilingual speech production.” How’s that for a mouthful? The study was designed to further examine the productions of bilingual children. It has long been debated whether a bilingual child has one large set of sounds drawn from both languages or, conversely, two separate sound systems.

One way of testing this is to measure a child’s productions in both languages using some measure of complexity and then to compare the two languages. To add another dimension, we can also evaluate the degree to which children approximate their language. For example, a child who says “tar” for the word “star” has produced three of the four possible sounds. In other words, the child approximated the word with 75 percent accuracy. These two variables, sound complexity and sound approximation, formed the basis of my study. A hypothesis proposed several years ago, dubbed the target-driven hypothesis, predicts that although bilingual children may differ in their productions between languages, they will nevertheless maintain a similar level of overall approximation. Bunta et al. tested and confirmed this hypothesis in 2006 with an English-Hungarian bilingual child. Yet no study to date had tested it in Spanish, the fastest-growing language in the U.S.
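The 75 percent figure above is simply the proportion of target sounds the child produced. Here is a minimal sketch of that kind of scoring, assuming ordinary spelling stands in for phonetic segments (real whole-word measures score phonemic transcriptions, and the function name here is hypothetical):

```python
from difflib import SequenceMatcher

def whole_word_accuracy(target: str, production: str) -> float:
    """Proportion of target segments matched, in order, by the production.

    Illustrative only: letters stand in for sounds, whereas actual
    whole-word measures are computed over phonetic transcriptions.
    """
    # Sum the sizes of all in-order matching runs between target and production.
    matched = sum(
        block.size
        for block in SequenceMatcher(None, target, production).get_matching_blocks()
    )
    return matched / len(target)

# A child saying "tar" for "star" produces 3 of the 4 target sounds.
print(whole_word_accuracy("star", "tar"))  # 0.75
```

A fully correct production scores 1.0, so the measure lets us compare how closely a child approximates targets across two languages even when the sound inventories differ.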

My study examined the speech productions of five English-Spanish bilingual children during a picture-naming task and compared them with those of five English-only and five Spanish-only speaking children. The results once again confirmed the target-driven hypothesis. Interestingly, while the bilingual children produced more complex forms in Spanish than in English, they nonetheless approximated English and Spanish to the same degree. Perhaps while learning a language some inner algorithm determines how much one needs to articulate in order to be understood, regardless of the different kinds of sounds between languages. Otherwise, the children should have been more easily understood in Spanish.

Finally, no production differences emerged between the bilingual children and their monolingual counterparts in English or Spanish. It seems, then, that there must be a sufficient degree of independence between a bilingual’s two sound systems.

That the bilingual children in the current study produced words with greater sound complexity in Spanish than in English suggests that some degree of phonological (i.e., sound) autonomy is nevertheless maintained. In summary, bilingual children manage not only to learn two sets of words at one time and keep those systems separate; they even keep the two sound systems separate! That’s quite a feat. Sort of inspires you to want to go out and start learning a new language, huh? ¡Buena suerte, amigos!

Skott Freedman is an assistant professor of speech-language pathology and audiology at Ithaca College. Email him at sfreedman@ithaca.edu.