By Tom Thompson
For me, the exploration of language acquisition in children is a frontier as fascinating, and as thrilling, as exploring outer space or the deep sea. Learning to understand a child's language acquisition is like cracking a deeply encrypted code.
Even before they're born, babies eavesdrop on the conversations their mother has. And once they emerge, studies show that they recognize the rhythms and intonations of their mother's voice, as well as stories and songs first heard in the womb.
A child's language skills are directly related to the number of words and complex conversations he or she has with others. If a baby hears few words, if a child is rarely read to or talked with, then normal language development will not follow. Language is not just a technical skill: it is learned through nurturing, face-to-face social interaction. Fortunately, we are innately predisposed to pay attention to little children and to talk to them.
But despite millennia of child rearing, until recently we knew surprisingly little about early language and brain development. I remember being impressed, some years ago, by linguistic analyses of babies' responses to language that used nipples connected to computers to register sucking. More recent leading-edge research, especially neuroscience technology that measures the magnetic fields generated by the activity of brain cells, is helping us investigate how, where and with what frequency babies from around the world process speech sounds in the brain as they listen to adults speaking both their native and non-native languages.
Some of these new technologies include MEG (magnetoencephalography), MRI (magnetic resonance imaging) and DTI (diffusion tensor imaging). Professor Deb Roy has employed a "continuous capture disc array" to study the evolution of the language development of his newborn son. With his research team at the Massachusetts Institute of Technology (MIT), Professor Roy has analyzed his son's language abilities through more than a quarter of a million recordings. The conclusion? Babies are computational wizards. Professor Roy has demonstrated, for example, how babbling is a warm-up to speech, as the child tries to figure out how to put the articulatory organs together to make sounds. We're all familiar with babies everywhere making delightful little oohh and aahh sounds when a parent is face-to-face with them, talking and smiling.
Much of that babbling includes the beginnings of the distinctive sound patterns of the baby's own language community. By nine months, babies show a preference for listening to sound combinations that are possible in their language, even if those combinations don't form real words. The Chinese baby starts to babble with rapid pitch changes, just like Chinese, in a way that sounds Chinese. Swedish babies babble with the rising intonation pattern typical of adult speakers of Swedish. We now have a better understanding of what babies are doing when they lie in their cribs by themselves and play with sounds: by playing in this way, they learn how to make the sounds they hear us produce.
Babies respond to sounds as soon as they can hear, so they are learning about speech long before they can talk. In independent efforts, Judit Gervain at Paris Descartes University and Patricia Kuhl at the University of Washington are discovering that the baby brain responds from day one to the sequence in which sounds are arranged, suggesting that the algorithms for language learning are part of the neural fabric infants are born with. "For a long time we had this linear view. First, babies are learning sounds, then they are understanding words, then many words together. But," as Gervain adds, "we now know that babies are starting to learn grammatical rules from the beginning." Not only can babies discern repetition; they are also sensitive to where in a sequence the repetition occurs.
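That sensitivity to where repetition falls can be made concrete with a toy sketch. The syllables below are invented for illustration, and the labels (ABB, AAB, and so on) are just the conventional shorthand for repetition patterns; this is not the researchers' actual method, only a way to see what "where the repetition occurs" means:

```python
# Toy sketch of repetition-position patterns in three-syllable
# sequences, e.g. "mu-ba-ba" (repetition at the end) versus
# "ba-ba-mu" (repetition at the start). Syllables are invented.

def repetition_pattern(syllables):
    """Label a three-syllable sequence by where its repetition falls."""
    a, b, c = syllables
    if a == b == c:
        return "AAA"
    if b == c:
        return "ABB"   # repetition at the end
    if a == b:
        return "AAB"   # repetition at the start
    if a == c:
        return "ABA"   # repetition split by the middle syllable
    return "ABC"       # no repetition

print(repetition_pattern(["mu", "ba", "ba"]))  # ABB
print(repetition_pattern(["ba", "ba", "mu"]))  # AAB
print(repetition_pattern(["ba", "mu", "ko"]))  # ABC
```

The point of the research is that sequences with the same ingredients but different labels here (ABB versus AAB) are treated differently by the newborn brain.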
It's a popular theory now that babies "take statistics" on how frequently they hear the sounds they will want to duplicate on their own in what will be their native language, and then choose accordingly. It's in that way that babies organize the acoustic stream of their lives. How do children manage to do it? There is clearly some genetic foundation that enables human beings to acquire language. But children also have powerful learning mechanisms, largely triggered by social interaction, that enable them to learn the specific properties of their own language.
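The "taking statistics" idea (infants tracking how often one sound follows another) can be sketched in a few lines of Python. The syllable stream below is invented for the example, and the sketch is only an illustration of the kind of regularity infants are thought to track, not the researchers' actual experimental method:

```python
from collections import Counter

# A made-up syllable stream in which the "word" pa-bi-ku recurs.
stream = "pa bi ku ti bu do pa bi ku go la tu pa bi ku ti bu do".split()

# Count each adjacent pair of syllables, and each syllable that
# starts a pair, so we can estimate transitional probabilities.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b follows a) = count(a, b) / count(a)."""
    return pair_counts[(a, b)] / first_counts[a]

# Inside the recurring "word" the transition is perfectly predictable;
# across its boundary it is not, which marks where one word ends.
print(transitional_probability("pa", "bi"))  # 1.0
print(transitional_probability("ku", "ti"))  # lower: "ku" is followed by different syllables
```

Dips in these probabilities are one way a listener could segment a continuous stream of sound into word-like units without being told where the boundaries are.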
Tom Thompson writes often on foreign language topics. He lives in Washington, DC.