New Technique Reveals Fresh Findings on the Role Brain Regions Play in Early Language Acquisition
New research co-authored by Wellesley College neuroscientist Sharon Gobes reveals surprising information about the regions of the brain used in acquisition of vocalizations—and when those regions begin to play a role.
Gobes, an assistant professor of neuroscience who studies the cognitive and neural mechanisms underlying animal behavior, uses songbirds as a model system to study auditory memory formation and vocal learning. Baby songbirds are known to learn songs by listening to the song of an adult tutor, memorizing this song, and imitating what they remember hearing—much like the way young children copy sounds and babble in their effort to learn language. Because of these similarities, songbirds offer scientists an excellent model for understanding how humans learn to speak.
The researchers observed the vocal learning habits of male zebra finches to pinpoint which circuits in the birds’ brains are necessary for learning their fathers’ songs. Previously, researchers believed that when the young birds listened to and memorized their father’s songs, the brain’s auditory regions (areas that process sound) would dominate activity, while motor regions (areas that control movement) wouldn’t come into play until later.
Thanks to a technique known as optogenetics, which uses light to disrupt electrical activity in precisely targeted neurons at precisely timed moments, Gobes and her colleagues at Duke University Medical Center and Harvard University determined that regions of the brain involved in planning and controlling complex vocal sequences may also be necessary for learning sounds by imitation. According to the researchers, knowing which brain circuits are involved in learning by imitation could have broader implications for diagnosing and treating human developmental disorders.
In past studies, researchers had to depend entirely on drugs or more invasive techniques to determine which regions played a role in the birds’ learning and memory. While effective, these methods could not fully resolve which areas were actually engaged while a bird listened to its father’s song. According to Gobes, optogenetics had never before been used in the study of songbirds. The technique offers what Gobes called “very, very precise temporal control of brain regions.”
The researchers paired voice recognition software with optogenetics to scramble signals in small sets of neurons in a young bird’s brain for a few hundred milliseconds while the bird listened to his tutor’s song. Using this method in combination with several other techniques, researchers were able to test which brain regions were active during the learning process.
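The study’s actual trigger software is not described here, but the closed-loop idea in the paragraph above can be sketched in a few lines: a detector scores incoming audio frames against the tutor’s song, and whenever the score crosses a threshold, the light is switched on for a few hundred milliseconds. Everything below is illustrative: the function name, the frame length, the similarity scores, and the threshold are all hypothetical, not values from the study.

```python
def schedule_pulses(scores, frame_ms=100, threshold=0.6, pulse_ms=300):
    """Toy closed-loop scheduler for an optogenetic trigger.

    scores    -- hypothetical per-frame similarity of incoming audio
                 to the tutor's song (0.0 to 1.0), one score per frame
    frame_ms  -- duration of each audio frame in milliseconds
    threshold -- score above which a frame counts as "tutor singing"
    pulse_ms  -- light-pulse duration ("a few hundred milliseconds")

    Returns a list of (start_ms, end_ms) windows when the light is on.
    """
    windows = []
    i = 0
    while i < len(scores):
        if scores[i] >= threshold:
            start = i * frame_ms
            windows.append((start, start + pulse_ms))
            # Skip ahead past the frames covered by this pulse.
            i = (start + pulse_ms) // frame_ms
        else:
            i += 1
    return windows

# Example: singing is detected in frames 1-2 and again in frame 4,
# so two back-to-back 300 ms pulses are scheduled.
print(schedule_pulses([0.1, 0.8, 0.9, 0.2, 0.7, 0.1]))
# → [(100, 400), (400, 700)]
```

The point of the sketch is the timing logic the article emphasizes: the light comes on only while the tutor’s song is detected, so any disruption is confined to the moments when song acquisition could be taking place.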
“During the study, we only switched on the light at the exact moment when the father bird was singing, so we could know for sure that we were only studying the moments during which song acquisition took place,” Gobes said. “We discovered that the motor regions are engaged in learning the tutor song from the very first moment onward—way before the bird starts to replicate its father’s song.”
The study showed that a pre-motor region in the pupil’s brain controls the execution of learned vocal sequences and helps to encode information when the pupil is listening to his tutor. The same circuitry used for vocal control thus also participates in auditory learning, raising the possibility that vocal circuits in our own brains also help encode the auditory experiences important to speech and language acquisition.
The article, “Motor Circuits Are Required to Encode a Sensory Model for Imitative Learning,” coauthored by Sharon H. M. Gobes of Wellesley College; Todd F. Roberts, Malavika Murugan, and Richard Mooney from the Department of Neurobiology at Duke University Medical Center; and Bence P. Ölveczky from the Department of Organismic and Evolutionary Biology at Harvard University, is now available in the journal Nature Neuroscience.