Using lasers to manipulate brain activity, researchers zero in on mechanisms underlying key hearing phenomena.
PHILADELPHIA - Being able to understand speech is essential to our evolution as humans. Hearing lets us perceive the same word even when spoken at different speeds or pitches, and also gives us extra sensitivity to unexpected sounds. Now, new studies from the Perelman School of Medicine at the University of Pennsylvania clarify how these two crucial features of audition are managed by the brain.
In the first study, published online in eLife this week, Maria N. Geffen, Ph.D., an assistant professor in the departments of Neuroscience and Otorhinolaryngology and Head and Neck Surgery, and her team, including first co-authors Ryan G. Natan and John J. Briguglio, both doctoral candidates, discovered how different neurons work together in the brain to reduce responses to frequent sounds, and enhance responses to rare sounds.
When navigating complex acoustic environments, we hear both important and unimportant sounds, and an essential task for the brain is to separate the important ones from the unimportant ones.
"In everyday conversations, you want to be able to carry on a discussion, yet simultaneously perceive when someone else calls your name," Geffen said. "Similarly, a mouse running through the forest wants to be able to detect the sound of an owl approaching even though there are many other, more ordinary sounds around him."
"It's really important to understand the mechanisms underlying these basic auditory processes, given how much we depend on them in everyday life," she added.
Researchers found that a perceptual phenomenon known as "stimulus-specific adaptation" might help the brain with this complex task. This feature of perception occurs across all our senses. In the context of hearing, it is a reduction in auditory cortical neurons' responses to the frequently heard, "expected" sounds of any given environment.
This desensitization to expected sounds creates a relatively heightened sensitivity to unexpected sounds--which is desirable because unexpected sounds often carry extra significance.
While this phenomenon has been studied for decades, few tools were previously available to examine the role of specific cell types in stimulus-specific adaptation. In the study, Geffen's team used recently developed optogenetic techniques, which enable a given type of neuron to be switched on or off at will with bursts of light delivered to a lab mouse's brain through optical fibers.
The team found that, surprisingly, two major types of cortical neurons provided two separate mechanisms for this type of adaptation.
Both are inhibitory interneurons, which lessen and otherwise modulate the activity of the main excitatory neurons of the cortex. The researchers found that one population, somatostatin-positive interneurons, exerts much more inhibition on excitatory neurons during repeated, and therefore expected, tones. The other tested population, parvalbumin-positive interneurons, turned out to inhibit responses to both expected and unexpected tones--but in a way that also has the net effect of enhancing stimulus-specific adaptation.
By recording neuronal activity during the presentation of test tones, the researchers were able to craft a detailed model of how these interneuron types are wired to their excitatory targets, and how stimulus-specific adaptation emerges from this network.
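The adaptation effect itself can be illustrated with a toy simulation (a minimal sketch for illustration only, not the model from the paper; the parameters and function names are invented). In a classic oddball sequence, a frequent "standard" tone adapts much more than a rare "deviant" tone, so the deviant evokes the larger average response:

```python
import random

def simulate_oddball(n_trials=400, p_deviant=0.1, adapt_gain=0.3,
                     recovery=0.1, seed=0):
    """Toy stimulus-specific adaptation: each tone drives its own
    adaptation variable, which suppresses later responses to that tone."""
    rng = random.Random(seed)
    adaptation = {"standard": 0.0, "deviant": 0.0}
    responses = {"standard": [], "deviant": []}
    for _ in range(n_trials):
        tone = "deviant" if rng.random() < p_deviant else "standard"
        # Response is suppressed by the tone-specific adaptation level.
        responses[tone].append(1.0 / (1.0 + adaptation[tone]))
        adaptation[tone] += adapt_gain          # the played tone adapts
        for t in adaptation:
            adaptation[t] *= 1.0 - recovery     # both channels slowly recover
    mean = lambda xs: sum(xs) / len(xs)
    return mean(responses["standard"]), mean(responses["deviant"])

std_resp, dev_resp = simulate_oddball()
print(dev_resp > std_resp)  # prints True: the rare tone evokes the larger response
```

Because adaptation here is specific to each tone, the frequent standard suppresses itself without blunting sensitivity to the rare deviant, which is the signature of stimulus-specific adaptation.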
Geffen and her team now plan to use their optogenetics techniques to study how the manipulation of these interneurons affects mice's behavioral responses to expected and unexpected sounds.
In the second study, published online in the Journal of Neurophysiology last month, Geffen and members of her laboratory, including Isaac M. Carruthers, Ph.D., identified an important general principle of brain organization that allows us to recognize a word pronounced by different speakers.
Depending on who pronounces a word, the resulting sound can have very different physical features: for example, some people have higher-pitched voices or speak more slowly than others. Yet our brain is ultimately able to determine that the two different sounds represent the same underlying word.
This is a problem that speech-recognition software has grappled with for many years. One solution is to find a representation of the word that is invariant to these acoustic transformations. Invariant representation refers to the brain's general ability to perceive an object as that object, despite considerable variation in how the object is presented to the senses. Humans tend to take this ability for granted, but it is often greatly diminished in people who use hearing aids or cochlear implants.
Neuroscientists widely assume that invariant representation emerges in the brain from multiple stages of processing. In each stage, networks of neurons remove more noise and distortion and other inessential features of the input, until, at the highest level, farthest removed from the sensory organs, different pronunciations of a word presumably will activate only a single cluster of neurons specifically representing that word. This has been shown to work in vision.
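The idea that invariance can emerge stage by stage can be sketched with a toy two-stage model (purely illustrative; every function and number below is invented, not the study's analysis). A "primary" stage responds to the raw signal, so pitch and speed changes alter its response, while a "higher" stage normalizes amplitude and duration away:

```python
def render(word, pitch, speed):
    """Toy 'speaker': scale amplitude (a pitch stand-in) and repeat
    each sample `speed` times (a speaking-rate stand-in)."""
    return [pitch * x for x in word for _ in range(speed)]

def primary(signal, window=6):
    # Primary-cortex-like stage: responds to the raw waveform in a fixed
    # time window, so speaker variation leaks into the response.
    padded = signal + [0.0] * window
    return padded[:window]

def higher(signal, n_out=4):
    # Higher-stage: amplitude- and duration-normalized response,
    # which discards much of the speaker-specific variation.
    peak = max(abs(x) for x in signal) or 1.0
    n = len(signal)
    return [signal[i * n // n_out] / peak for i in range(n_out)]

def spread(responses):
    """Mean pairwise Euclidean distance between response vectors."""
    pairs = [(a, b) for i, a in enumerate(responses) for b in responses[i + 1:]]
    return sum(
        sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 for a, b in pairs
    ) / len(pairs)

word = [0.2, 1.0, 0.5, -0.3]  # one "word" as an abstract waveform
renditions = [render(word, p, s) for p in (0.5, 1.0, 2.0) for s in (1, 2)]

# The higher stage's responses vary far less across "speakers".
print(spread([higher(r) for r in renditions]) <
      spread([primary(r) for r in renditions]))  # prints True
```

In this caricature the higher stage is perfectly invariant; in real cortex the variation is only reduced at each stage, which is exactly the gradient the study measured between the primary auditory cortex and the supra-rhinal auditory field.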
In audition, however, little has been known about how, or even whether, the mammalian auditory cortex accomplishes this. In the study, Geffen and colleagues showed, using their rat model of mammalian audition, that neurons in a higher auditory processing area of the rat brain, the supra-rhinal auditory field, varied significantly less in their responses to distorted rat vocalizations than neurons in the primary auditory cortex, which receives relatively raw signals from the inner ear.
"We found that this higher area of the auditory cortex does indeed have a better capability for generalizing inputs than the lower cortical area," Geffen said. "This is consistent with the idea that invariant representations are created gradually through hierarchical processing in the auditory pathway."
She now plans to extend this line of research to determine whether a similar brain mechanism enables hearing and recognizing speech in the presence of noise.
Both studies were supported in part by the National Institutes of Health (R03DC013660, R01DC014479), the Klingenstein Foundation Award in Neurosciences, the Burroughs Wellcome Fund Career Award at Scientific Interface, Human Frontiers in Science Foundation Young Investigator Award, and Pennsylvania Lions Club.
Other coauthors on the Journal of Neurophysiology study are Diego Laplagne, Andrew Jaegle, John Briguglio, Laetitia Mwilambwe-Tshilobo, and Ryan G. Natan. Other coauthors on the eLife study are Laetitia Mwilambwe-Tshilobo, Sara Jones, Mark Aizenberg, and Ethan Goldberg.