Noisy sound inputs pass through networks of excitatory and inhibitory neurons in the auditory cortex that clean up the signal (in part guided by the listener's attention) and detect the distinctive features of sounds, allowing the brain to recognize communication sounds regardless of differences in how they are pronounced by the speaker and of the ambient noise. Credit: Manaswini Kar
In a paper published today in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine learning model that helps explain how the brain perceives the meaning of communication sounds, such as animal calls or spoken words.
The algorithm described in the study models how social animals, including monkeys and guinea pigs, use the sound-processing networks in their brains to distinguish between categories of sounds, such as calls signaling mating, food, or danger, and act on them accordingly.
The study is an important step toward understanding the intricacies and complexities of the neural processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and for improving hearing aids.
“More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. It’s important to understand the biology of voice recognition and find ways to improve it,” said senior author Srivatsun Sadagopan, Ph.D., associate professor of neuroscience at Pitt. “But the process of vocal communication is fascinating in and of itself. The way our brains interact with one another and can take ideas and convey them through sound is nothing short of magical.”
Humans and animals encounter an astonishing diversity of sounds every day, from the cacophony of the jungle to the hum inside a crowded restaurant. Despite the sound pollution of the world that surrounds us, humans and other animals are able to communicate with and understand one another, whatever the speaker's accent or pitch of voice.
When we hear the word “hello,” for example, we recognize its meaning regardless of whether it is said with an American or British accent, whether the speaker is a woman or a man, or whether we are in a quiet room or at a busy intersection.
The team started with the intuition that the way the human brain recognizes and picks up the meaning of communication sounds may be similar to how it recognizes faces as distinct from other objects. Faces are highly diverse, but they share some common characteristics.
Instead of matching every face we encounter against a perfect “typical” face, our brain detects useful features, such as the eyes, nose, and mouth and their relative positions, and builds a mental map of these small characteristics that define a face.
In a series of studies, the team showed that communication sounds may likewise be built from such small characteristics. The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals.
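The article does not spell out the model's architecture, but the general idea, learning a compact set of informative call features and classifying new sounds against them, can be sketched briefly. In the Python sketch below, MFCC summaries stand in for those informative features and a simple logistic-regression classifier stands in for the sound-processing network; the file paths, call labels, and the choice of MFCCs are illustrative assumptions, not details from the study.

```python
# Minimal, illustrative sketch of a feature-based call classifier.
# This is NOT the authors' model: MFCC summaries stand in for informative
# spectro-temporal call features, and logistic regression stands in for the
# sound-processing network. File paths and labels below are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def call_features(path, sr=22050, n_mfcc=20):
    """Summarize one call as the mean and spread of its MFCC trajectories."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled clips: (audio file, call category) pairs.
clips = [
    ("calls/whine_01.wav", "whine"),
    ("calls/chut_01.wav", "chut"),
    # ... more labeled examples of each call category
]

X = np.array([call_features(path) for path, _ in clips])
labels = np.array([label for _, label in clips])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the study itself, the readout of such a model was compared against both neural activity and animal behavior, as described next.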
To test whether the brain's responses matched the model, they recorded brain activity from guinea pigs listening to the communication sounds of their kin. Neurons in regions of the brain that are responsible for processing sounds lit up with a flurry of electrical activity when they heard a sound containing features present in specific types of these calls, similar to the machine learning model.
They then wanted to check the performance of the model against the real-life behavior of the animals.
Guinea pigs were placed in an enclosure and exposed to different categories of sounds, squeaks and grunts that are categorized as distinct vocal signals. The researchers then trained the guinea pigs to walk to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.
Next, they made the task more challenging: to mimic the way humans learn the meaning of words spoken by people with different accents, the researchers ran the guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.
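To make these manipulations concrete, here is a minimal Python sketch of comparable perturbations applied to a single call: speeding it up or slowing it down, shifting its pitch, and adding background noise or a simple echo. The clip path, parameter values, and the use of the librosa library are assumptions for illustration; the study's actual software and settings are not described in the article.

```python
# Illustrative perturbations of a single (hypothetical) guinea pig call.
# Parameter values are arbitrary examples, not those used in the study.
import numpy as np
import librosa

y, sr = librosa.load("calls/whine_01.wav", sr=None)  # hypothetical clip

faster = librosa.effects.time_stretch(y, rate=1.25)        # speed up
slower = librosa.effects.time_stretch(y, rate=0.80)        # slow down
higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)  # raise pitch
lower = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)  # lower pitch

# Additive background noise at a chosen signal-to-noise ratio (in dB).
snr_db = 10
noise = np.random.randn(len(y))
noise *= np.sqrt(np.mean(y**2) / 10**(snr_db / 10)) / np.sqrt(np.mean(noise**2))
noisy = y + noise

# A crude single-reflection "echo": the call plus a delayed, attenuated copy.
delay = int(0.15 * sr)                        # 150 ms delay
echoed = np.concatenate([y, np.zeros(delay)])
echoed[delay:] += 0.5 * y
```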
Not only did the animals perform the task as consistently as if the calls they heard were unaltered, they also continued to perform well despite the artificial noise and echoes. Better still, the machine learning model described their behavior (and the underlying activation of sound-processing neurons in their brains) perfectly.
As a next step, the researchers are translating the model’s accuracy from animals to human speech.
“From an engineering standpoint, there are much better speech recognition models out there. The unique thing about our model is that we have a close correspondence with behavior and brain activity, which gives us more insight into the biology. In the future, these insights could be used to help people with neurodevelopmental conditions or to engineer better hearing aids,” said lead author Satyabrata Parida, Ph.D., a postdoctoral fellow in the Department of Neurobiology at Pitt.
“Many people have conditions that make it difficult for them to recognize speech,” said Manaswini Kar, a student in Sadagopan’s lab. “Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle.”
More information: Srivatsun Sadagopan et al, Adaptive mechanisms facilitate robust performance in noise and echo in an auditory classification paradigm, Communications Biology (2023). DOI: 10.1038/s42003-023-04816-z
Citation: Machine Learning Model Sheds Light on How Brains Recognize Communication Sounds (2023, May 2), retrieved May 2, 2023 from https://phys.org/news/2023-05-machine-brains-communic.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.