MIT neuroscientists develop neural network model that can locate sounds as well as the human brain

The human brain is finely tuned not only to recognize particular sounds, but also to determine which direction they came from. By comparing differences in sounds that reach the right and left ear, the brain can estimate the location of environmental sounds. MIT neuroscientists have now developed a computer model that can also perform that complex task.
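
For a concrete sense of the cues involved, the short Python sketch below (an illustration for this article, not code from the study) estimates the two classic binaural cues from a synthetic left/right recording: the interaural time difference, found from the peak of a cross-correlation, and the interaural level difference, the loudness ratio between the ears. The sample rate, tone frequency, and delay are arbitrary assumptions.

```python
# Illustrative sketch (not from the study): estimate the two classic binaural
# cues from a synthetic left/right signal pair. All parameter values here are
# arbitrary assumptions chosen for demonstration.
import numpy as np

fs = 44_100                              # sample rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)           # 50 ms of signal

# Simulate a 500 Hz tone that arrives 10 samples earlier and somewhat louder
# at the left ear than at the right ear.
delay = 10                               # samples, roughly 0.23 ms
left = np.sin(2 * np.pi * 500 * t)
right = 0.7 * np.sin(2 * np.pi * 500 * (t - delay / fs))

# Interaural time difference (ITD): the lag at which the cross-correlation of
# the two ear signals peaks. With this argument order, a positive lag means
# the left ear leads.
lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(right, left, mode="full")
itd_ms = 1e3 * lags[np.argmax(xcorr)] / fs

# Interaural level difference (ILD): the energy ratio between the ears, in dB.
ild_db = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

print(f"ITD ~ {itd_ms:.2f} ms, ILD ~ {ild_db:.1f} dB")   # ~0.23 ms, ~3.1 dB
```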

The model, which consists of several convolutional neural networks, not only performs the task as well as humans do, but it also struggles in the same ways that humans do.
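
The study's exact architecture isn't described here, so the toy PyTorch model below is only a rough sketch of the general idea: a convolutional network that takes a two-channel (left-ear/right-ear) time-frequency representation and outputs scores over candidate source directions. The layer sizes, input shape, and 36-direction output grid are assumptions made purely for the example.

```python
# Toy illustration (not the study's architecture): a convolutional network
# that maps a two-channel (left/right ear) time-frequency input to scores
# over candidate source directions. All sizes here are assumptions.
import torch
import torch.nn as nn

class BinauralLocalizer(nn.Module):
    def __init__(self, n_directions: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_directions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 2 ears, frequency bins, time frames)
        return self.classifier(self.features(x).flatten(start_dim=1))

model = BinauralLocalizer()
dummy = torch.randn(1, 2, 64, 128)   # random stand-in for a binaural time-frequency input
print(model(dummy).shape)            # torch.Size([1, 36]): one score per candidate direction
```

In the published work the models were trained on simulated binaural audio from virtual environments; here a random tensor simply stands in for that kind of input.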

“We now have a model that can actually localize sounds in the real world,” says Josh McDermott in a statement. McDermott is an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “And when we treated the model like a human experimental participant and simulated this large set of experiments that people had tested humans on in the past, what we found over and over again is that the model recapitulates the results that you see in humans.”

Findings from the new study also suggest that humans’ ability to perceive location is adapted to the specific challenges of our environment, says McDermott, who is also a member of MIT’s Center for Brains, Minds, and Machines.

“The model seems to use timing and level differences between the two ears in the same way that people do, in a way that’s frequency-dependent,” McDermott says. The researchers also showed that when they made localization tasks more difficult, by adding multiple sound sources played at the same time, the computer models’ performance declined in a way that closely mimicked human failure patterns under the same circumstances.
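
That frequency dependence echoes a long-standing observation in hearing research, sometimes called the duplex theory: timing differences become ambiguous for high-frequency sounds, whose period is shorter than the range of possible interaural delays, so level differences carry more weight there. The sketch below (again illustrative, with assumed parameters) shows a cross-correlation estimate of the timing difference recovering the true delay for a low-frequency tone but landing on the wrong peak for a high-frequency one.

```python
# Illustrative sketch (not from the study): why timing cues are frequency-
# dependent. When a tone's period is comparable to the interaural delay, the
# cross-correlation has several near-identical peaks, so the timing estimate
# can land on the wrong one. All parameters are assumptions.
import numpy as np

fs = 44_100
t = np.arange(0, 0.05, 1 / fs)
delay = 10                                    # samples, ~0.23 ms left-ear lead

def estimated_itd_ms(freq_hz: float) -> float:
    """Estimate the interaural time difference for a pure tone of freq_hz."""
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * (t - delay / fs))
    lags = np.arange(-len(t) + 1, len(t))
    xcorr = np.correlate(right, left, mode="full")
    return 1e3 * lags[np.argmax(xcorr)] / fs

true_itd = 1e3 * delay / fs
print(f"true ITD:            {true_itd:.2f} ms")
print(f"estimate at 500 Hz:  {estimated_itd_ms(500):.2f} ms")   # recovers ~0.23 ms
print(f"estimate at 4000 Hz: {estimated_itd_ms(4000):.2f} ms")  # lands on a wrong peak
```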

“As you add more and more sources, you get a specific pattern of decline in humans’ ability to accurately judge the number of sources present, and their ability to localize those sources,” says lead author Andrew Francl. “Humans seem to be limited to localizing about three sources at once, and when we ran the same test on the model, we saw a really similar pattern of behavior.”

The researchers trained one set of models in a virtual world with no echoes, and another in a world where there was never more than one sound heard at a time. In a third, the models were exposed only to sounds with narrow frequency ranges, instead of naturally occurring sounds. When the models trained in these unnatural worlds were evaluated on the same battery of behavioral tests, they deviated from human behavior, and the ways in which they failed varied depending on the type of environment they had been trained in. These results support the idea that the localization abilities of the human brain are adapted to the environments in which humans evolved, the researchers say.

The researchers are now applying this type of modeling to other aspects of audition, such as pitch perception and speech recognition, and believe it could also be used to understand other cognitive phenomena, such as the limits on what a person can pay attention to or remember, McDermott says.

This study was published in Nature Human Behaviour.

Article written by Rhonda Errabelli


