A Brain Device Capable of Speaking Our Thoughts Is on the Horizon

Scientists in the Netherlands successfully created an auditory output from brain impulses that could be played back. The researchers stumbled upon the phenomenon while studying neural activity in an individual with epilepsy. Small electrodes, intended to pinpoint the specific source of her seizures, were used to capture brain signals while she spoke. Although the resulting sounds were incomprehensible as words, they demonstrate progress in the field of brain-computer interfaces (BCIs).

Those who have anarthria, the inability to speak due to neurological disorders, may use devices that convert the motion of other parts of the body into letters or words. In a recent study, a BCI implanted in an individual with locked-in syndrome was capable of producing 90 characters per minute; however, this rate is still far short of the roughly 150 characters per minute of a real-time conversation.

It has long been a goal of neuroscientists to extract speech signals from people’s brains more quickly; the difficulty is isolating the brain signals that relate solely to speech. For a person to speak, a signal must first travel from the premotor area involved in planning speech to the motor cortex that controls the actual movement of the mouth. The issue is that, much like controlling arm movement, speech articulation and output are very intricate. Speech also depends on feedback: a roughly 50-millisecond loop between vocalizing a sound and hearing it.
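
To make that timing constraint concrete, here is a minimal sketch in Python of the loop a real-time speech BCI would have to close. All the function names are hypothetical placeholders, not any team's actual software; the point is only that each chunk of neural data must be decoded and turned into audio within the roughly 50-millisecond window the brain expects between articulating a sound and hearing it.

```python
import time

FEEDBACK_BUDGET_S = 0.050  # ~50 ms between vocalizing a sound and hearing it
CHUNK_S = 0.010            # process neural data in 10 ms chunks

def read_neural_chunk():
    """Placeholder: fetch the latest 10 ms of multi-channel electrode samples."""
    return [0.0] * 128  # hypothetical 128-channel recording

def decode_to_audio(chunk):
    """Placeholder: map neural features to a short snippet of waveform.
    A real system would run a trained decoder and vocoder here."""
    return bytes(len(chunk))

def play_audio(snippet):
    """Placeholder: hand the waveform snippet to the sound card."""
    pass

for _ in range(500):  # simulate ~5 seconds of continuous speech
    start = time.monotonic()
    play_audio(decode_to_audio(read_neural_chunk()))
    elapsed = time.monotonic() - start
    if elapsed > FEEDBACK_BUDGET_S:
        # Too slow: the speaker hears their own voice late, which disrupts
        # fluent articulation much like a delayed echo on a phone call.
        print(f"latency budget exceeded: {elapsed * 1000:.1f} ms")
    time.sleep(max(0.0, CHUNK_S - elapsed))
```

If decoding and synthesis together overrun that budget, the delayed self-hearing itself interferes with speaking, which is why the groups below are pushing so hard toward true real-time operation.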

UC San Francisco researchers created a program that could generate speech. Recent testing of the device used sensors inserted into the speech regions of a stroke survivor’s brain. Although the synthesized speech it produced was not quite real-time, optimized technology could allow for a shorter response time.

“We have this super limited data set of just 100 words, and we also had a very short experimental time so we weren’t able to provide her with ample time to practice,” said Christian Herff, a computer scientist at Maastricht University and one of the lead authors of the new study. “We just wanted to show that if you train on audible speech, you can get something on imagined speech as well.”

“We were able to use his mimed, whispered signals to produce, and to decode the language output,” says Gopala Anumanchipalli, a computer and neural engineer at UCSF and UC Berkeley who worked on the research. “And we are right now in the process of generating speech, in real time, for that subject.”

Eddie Chang, a neuroscientist at UCSF, and his team built on Herff’s method to create a system that was more accurate and understandable. With a vocabulary of only 50 words, Chang’s system produced approximately 12 words per minute, making it the first artificial speech device successfully used by an individual with anarthria. However, because there is no feedback mechanism, a person cannot amend a word selection if the machine makes a mistake. Furthermore, though closer than previous studies, the speech was still not in real time.
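
To see what decoding from a fixed 50-word vocabulary involves, consider this toy sketch. It is not Chang’s actual model: the vocabulary is abbreviated, and the per-word “templates” stand in for whatever a trained neural network would learn. It does illustrate the limitation above, though: the decoder always emits its best guess, and the user has no channel to reject a wrong pick.

```python
import numpy as np

# Stand-in for the 50-word vocabulary (abbreviated here)
VOCAB = ["water", "hungry", "hello", "thirsty", "family"]

rng = np.random.default_rng(0)
# Hypothetical per-word neural "templates" a trained model might have learned
templates = {word: rng.standard_normal(128) for word in VOCAB}

def decode_word(features):
    """Score every vocabulary word against the recorded activity and
    return the best match. A real decoder would be a trained neural
    network, often combined with a language model."""
    return max(VOCAB, key=lambda w: float(features @ templates[w]))

# Simulate one attempted word: activity resembling "water" plus noise
attempt = templates["water"] + 0.5 * rng.standard_normal(128)
print(decode_word(attempt))
# Whatever comes out is final: without a feedback mechanism, the user
# cannot correct a wrong selection before the device speaks it.
```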

“Most people use thousands of words, not 50,” says Frank Guenther, a Boston University speech neuroscientist. “The idea behind a closed-loop system was to just give them the ability to create acoustics that could be used to produce any sound. On the other hand, a 50-word system would be much better than the current situation if it worked very reliably, and Chang’s team is much closer to the reliable decoding end of things than anyone else.”

The intelligibility of synthesized consonant and vowel sounds may improve with more feedback from the specific regions of the brain that produce speech. If that is not achievable, developing better algorithms for understanding and predicting what a mind is attempting to say will become increasingly critical.

“So currently our focus is on using a more complex algorithm that is capable of higher-quality speech, and really focusing on the training aspect,” said Herff. The ultimate goal is to achieve some kind of authentic speech quality and comprehension. “That’s the common direction all of the groups doing this are going toward—doing it in real time,” added Anumanchipalli.
