Two separate studies published today in Nature indicate that, in the future, brain-computer interfaces (BCIs) could help restore communication for people who are unable to speak due to severe paralysis. In both studies, the researchers used brain implants that could pick up brain signals, which were then translated into sentences on a screen using algorithms. While this isn’t a new concept, what’s interesting is that both research teams were able to do it much faster and with greater precision than existing technologies.
In the Stanford study, researchers implanted electrodes into the brain of a patient with amyotrophic lateral sclerosis (ALS) in two areas associated with speech. The BCI was designed to detect brain activity when the patient tried to speak. Those signals were then fed into an algorithm that associated certain patterns of brain activity with phonemes—the sounds that make up speech. To train the algorithm, the researchers had the patient attempt to vocalize or silently mouth sample sentences over 25 sessions lasting approximately four hours each.
In the UC San Francisco and UC Berkeley study, researchers surgically placed a paper-thin sheet containing 253 electrodes onto the brain of a person severely paralyzed by a stroke. As in the Stanford study, the researchers had the patient train the algorithm by trying to speak so that it could learn which brain signals were associated with different phonemes. Those signals were then translated into facial expressions and voice modulation on a digital avatar.
Although the studies used slightly different approaches, the results were similar in terms of accuracy and speed. The Stanford study had an error rate of 9.1 percent when limited to a 50-word vocabulary and 23.8 percent when expanded to a 125,000-word vocabulary. After about four months, the Stanford algorithm was able to convert brain signals into words at about 68 words per minute. The UC San Francisco and Berkeley algorithm was able to decode at an average speed of 78 words per minute. It had an 8.2 percent error rate for a 119-word vocabulary and about a 25 percent error rate for a 1,024-word vocabulary.
Although a 23 to 25 percent error rate is too high for everyday use, it is a significant improvement over existing technology. At a news conference, Edward Chang, UCSF chair of neurological surgery and a co-author of the UCSF study, noted that the effective rate of communication for existing technology is a “labor-intensive” five to 15 words per minute, compared with the 150 to 250 words per minute of natural speech.
“Sixty to 70 wpm is a real milestone for our field as a whole because it comes from two different centers and two different approaches,” Chang said at the briefing.
That being said, these studies are more proof of concept than prime-time-ready technology. One potential problem is that these treatments require long sessions to train the algorithm. However, researchers from both teams told reporters at a press conference that they were hopeful algorithm training would become less intensive in the future.
“These are very early studies and we don’t have a huge database of other people’s data. As we make more of these recordings and get more data, we should be able to transfer what the algorithms learn from other people to new people,” says Frank Willett, a Howard Hughes Medical Institute research scientist and co-author of the Stanford study. Willett, however, noted that this was not guaranteed and that more research was needed.
Another problem is that the technology has to be easy enough for people to use at home, without requiring caregivers to go through complicated training. Brain implants are also invasive, and in these particular studies, the BCI had to be connected via wires to a device outside the skull that was in turn connected to a computer. There are also concerns about electrode degradation and the fact that these may not be permanent solutions. To reach consumer use, the technology will have to be rigorously vetted, which can be a long and expensive process.
In addition, the studies were conducted with patients who still had some ability to move. Some neurological conditions, such as late-stage ALS, can cause what is called “locked-in syndrome.” In this state, a person can still think, see, and hear, but can communicate only by blinking or other small movements. People with locked-in syndrome need this type of technology the most, but more research is needed to determine whether this method would work for them.
“We’ve crossed a performance threshold that we’re both excited about because it crosses the usability threshold,” Chang says, noting that the potential benefit of this technology is tremendous if it can be deployed safely and widely. “We are thinking very seriously about it and about what the next steps are.”