Wearing a Toronto Blue Jays cap on his head, William Johnson turns to his wife Ann and asks how she feels about the baseball team.
“Anything is possible,” replies Ann, 48, who lives in Regina.
Her husband jokingly responds that she doesn’t seem to have much confidence in them.
At this, Ann laughs, pauses, and says, “You’re right about that.”
It is the first conversation the couple have had in Ann’s own voice in 18 years, recorded as part of a clinical trial in California in which she is participating.
When she was 30 years old, Ann suffered a stroke and was diagnosed with locked-in syndrome, meaning she cannot speak and has very limited movement.
Since then, even simple conversations can take several minutes, because Ann relies on devices that require her to spell out each word with eye movements.
But new scientific advances show how artificial intelligence (AI) is making it easier for people with brain injuries to have more seamless conversations, like the one Ann had with her husband about the Blue Jays.
The study, published Wednesday in the journal Nature, shows how the sentences Ann intends to say can be spoken, in her own voice, by a digital avatar. Although commercial applications are years away, researchers and others consider it a significant advance in forming words quickly, and aloud, by interpreting brain signals.
“This is a really big breakthrough,” said Margaret Seaton, clinical research coordinator at the University of California, San Francisco (UCSF), who worked on the study.
“[Ann] described it as extremely emotional to hear her own voice after more than 18 years without one.”
Faster than previous voice tools
During an online press conference on Tuesday, the study’s lead investigator, Edward Chang, said that “the loss of speech after injury is devastating.”
“Speech is not just about communicating words, but also about who we are. Our voice and our expressions are part of our identity,” said Chang, who is also a professor of neurological surgery at the UCSF Weill Institute for Neurosciences.
For many Canadians, paralysis that leaves them unable to speak can stem from brain damage caused by an accident or stroke, or from a diagnosis such as amyotrophic lateral sclerosis (ALS).
Researchers’ ability to convert brain signals into words isn’t new; what makes this latest study significant, experts say, is the speed at which the technology operates and its ability to have a virtual avatar speak the words.
Dr. Lorne Zinman, a neurologist at Sunnybrook Hospital and director of Canada’s largest ALS clinic, calls the devices in this research an “incredible innovation.”
“Most ALS patients will develop speech difficulties, and many will lose their ability to speak,” Zinman said.
“The development of new technologies that allow them to communicate can have a major impact on improving their quality of life.”

About two years ago, Chang and his team at UCSF showed how electrodes implanted in a person’s brain could translate neural activity into words written on a screen.
At the time, the technology could decode only about 15 words per minute, but the group’s latest research shows how advances have pushed that to 78 words per minute.
On average, a typical person speaks 150 to 200 words per minute, so while it’s still not on par with normal speech, the researchers say they’re getting closer to restoring a natural flow.
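Spelling out the arithmetic in that comparison, using only the figures quoted above:

```python
# Back-of-the-envelope comparison using the rates quoted in the article.
old_rate, new_rate = 15, 78            # words per minute, per the study
natural_low, natural_high = 150, 200   # typical conversational speech

print(f"speed-up over the earlier system: {new_rate / old_rate:.1f}x")
print(f"share of natural speech: {new_rate / natural_high:.0%} to {new_rate / natural_low:.0%}")
```

That works out to roughly a fivefold speed-up over the earlier system, while still reaching only about two-fifths to half the pace of typical conversation.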
“We think these results are important because they open the door to new applications where people with paralysis will have personalized interactions with family and friends,” Chang said.

The device correctly decoded 75% of words
For this particular study, Chang and his team implanted a sheet of 253 electrodes on the surface of Ann’s brain, over areas known to be crucial for speech production.
For a person to speak, the brain sends signals to different parts of the face, such as the tongue, jaw and lips. But in Ann’s case, her stroke left those muscles unable to respond to the signals.
To pick up the signals her brain was trying to transmit, the researchers placed a port in Ann’s head and used a wire to connect the electrodes in her brain to a series of computers.
For about two weeks, Ann worked with the system, repeatedly trying to say different sentences silently by moving her mouth as much as she could.
The sentences included more than 1,000 words, which Seaton says cover 85 percent of the average person’s daily vocabulary.
The data collected was then fed into AI algorithms, training the system to recognize the signals Ann’s brain sends to produce different speech sounds.
The researchers then ran a testing phase, in which Ann would attempt the phrases and the algorithm would verbalize them based on the activity it picked up from her brain.
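The study’s actual decoding models are far more sophisticated, but the basic idea, learning a mapping from neural-signal features to intended words and then applying it to new brain activity, can be sketched in a few lines of Python. Everything below (the random stand-in data, the five-word vocabulary, the simple scikit-learn classifier) is an illustrative assumption, not the researchers’ code:

```python
# A toy sketch of a brain-to-text decoder: train a classifier to map
# neural-signal features to intended words, then decode new activity.
# The random data, tiny vocabulary and simple classifier are stand-ins
# for illustration only, not the study's actual methods.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in recordings: 500 trials, each summarized as one feature per
# electrode in a 253-electrode array like the one described above.
n_trials, n_electrodes = 500, 253
vocabulary = ["anything", "is", "possible", "you", "right"]

X = rng.normal(size=(n_trials, n_electrodes))        # neural features
y = rng.integers(0, len(vocabulary), size=n_trials)  # intended words

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training phase": learn which activity patterns go with which words.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Testing phase": decode held-out brain activity into words. (With
# random stand-in data the output is at chance level; with real neural
# recordings the classifier would learn genuine patterns.)
decoded = [vocabulary[i] for i in decoder.predict(X_test[:5])]
print("decoded words:", decoded)
```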

The findings show that the computer got about three out of four words right; roughly 25 percent of the time, the algorithm misidentified the word Ann wanted to say.
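As a rough illustration of the arithmetic behind that figure, performance in this field is often summarized as a word error rate, the share of decoded words that differ from what the speaker intended. The two sentences below are invented for the example, and the word-by-word comparison is a simplification (real evaluations typically align whole sentences with an edit distance):

```python
# Toy word-error-rate calculation (invented sentences; real evaluations
# align sentences with an edit distance rather than comparing by position).

intended = "anything is possible you are right about that".split()
decoded  = "anything was possible you are right about them".split()

# Count positions where the decoded word differs from the intended one.
errors = sum(a != b for a, b in zip(intended, decoded))
word_error_rate = errors / len(intended)

print(f"word error rate: {word_error_rate:.0%}")  # 2 of 8 wrong -> 25%
```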
“It was really exciting to see how quickly she could get the computer to understand what she was trying to say,” her husband William said.
The researchers were also able to customize the avatar’s voice to sound like Ann’s, building an algorithm to synthesize speech and feeding it a recording of her voice from her wedding.
The study conducted by the University of California researchers was published in Nature on Wednesday alongside another, led by Francis Willett and his team at Stanford University.
Willett’s study also looked at ways to collect brain activity and turn it into intended words, but his team did so by monitoring individual neurons with an array of very small electrodes, and found that a person with ALS was able to communicate at 62 words per minute in text written through a device.
Right now, Zinman says the ALS patients he sees at Sunnybrook can communicate in different ways.
At first, he says, a patient can write or type, but the disease often ends up taking away the ability to move.
When that happens, he says, people can use a device that relies on their eye movements to spell out words.
“You can imagine how long it would take to spell a sentence with your eyes,” he said.
With these new devices, Zinman says the person only has to think of a word for it to appear.
“That’s the really exciting part about these brain-computer interfaces,” he said, adding that they will allow patients to converse with their loved ones.
Years away from commercial devices
As significant as these findings are, the University of California researchers acknowledge that this technology is still years away from actually being used in people’s daily lives.
When it comes to better algorithms that can more accurately decode brain signals, Seaton said improvements could be seen in the near future.

But Seaton says they’d also like to see the device go wireless and portable, which will likely take much longer to become a reality.
For now, Seaton describes the port on Ann’s head as an “active wound site” that needs to be monitored. As a result, she says, the technology can only be used in a laboratory with the support of a researcher.
Updates to the device, along with regulatory approval, are likely more than five years away, Seaton estimates.
Yalda Mohsenzadeh, an assistant professor of computer science and a member of the Brain and Mind Institute at Western University in London, Ontario, says she hopes that at some point wearable devices placed on top of the scalp can be used, so that surgery isn’t necessary for the electrodes.

In addition, she noted that these devices must demonstrate that they can be used safely and reliably over a long period of time, across different types of people.
“For a technology like this to be used realistically, you first need to show that it can work under all of these variabilities that we have within and between individuals,” she said.
Seaton says they are working to recruit more people with different brain injuries, to validate their findings in a larger group.
As for Ann, she hopes that her involvement has helped advance this field and that more breakthroughs are just around the corner.
“Hopefully one day this will become something that is achievable for people who can’t speak,” William said.