Facebook has issued an update on its ambitious plans for a brain-reading computer interface, thanks to a team of scientists supported by Facebook Reality Labs at the University of California, San Francisco. The UCSF researchers have just published the results of an experiment in decoding the speech of people with implanted electrodes. Their work demonstrates a method to quickly "read" entire words and sentences from the brain, bringing Facebook a bit closer to its dream of a non-invasive system for typing with your thoughts.
People can already type with brain-computer interfaces, but those systems often require them to spell out individual words with a virtual keyboard. In this experiment, published today in Nature Communications, test subjects listened to multiple-choice questions and spoke the answers aloud. An electrode array recorded activity in parts of the brain associated with speech comprehension and production, looking in real time for patterns that matched specific words and sentences.
When participants heard someone ask, "Which musical instrument do you prefer to listen to?" they would respond with one of several options such as "violin" or "drums" while the array recorded their brain activity. The system guessed when they were hearing and answering a question, then guessed the content of both speech events. Its predictions were informed by prior context: once the system determined which question a participant had heard, it could narrow the range of likely answers. It produced results with an accuracy of 61 to 76 percent, compared to a chance rate of 7 to 20 percent.
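That context-narrowing idea can be illustrated with a toy Bayesian sketch. This is not the UCSF model; the question set, answer scores, and uniform prior below are all invented for illustration. It shows only the general principle: once the question is decoded, answers that don't fit it get zero prior weight.

```python
# Toy sketch (not the UCSF model): combine a classifier's noisy "neural"
# likelihoods over answers with a prior conditioned on the decoded question.

# Hypothetical question contexts, each with its set of valid answers.
QUESTIONS = {
    "instrument": ["violin", "drums", "piano"],
    "temperature": ["hot", "cold", "fine"],
}

def decode_answer(likelihoods, decoded_question):
    """Return a posterior over answers.

    likelihoods: dict mapping answer -> P(signal | answer), e.g. from a
        classifier run over the recorded activity.
    decoded_question: which question the system believes was just heard.
    """
    valid = set(QUESTIONS[decoded_question])
    posterior = {}
    for answer, lik in likelihoods.items():
        # Context prior: uniform over answers valid for this question,
        # zero for answers that don't fit the question at all.
        prior = 1.0 / len(valid) if answer in valid else 0.0
        posterior[answer] = lik * prior
    total = sum(posterior.values())
    return {a: p / total for a, p in posterior.items()}

# The classifier is unsure between "drums" and "cold", but the decoded
# question ("instrument") rules "cold" out entirely.
scores = {"violin": 0.2, "drums": 0.35, "piano": 0.1, "hot": 0.05, "cold": 0.3}
post = decode_answer(scores, "instrument")
best = max(post, key=post.get)
print(best)  # "drums"
```

In this sketch, "cold" scores nearly as high as "drums" on the signal alone, but the question context eliminates it, which is exactly how conditioning on the decoded question lifts accuracy above chance.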
"Here we show the value of decoding both sides of a conversation: both the questions someone hears and what they answer," said lead author Edward Chang, a professor of neurosurgery at UCSF, in a statement. But Chang noted that this system so far recognizes only a very limited number of words; participants were asked just nine questions with 24 total answer options. The subjects in the study, who were being prepared for epilepsy surgery, used highly invasive implants. And they spoke their answers aloud; they didn't just think them.
That is very different from the system Facebook described in 2017: a non-invasive, mass-market cap that would let people type more than 100 words per minute without manual text entry or speech-to-text transcription. Facebook is also highlighting a Reality Labs-backed headset that reads brain activity with near-infrared light, which could make a non-invasive interface more plausible.
As Facebook tells it, virtual and augmented reality glasses could make use of brain reading even with very limited capabilities. "Even the ability to decode a handful of imagined words, such as 'select' or 'delete,' would offer entirely new ways of interacting with today's VR systems and the AR glasses of tomorrow," Reality Labs writes. Facebook is not the only major company working on brain-computer interfaces: Elon Musk's Neuralink recently revealed new work on a thread-like brain-reading implant.
Even if we never see this brain-reading technology in Facebook products (something that would probably raise just a few concerns), researchers could use it to improve the lives of people who cannot talk due to paralysis or other conditions. "Currently, patients with speech loss due to paralysis are limited to spelling words out slowly," Chang said. "But in many cases the information needed to speak fluently is still in their head. We only need the technology to express it."