It’s probably a good idea to keep your opinions to yourself when your friend gets a terrible new haircut – but soon you may not have a choice.
That’s because scientists at the University of Texas at Austin have trained an artificial intelligence (AI) to read a person’s mind and turn their innermost thoughts into text.
Three study participants listened to stories while lying in an MRI machine, while an AI ‘decoder’ analyzed their brain activity.
They were then asked to read another story or make up their own, after which the decoder could convert the MRI data into text in real time.
The breakthrough raises concerns about “mental privacy,” as the technology could be a first step toward eavesdropping on other people’s thoughts.
Lead author Jerry Tang said he would not offer a “false sense of security” by claiming the technology could never eavesdrop on people’s minds, warning that it could be “misused” in the future.
HOW DOES IT WORK?
Three study participants listened to stories while lying in an MRI machine, which collected brain activity.
An AI tool called the “decoder” was then given the MRI data and the stories they listened to, and trained to associate the brain activity with certain words.
The participants were then placed back in the MRI machine and asked to either read a different story or make up a new one in their heads.
The decoder could then convert the MRI data into text in real time and capture the main points of their new story.
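The train-then-decode loop described above can be sketched, very loosely, as learning a mapping from brain-scan patterns to word representations. Everything below is an illustrative assumption — a toy vocabulary, simulated “brain responses,” and a simple ridge-regression decoder — not the study’s actual model, which the article says uses ChatGPT-style language processing on real fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with made-up 3-d "embedding" vectors (purely illustrative).
vocab = {
    "dog":   np.array([1.0, 0.0, 0.0]),
    "house": np.array([0.0, 1.0, 0.0]),
    "drive": np.array([0.0, 0.0, 1.0]),
}
words = list(vocab)
E = np.stack([vocab[w] for w in words])          # (3 words, 3 dims)

# Simulated "brain responses": each word evokes a characteristic voxel
# pattern plus noise -- a stand-in for real fMRI training data.
n_voxels, n_trials = 20, 60
true_map = rng.normal(size=(3, n_voxels))        # embedding -> voxel pattern
labels = rng.integers(0, 3, size=n_trials)       # which word on each trial
X = E[labels] @ true_map + 0.1 * rng.normal(size=(n_trials, n_voxels))
Y = E[labels]                                    # targets: word embeddings

# "Training" step: ridge regression (closed form) learns a
# voxel-pattern -> embedding mapping from the paired data.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

def decode(scan):
    """Map a voxel pattern to the nearest vocabulary word in embedding space."""
    pred = scan @ W
    dists = np.linalg.norm(E - pred, axis=1)
    return words[int(np.argmin(dists))]

# "Decoding" step: a fresh, noisy response to the word "drive".
new_scan = vocab["drive"] @ true_map + 0.1 * rng.normal(size=n_voxels)
print(decode(new_scan))
```

This also illustrates why the researchers say the system cannot be applied to someone secretly: the mapping `W` is fitted to one person’s paired scan-and-story data, so without hours of that person’s training data there is nothing to decode with.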
But he said: “We take very seriously the concern that it could be used for bad purposes, and we want to spend a lot of time going forward trying to avoid that.
“I think right now, while the technology is at such an early stage, it’s important to be proactive by, for example, establishing policies that protect people’s mental privacy and give people rights over their own thoughts and brain data.
“We want to make sure people use it only when they want to, and that it helps them.”
Indeed, the technology is not yet a threat to privacy, as it took 16 hours of training before the AI could successfully interpret a participant’s thoughts.
Even after that, it couldn’t exactly replicate the stories they read or created, just capturing the main points.
For example, when a participant listened to a speaker say “I don’t have my driver’s license yet,” the decoder rendered their thoughts as “she has not even started to learn to drive yet.”
Participants were also able to “sabotage” the technology, using methods such as mentally enumerating the names of animals to prevent it from reading their minds.
Researchers found they could read a person’s mind with about 50 percent accuracy by using a new MRI scanning method
The study, published in Nature Neuroscience, reveals that the decoder uses language processing technology similar to that of ChatGPT, the AI chatbot.
ChatGPT is trained on a huge amount of text data from the Internet, which allows it to generate human-like text in response to a given prompt.
The brain has its own “alphabet” made up of 42 different elements, each referring to a specific concept such as size, color or location, which combine to form our complex thoughts.
Each ‘letter’ is handled by a different part of the brain, so by combining signals from all these parts it is possible to read someone’s mind.
The US-based team did this by recording MRI data from three areas of the brain connected to natural language while the participants listened to 16 hours of podcasts.
The three brain regions analyzed were the prefrontal network, the classical language network, and the parietal-temporal-occipital association network.
The algorithm was then given the scans and compared patterns in the audio with patterns in the recorded brain activity.
It could pick up what the person was thinking about half the time, producing text that closely, and sometimes exactly, matched the words they listened to – worked out from their brain activity alone.
The technology was also able to interpret what people saw when they watched silent movies, or their thoughts as they imagined telling a story.
Unlike other mind-reading technology, it works as people think of any word, not just those on a preset list – although it struggles with pronouns such as “he” and “me”.
It detects activity in language-forming brain areas, rather than how someone imagines moving their mouth to form specific words.
Dr. Alexander Huth, senior author of the study, said: “We were quite shocked that this works as well as it does. This is a problem I’ve been working on for 15 years.”
The researchers say the breakthrough could help people who are mentally aware but unable to speak, such as stroke victims or those with motor neuron disease.
Silicon Valley is very interested in mind-reading technology, which could one day allow people to type just by thinking the words they want to communicate.
Elon Musk’s company, Neuralink, is working on a brain implant that could provide direct communication with computers.
But the new technology is unusual in its field in that it reads minds without a brain implant, eliminating the need for surgery.
While it currently requires a bulky, expensive MRI machine, people may one day be able to wear patches on their heads that use light waves to probe the brain and measure blood flow, allowing their thoughts to be decoded as they go about their day.
Dr. Huth added: “For a non-invasive method, this is a real leap forward from what has been done before, which usually consists of a few words or short sentences.”
Some worry the technology could be used on someone without their knowledge – for example, by an authoritarian regime interrogating political prisoners, or an employer spying on employees. But the researchers say the system can only read an individual’s mind after being trained on their thought patterns, so it could not be applied to anyone secretly.
Dr. Huth said, “If people don’t want something decoded from their brains, they can control it using just their cognition — they can think of other things, and then it all goes wrong.”
Mind-reading AI turns your thoughts into images with 80% accuracy
Artificial intelligence can create images from text prompts, but scientists revealed a gallery of images the technology produces by reading brain activity.
The new AI-powered algorithm reconstructed about 1,000 images, including a teddy bear and an airplane, from brain scans with 80 percent accuracy.
Osaka University researchers used the popular Stable Diffusion model, a text-to-image system similar to OpenAI’s DALL-E 2, which can generate images from text input.
The team showed the participants individual sets of images and collected fMRI (functional magnetic resonance imaging) scans, which the AI then decoded.
Scientists fed the AI the brain activity of four study participants, and the software then reconstructed what the participants saw from the scans. The top row shows the original images shown to participants and the bottom row shows the AI-generated images.