Whether it’s the misconception that the moon landings never happened or the false claim that Covid vaccines contain microchips, conspiracy theories abound, sometimes with dangerous consequences.
Now researchers have discovered that such beliefs can be changed by having a conversation with artificial intelligence (AI).
“Conventional wisdom says that people who believe in conspiracy theories rarely, if ever, change their minds, especially in the face of evidence,” said Dr. Thomas Costello, co-author of the American University study.
He added that this is thought to be because people adopt such beliefs to satisfy various needs, such as the desire for control. However, the new study offers a different take.
“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘fallen down the rabbit hole’ and come to believe in a conspiracy theory,” the team wrote.
The researchers say the approach relies on an AI system that can draw on a wide range of information to generate conversations that encourage critical thinking and provide personalized, fact-based counterarguments.
“The AI knew in advance what the person believed and because of that, it was able to tailor its persuasion to their precise belief system,” Costello said.
Writing in the journal Science, Costello and his colleagues reported how they conducted a series of experiments with 2,190 participants who believed in conspiracy theories.
While the experiments varied slightly, all participants were asked to describe a particular conspiracy theory they believed in and the evidence they thought supported it. This information was then fed into an artificial intelligence system called “DebunkBot”.
Participants were also asked to rate on a 100-point scale how true they believed the conspiracy theory to be.
They then engaged in a three-round conversation with the AI system about either their conspiracy theory or a non-conspiracy topic. Afterward, participants again rated how true they thought their conspiracy theory was.
The results revealed that those who discussed non-conspiracy topics only slightly reduced their “truth” rating after the conversation. However, those who discussed their conspiracy theory with AI showed, on average, a 20% drop in their belief that it was true.
The team said the effects appeared to last for at least two months, while the approach worked for almost all types of conspiracy theories, though not those that were true.
The researchers added that the size of the effect depended on factors including how important the belief was to the participant and their trust in the AI.
“About one in four people who began the experiment believing in a conspiracy theory came out of the experiment without that belief,” Costello said.
“In most cases, the AI produced only incremental change, making people a little more skeptical and unsure, but a select few were completely disabused of their conspiracy.”
The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other similar ideas, at least to a small degree, while the approach could have real-world applications: for example, AI could respond to conspiracy theory-related posts on social media.
Professor Sander van der Linden of the University of Cambridge, who was not involved in the work, wondered whether people would willingly interact with such AI in the real world.
He also said it was unclear whether similar results would be found if participants had chatted with an anonymous human, and that there are open questions about how the AI was persuading conspiracy believers, given that the system also uses strategies such as empathy and affirmation.
But he added: “Overall, it’s a really novel and potentially important finding and a good illustration of how AI can be leveraged to combat misinformation.”