Researchers say artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process.
The researchers found that a system which generates group statements reflecting both majority and minority views can help people find common ground.
Professor Chris Summerfield, co-author of the research from the University of Oxford, who was working at Google DeepMind at the time the study was conducted, said the AI tool could have multiple purposes.
“What I would like to see it used for is to give UK political leaders a better idea of what people in the UK really think,” he said, noting that polls gave only limited information, while forums such as citizens’ assemblies were often expensive, logistically challenging and restricted in size.
Writing in Science magazine, Summerfield and his Google DeepMind colleagues report how they built the “Habermas Machine,” an artificial intelligence system named after German philosopher Jürgen Habermas.
The system works by taking written opinions from individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but also allows the statement with the highest support to be selected.
Participants can also feed critiques of this initial group statement back into the Habermas Machine, producing a second set of AI-generated statements that can be rated again and a revised final statement selected.
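The two-round loop described above can be sketched in a few lines of Python. This is purely illustrative: the actual Habermas Machine uses a large language model to draft statements, whereas the function and variable names here (`generate_statements`, `select_preferred`, and the dummy ratings) are hypothetical stand-ins.

```python
# Illustrative sketch of the two-round mediation loop, with a toy
# statement generator standing in for the real system's language model.

def generate_statements(opinions, critiques=None, n_candidates=4):
    """Stand-in drafter: the real system would use an LLM here."""
    base = " / ".join(opinions)
    suffix = f" (revised for: {'; '.join(critiques)})" if critiques else ""
    return [f"Draft {i + 1}: {base}{suffix}" for i in range(n_candidates)]

def select_preferred(candidates, ratings):
    """Pick the candidate with the highest total rating across members.

    ratings: one list per group member, scoring each candidate."""
    totals = [sum(scores) for scores in zip(*ratings)]
    return candidates[totals.index(max(totals))]

# Round 1: individual opinions -> candidate statements -> ratings -> best pick
opinions = ["Fund childcare fully", "Support childcare, but means-tested"]
candidates = generate_statements(opinions)
ratings = [[3, 1, 2, 2], [2, 1, 3, 2]]  # each member scores each candidate
initial = select_preferred(candidates, ratings)

# Round 2: critiques of the initial statement feed a revised set, rated again
critiques = ["Too vague on cost"]
revised = generate_statements(opinions, critiques)
final = select_preferred(revised, [[1, 3, 2, 2], [2, 3, 1, 2]])
```

The key design point is that the group's ratings do double duty: they select the winning statement in each round and, in the real system, also provide training signal for the model.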
The team used the system in a series of experiments with a total of more than 5,000 participants in the United Kingdom, many of whom were recruited through an online platform.
In each experiment, the researchers asked participants to respond to topics ranging from the role of monkeys in medical research to religious teaching in public education.
In one experiment, involving roughly 75 groups of six participants each, the researchers found that participants preferred the Habermas Machine’s initial group statement to one produced by human mediators 56% of the time. The AI-generated statements were also rated as higher quality, clearer and more informative, among other characteristics.
Another series of experiments found that the full two-step process with the Habermas Machine increased group agreement relative to participants’ initial opinions before the AI mediation began. Overall, the researchers found that agreement increased by eight percentage points on average, equivalent to four people in 100 changing their view on an issue where opinions were originally split evenly.
However, the researchers emphasize that participants did not always simply sit on the fence or switch to back the majority opinion.
The team found similar results when they used the Habermas Machine in a virtual town hall in which 200 participants, representative of the UK population, were asked to deliberate on topics ranging from Brexit to universal childcare.
The researchers say that a more detailed analysis, looking at the way the artificial intelligence system numerically represents the texts it receives, sheds light on how it generates group statements.
“What [the Habermas Machine] seems to be doing is broadly respecting the opinion of the majority in each of our small groups, but trying to write a text that does not make the minority feel deeply disenfranchised, so it acknowledges the minority view,” Summerfield said.
However, the Habermas Machine itself has proven controversial, with other researchers noting that the system does not help translate democratic deliberations into policy.
Dr Melanie Garson, a conflict resolution expert at UCL, added that while she was optimistic about the technology, one concern was that some minorities might be too small to influence such group statements, yet could be disproportionately affected by the outcome.
She also noted that the Habermas Machine does not give participants the opportunity to explain their feelings and thereby develop empathy with those who hold a different view.
Fundamentally, she said, context is key when using the technology.
“[For example,] how much value does this add in settings where there is a perception that mediation is more than just finding agreement?” Garson said. “Sometimes, if it’s in the context of an ongoing relationship, it’s about teaching behaviors.”