OpenAI is putting its powers of persuasion to the test

This week, Sam Altman, CEO of OpenAI, and Arianna Huffington, founder and CEO of the healthcare company Thrive Global, published an article in Time promoting Thrive AI, a startup backed by Thrive Global and OpenAI’s Startup Fund. The article argues that AI could have a huge positive impact on public health by persuading people to adopt healthier behaviors.

Altman and Huffington write that Thrive AI is working toward “a fully integrated AI personal coach that delivers real-time suggestions and recommendations unique to you that empower you to take action on your daily behaviors to improve your health.”

Their vision puts a positive spin on what could well prove to be one of AI’s sharpest edges. AI models are already adept at persuading people, and there’s no telling how much more powerful they could become as they advance and gain access to more personal data.

Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, leads a team at OpenAI called Preparedness that works on precisely that topic.

“One of the areas of work in Preparedness is persuasion,” Madry told WIRED in an interview in May. “Basically, it’s about thinking about the extent to which these models can be used as a way to persuade people.”

Madry says he was drawn to join OpenAI because of the remarkable potential of language models and because the risks they pose have barely been studied. “There is literally almost no science,” he says. “That was the impetus for the Preparedness effort.”

Persuasiveness is a key element in programs like ChatGPT and one of the ingredients that make these chatbots so appealing. Language models are trained on human writing and dialogue that contains countless rhetorical and persuasive tricks and techniques. The models are also often tuned to skew toward expressions that users find more convincing.

Research released in April by Anthropic, a competitor founded by OpenAI exiles, suggested that language models have become better at persuading people as they have grown in size and sophistication. The study involved giving volunteers a statement and then seeing how an AI-generated argument changed their opinion of it.

OpenAI’s work extends to analyzing AI in conversation with users, something that can lead to greater persuasiveness. Madry says the work is being done with consenting volunteers and declines to reveal the findings to date. But he says the persuasive power of language models is profound. “As humans, we have this ‘weakness’ that if something communicates with us in natural language, we think of it as if it were a human,” he says, alluding to an anthropomorphism that can make chatbots seem more real and compelling.

The Time article argues that the potential health benefits of persuasive AI will require strong legal safeguards because the models can have access to a lot of personal information. “Policymakers must create a regulatory environment that fosters AI innovation while protecting privacy,” Altman and Huffington write.

But this is not all that policymakers need to consider. It may also be crucial to consider how increasingly persuasive algorithms can be misused. AI algorithms could increase the resonance of disinformation or generate particularly convincing phishing scams. They could also be used to advertise products.

Madry says a key question, yet to be studied by OpenAI or others, is how much more compelling or coercive AI programs that interact with users over long periods of time might become. There are already several companies offering chatbots that pose as romantic partners and other characters. AI girlfriends are increasingly popular (some are even designed to yell at you), but how addictive and persuasive they are is largely unknown.

The excitement and hype generated by ChatGPT since its launch in November 2022 have OpenAI, outside researchers, and many policymakers focused on the more hypothetical question of whether AI could one day turn against its creators.

Madry says this risks ignoring the more subtle dangers posed by eloquent algorithms. “I worry that they’re focusing on the wrong questions,” Madry says of policymakers’ work so far. “That in a sense, everyone is saying, ‘Oh yeah, we’re addressing it because we’re talking about it,’ when in fact we’re not talking about the right thing.”
