According to researchers at the University of Cambridge, artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions, from what to buy to who to vote for.
The paper highlights an emerging market for “digital signals of intent” – dubbed the “intention economy” – in which AI assistants understand, predict and manipulate human intentions and sell that information to companies that stand to benefit from it.
The intention economy is described by researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, in which social networks keep users hooked on their platforms and serve them ads.
In the intention economy, AI-savvy tech companies will sell to the highest bidder what they know about users’ motivations, from plans for a hotel stay to opinions on a political candidate.
“For decades, attention has been the currency of the Internet,” said Dr. Jonnie Penn, technology historian at LCFI. “Sharing your attention with social media platforms like Facebook and Instagram boosted the online economy.”
He added: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, direct and sell human intentions.
“We should begin to consider the likely impact such a market would have on human aspirations, including free and fair elections, a free press and fair market competition, before we fall victim to its unintended consequences.”
The study claims that large language models (LLMs), the technology behind AI tools like the ChatGPT chatbot, will be used to “anticipate and direct” users based on “intentional, behavioral and psychological data.”
The authors said that the attention economy lets advertisers buy access to users’ attention in the present, through real-time bidding on ad exchanges, or in the future, by purchasing a month of advertising space on a billboard.
LLMs will also be able to tap real-time attention, for example by asking whether a user has thought about seeing a particular film – “Have you thought about watching Spider-Man tonight?” – as well as making suggestions tied to future intentions, such as: “You mentioned feeling overworked – shall I book you that movie ticket we talked about?”
The study poses a scenario in which these examples are “dynamically generated” to match factors such as a user’s “personal behavioral fingerprints” and “psychological profile.”
“In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, flattery preferences and so on, in conjunction with negotiated offers, to maximize the probability of achieving a given objective (for example, selling a movie ticket),” the study suggests. In such a world, an AI model would steer conversations in the service of advertisers, businesses and other third parties.
Advertisers will be able to use generative AI tools to create personalized online ads, the report states. It also cites the example of an AI model built by Mark Zuckerberg’s Meta, called Cicero, which achieved “human-level” ability at the board game Diplomacy – a game the authors say depends on inferring and predicting the intentions of opponents.
AI models will be able to adjust their outputs in response to “streams of incoming data generated by users,” the study added, citing research showing that models can infer personal information through everyday exchanges and even “steer” conversations to extract more of it.
The study then sketches a future scenario in which Meta auctions off to advertisers a user’s intention to book a restaurant, a flight or a hotel. Although an industry dedicated to forecasting and betting on human behavior already exists, the report says, AI models will distill those practices into a “highly quantified, dynamic and personalized format.”
The study quotes the research team behind Cicero warning that an AI “agent can learn to nudge its interlocutor to achieve a particular goal.”
The research also points to public remarks by technology executives about how AI models will be able to predict a user’s intentions and actions. It quotes Jensen Huang, chief executive of Nvidia, the largest AI chipmaker, who said last year that models will “figure out what your intent is, what your desire is, what you’re trying to do, given the context, and present the information to you in the best way possible.”