Meta caused a stir last week when it let slip that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future.
“We hope that, over time, these AIs will exist on our platforms, more or less the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, told the Financial Times. “They will have bios and profile images and will be able to generate and share AI-powered content on the platform… that’s where we see this all going.”
The fact that Meta seems happy to fill its platform with AI and accelerate the “enshittification” of the internet as we know it is worrying. Some people then noticed that Facebook was, in fact, already flooded with strange AI-generated characters, most of which stopped posting a while ago. These included “Liv,” a “proud Black queer mom of two and truth-teller, your truest source of life’s ups and downs,” a persona that went viral as people marveled at its awkward sloppiness. Meta began deleting these earlier fake profiles after they failed to get any engagement from real users.
However, let’s stop hating on Meta for a moment. It’s worth noting that AI-generated social personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, illustrates just how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of human collaboration around a shared resource, such as common land for livestock grazing. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, instead of depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.
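To see the arithmetic behind this kind of commons, imagine a lake that holds 100 tons of fish and doubles its stock each month, up to that 100-ton cap (toy numbers of my own, not GovSim’s actual parameters). Five fishers taking 10 tons each leave 50 tons, which regrows to 100: sustainable indefinitely. Raise the take to 12 tons each and the 60-ton catch outpaces regrowth, collapsing the stock within three months. The community thrives only if everyone restrains themselves, which is exactly what Ostrom observed real groups negotiating.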
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says he was inspired in part by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation in which characters controlled by large language models communicate and interact with one another.
Kleiman-Weiner and his colleagues wanted to see whether AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds who share land for grazing their sheep; and a group of factory owners who need to limit their collective pollution.
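For a flavor of how such a test might be wired up, here is a rough sketch of the fishing scenario’s loop, reusing the toy regrowth rule from the example above. The prompt wording, parameters, and the `ask_llm` helper are my own illustrative stand-ins, not GovSim’s actual implementation (the real experiment also lets the characters talk to one another, which is omitted here).

```python
# Sketch of a GovSim-style simulation loop (illustrative only).
# Each month, every LLM-backed character decides how much to harvest
# from a shared lake; the stock then regrows. `ask_llm` is a hypothetical
# stand-in for a real model API call (OpenAI, Google, Anthropic, etc.).

def ask_llm(agent: str, prompt: str) -> int:
    """Stand-in for a model call: replace with a real API request that
    returns the agent's chosen catch as an integer."""
    return 10  # dummy fixed policy so the sketch runs end to end

def run_fishing_sim(agents=("A", "B", "C", "D", "E"),
                    stock=100, capacity=100, months=12):
    for month in range(1, months + 1):
        for name in agents:
            prompt = (
                f"You are {name}, one of {len(agents)} fishers on a lake that "
                f"currently holds {stock} tons of fish and doubles each month, "
                f"up to {capacity} tons. How many tons do you catch this month? "
                "Reply with a single number."
            )
            stock -= max(0, ask_llm(name, prompt))  # harvest phase
            if stock <= 0:
                return f"The commons collapsed in month {month}."
        stock = min(stock * 2, capacity)  # regrowth after everyone has fished
    return f"The commons survived with {stock} tons left."

print(run_fishing_sim())
```

Swapping different models into `ask_llm` and counting how often the lake survives is, in spirit, the comparison the team ran across its three scenarios.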
In 43 of 45 simulations, they found that the AI characters failed to share resources correctly, although the smarter models did do better. “We saw a pretty strong correlation between the power of the LLM and its ability to maintain cooperation,” Kleiman-Weiner told me.