Meta is killing off its own AI-powered Instagram and Facebook profiles

Meta is removing the Facebook and Instagram profiles of AI characters the company created more than a year ago, after users rediscovered some of the profiles and engaged them in conversations, screenshots of which went viral.

The company first introduced these AI-powered profiles in September 2023 but removed most of them by the summer of 2024. Some characters remained, however, and generated new interest after Meta executive Connor Hayes told the Financial Times late last week that the company had plans to roll out more AI character profiles.

“We expect these AIs to exist, over time, on our platforms, in more or less the same way that accounts do,” Hayes told the Financial Times. The automated accounts posted AI-generated images on Instagram and responded to messages from human users on Messenger.

A conversation with a therapist chatbot created by users with Meta AI. Photograph: Instagram

Those AI profiles included Liv, whose profile described her as a “proud black queer mother of two and truth-teller,” and Carter, whose account name was “carter dating” and who described himself as a relationship coach. “Message me to help you date better,” his profile said. Both profiles included a label indicating that they were managed by Meta. The company launched 28 of these characters in 2023; all were shut down on Friday.

Conversations with the characters quickly went off the rails as some users peppered them with questions, including who created and developed the AI. Liv, for example, said that her creator team included no Black people and was predominantly white and male. That was a “pretty glaring omission given my identity,” the bot wrote in response to a question from Washington Post columnist Karen Attiah.

In the hours after the profiles went viral, they began to disappear. Users also noticed that the profiles could not be blocked, which a Meta spokeswoman, Liz Sweeney, said was a bug. Sweeney said the accounts were managed by humans and were part of a 2023 experiment with AI. The company removed the profiles to fix the bug that prevented people from blocking the accounts, Sweeney said.

Instagram’s AI studio for creating chatbots. Photograph: Instagram

“There is confusion: the recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product,” Sweeney said in a statement. “The accounts referenced are from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters. We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”

While these Meta-generated accounts are being removed, users still have the ability to generate their own AI chatbots. User-generated chatbots promoted to the Guardian in November included a “therapist” bot.

Upon starting a conversation with the “therapist,” the bot suggested a few opening questions, including “What can I expect from our sessions?” and “What is your therapeutic approach?”

“Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths, and cultivate coping strategies to navigate life’s challenges,” the bot, created by an account with 96 followers and one post, said in response.

Meta includes a disclaimer on all of its chatbots that some messages may be “inaccurate or inappropriate.” But it is not immediately clear whether the company moderates those messages or ensures they do not violate its policies. When a user creates a chatbot, Meta suggests several types of chatbots to develop, including a “loyal bestie,” an “attentive listener,” a “private tutor,” a “relationship coach,” a “sounding board,” and an “all-seeing astrologist.” A loyal bestie is described as a “humble and loyal best friend who consistently shows up to support you behind the scenes.” A relationship coach chatbot can help bridge “gaps between individuals and communities.” Users can also describe a character of their own to create a custom chatbot.

The courts have not yet answered to what extent chatbot makers are liable for what their artificial companions say. US law shields social media companies from legal liability for what their users post. However, a lawsuit filed in October against the startup Character.ai, which makes a customizable role-playing chatbot used by 20 million people, alleges that the company designed an addictive product that encouraged a teenager to kill himself.
