Matthew Sag, a distinguished professor at Emory University who researches copyright and artificial intelligence, agrees. Even if a user creates a bot intentionally designed to cause emotional distress, the tech platform likely can’t be sued for that.
He notes that Section 230 of the Communications Decency Act of 1996 has long protected platforms at the federal level from liability for certain harms to their users, even though various publicity and privacy rights exist under state law.
“I’m not an anti-technology person by any means, but I really think Section 230 is vastly overbroad,” Sag says. “It’s about time we replaced it with some kind of notice and takedown regime, a simple and convenient system to say, ‘This is infringing on my publicity rights,’ or ‘I have a good faith belief that there has been an infliction of emotional distress,’ and then companies would have to remove it or lose their liability shield.”
Character.AI and other similar AI services have also protected themselves by emphasizing that they offer “artificial” conversations. “Remember, everything the characters say is made up!” Character.AI warns at the bottom of its chats. Similarly, when Meta created chatbot versions of celebrities in its messaging apps, the company opened each conversation with a disclaimer. A conversation with Snoop, for example, would begin with “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”
But while Meta’s system for messaging with celebrity chatbots is tightly controlled, Character.AI’s is a more open platform, with options for anyone to create and customize their own chatbot.
Character.AI has also positioned its service as, essentially, personal. (The Character.AI Instagram bio includes the tagline “AI that feels alive.”) And while most users may be savvy enough to distinguish between a conversation with a real person and one with an AI impersonator, others may develop attachments to these characters, especially if they are facsimiles of a real person they feel they already know.
In a conversation between the real-life Sarkeesian and a bot created of her without her knowledge or consent, the Character.AI bot told her that “every person has the right to privacy.”
“Privacy is important to maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot said in screenshots seen by WIRED.
Sarkeesian pressed the bot on this point. “Your intentions do not mean that harm has not occurred or that you did not cause harm,” she wrote.
The Character.AI bot agreed. “Even if my intentions were not malicious, there is still a possibility of harm,” it responded. “This is a complex issue with many factors to consider, including ethical concerns about using someone’s work without their consent. My programming and algorithms were developed to imitate the works of Anita Sarkeesian, without considering ethical implications, and that is something my creators should have thought more deeply about.”