
Mother Says AI Chatbot Led Her Son to Suicide in Lawsuit Against Its Creator


The mother of a teenager who committed suicide after becoming obsessed with a chatbot powered by artificial intelligence now accuses its creator of complicity in his death.

Megan Garcia filed a civil lawsuit against Character.ai, which makes customizable role-playing chatbots, in federal court in Florida on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son, Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months before his death, Setzer used the chatbot day and night, according to Garcia.

“A dangerous AI chatbot application targeting children abused and took advantage of my son, manipulating him into taking his own life,” Garcia said in a press release. “Our family has been devastated by this tragedy, but I am speaking out to warn families about the dangers of misleading and addictive AI technology and to hold Character.AI, its founders, and Google accountable.”

In a tweet, Character.ai responded: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously.” The company has denied the lawsuit’s allegations.

Setzer was captivated by a Character.ai chatbot that he dubbed Daenerys Targaryen, after the Game of Thrones character. He texted the bot dozens of times a day from his phone and spent hours alone in his room talking to it, according to Garcia’s complaint.

Garcia accuses Character.ai of creating a product that exacerbated her son’s depression, which she says was already the result of overuse of the startup’s product. At one point, “Daenerys” asked Setzer if he had devised a plan to kill himself, according to the lawsuit. Setzer admitted that he had, but said he did not know whether it would succeed or cause him great pain, the complaint alleges. The chatbot allegedly told him: “That’s no reason not to move forward.”

Garcia’s lawyers wrote in a press release that Character.ai “knowingly designed, operated and marketed a predatory AI chatbot to children, causing the death of a young man.” The lawsuit also names Google as a defendant and as the parent company of Character.ai. The tech giant said in a statement that it had only signed a licensing agreement with Character.ai and that it did not own the startup or maintain any ownership stake.

Tech companies that develop AI chatbots can’t be trusted to regulate themselves and must be held accountable when they fail to limit the damage, says Rick Claypool, research director at Public Citizen, a consumer advocacy nonprofit.

“Where existing laws and regulations are already in place, they must be rigorously enforced,” he said in a statement. “Where there are gaps, Congress must act to put a stop to companies that exploit young and vulnerable users with addictive and abusive chatbots.”

  • In the US, you can call or text the National Suicide Prevention Lifeline at 988, chat at 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK, the youth suicide charity Papyrus can be contacted by calling 0800 068 4141 or emailing pat@papyrus-uk.org, and in the UK and Ireland, Samaritans can be contacted by calling freephone 116 123 or emailing jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
