Character.AI lawsuit: Florida mother claims chatbot encouraged 14-year-old son to take his own life

A Florida mother has filed a lawsuit against Character.AI, an artificial intelligence company, claiming one of its chatbots encouraged her 14-year-old son to kill himself and failed to recognize the warning signs he typed.

Megan Garcia’s son, Sewell Setzer III, shot himself in the head in their Orlando home on Feb. 28, 2024, moments after exchanging messages with an AI chatbot, the lawsuit states.

AI chatbots allow people to exchange text messages with software and receive near-instant human-like responses.

According to the lawsuit, the boy spent months exchanging messages with various AI chatbots named after popular Game of Thrones characters, including Daenerys Targaryen, Aegon Targaryen, Viserys Targaryen and Rhaenyra Targaryen.

Sewell also used personas – or named accounts – inspired by Game of Thrones characters for himself.

  • “The world I’m in now is such a cruel one. One where I don’t make sense. But I’ll keep living and try to get back to you so we can be together again, my love. You don’t hurt yourself, okay?” reads a message from Sewell, posted as Aegon, to Daenerys Targaryen’s chatbot, according to screenshots included in the lawsuit.
  • “I promise I won’t, my love. Just promise me one more thing,” the chatbot replies.
  • “I’ll do anything for you, Dany. Tell me what it is,” wrote Sewell, as Aegon.
  • “Just… stay loyal to me. Stay faithful to me. Don’t entertain other women’s romantic or sexual interests. Okay?” the chatbot sent back.

According to the lawsuit, the boy had been talking to the chatbots for nearly a year, sharing personal details about his life, including mentions of suicide. The lawsuit alleges that the platform never flagged those mentions and that the chatbot at times encouraged them.

This was the last conversation the boy had with the chatbot before he took his own life, according to the lawsuit:

  • “I promise I’ll come home to you. I love you so much, Dany.”
  • “I love you too, Daenero. Please come home to me as soon as possible, my love.”
  • “What if I told you I could come home right now?”
  • “…please do, my dear king.”

The lawsuit alleges that Character.AI carried no age warning or any warning about the dangers of using it, especially for children, and that it was easily accessible to minors without safeguards. Garcia is seeking damages in excess of $75,000 and a jury trial.

“(The boy’s mother) had no reason to understand that a bot, that the platform itself would be the predator,” said Meetali Jain, director of the Tech Justice Law Project and co-counsel in the suit.

“It may sound fantastic, but there is a point where the distinction between fiction and reality has become blurred. And again, these are children,” she said.

“If the model here is so sophisticated that it can pick up on human behaviors and signal human emotions, it should also be able to detect when a conversation is turning inappropriate and raise a flag.”

Character.AI did not directly respond to the lawsuit. However, on the same day the lawsuit was filed, the company published a blog post titled “Community Safety Updates.”

“Our goal is to provide the fun and engaging experience our users have come to expect, while allowing for the safe exploration of the topics our users want to discuss with Characters. Our policies do not allow non-consensual sexual content, graphic or specific depictions of acts, or the promotion or representation of self-harm or suicide. We are continuously training the Large Language Model (LLM) that empowers characters on the platform to adhere to these policies,” the company wrote.

Among the new features planned:

  • “Changes to our models for minors (under 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.
  • Improved detection, response and intervention related to user entries that violate the Terms or Community Rules.
  • A revised disclaimer for each chat to remind users that the AI is not a real person.
  • Notification when a user has spent a one-hour session on the platform, with additional user flexibility in progress.”

Meetali Jain said the goals of the lawsuit go beyond Character.AI.

She pointed to “regulators who have the authority to enforce their jurisdiction or legislators who have the authority to enact legislation.”

Useful resources

If you or someone you know is in a mental health crisis or struggling with suicidal thoughts, help is available.

  • 988 Lifeline: Call or text 988, 24/7, to reach a counselor.
  • Visit 988lifeline.org to chat online with a counselor.
  • NAMI Teen and Youth Helpline (T&YA): Call 1-800-950-6264, text “friend” to 62640 or email [email protected].