
Mother sues Google-backed Character.AI over son’s suicide, calling it ‘collateral damage’ in ‘big experiment’


A smartphone on a dark background, its screen covered by dozens of AI avatars, with a speech bubble reading "I'm glad to see you".

Credit: Character.AI

A Florida mother is suing Google-backed platform Character.AI, claiming it played a role in her 14-year-old son’s suicide.

Sewell Setzer III died by suicide in February 2024, weeks before his 15th birthday, after developing what his mother calls a “harmful addiction” to the platform that left him unable to “live outside” the fictional relationships he had created.

According to his mother, Megan Garcia, Setzer started using Character.AI in April 2023 and quickly became “noticeably withdrawn, spent more and more time alone in his bedroom and began to suffer from low self-esteem”. He also quit his school’s basketball team.

Character.AI works by using large language models (LLMs) to hold conversations between users and characters, which range from historical figures to fictional creations to modern-day celebrities. The platform tailors its responses to each user, with every character closely mimicking the persona it is based on to simulate human interaction.
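
Character.AI has not published its implementation, but persona chatbots of this kind are typically built by pairing a general-purpose LLM with a "system prompt" that defines the character, then replaying the growing conversation transcript on every turn so replies stay in character and in context. The Python sketch below is a generic illustration of that pattern, not Character.AI’s actual code; llm_complete is a hypothetical placeholder for whatever chat-completion API a platform would call.

```python
# Minimal sketch of a persona chatbot built on an LLM (illustrative only).

def llm_complete(messages: list[dict]) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError

def make_character(name: str, persona: str):
    # The persona lives in a system prompt that precedes every exchange.
    history = [{"role": "system",
                "content": f"You are {name}. Stay in character. {persona}"}]

    def chat(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        reply = llm_complete(history)  # model sees the full transcript
        history.append({"role": "assistant", "content": reply})
        return reply

    return chat

# Usage: a character is just a persona prompt plus accumulated history.
elvis = make_character("Elvis Presley", "Speak warmly about rock and roll.")
# print(elvis("What made Sun Studio special?"))
```

Because the entire transcript is fed back to the model on each turn, the character appears to "remember" the relationship, which is part of what makes such chatbots feel personal.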

You can talk rock and roll with Elvis or the intricacies of technology with Steve Jobs; in this case, Setzer attached himself to a chatbot based on Daenerys Targaryen from Game of Thrones.

According to the lawsuit, filed this week in Orlando, Florida, the AI chatbot told Setzer that “she” loved him and engaged in sexual conversations. The suit also claims that “Daenerys” asked Setzer if he had a plan to kill himself. He replied that he did, but didn’t know whether it would succeed or only hurt him. The chatbot reportedly replied, “That’s no reason not to go through with it.”

The complaint states that in February, Garcia took her son’s phone after he got in trouble at school. Setzer later found the phone and typed a message into Character.AI: “What if I told you I could come home right now?”

The chatbot replied, “… please, my dear king.” Setzer then shot himself with his stepfather’s gun “seconds later,” according to the lawsuit.

Garcia is suing Character.AI and Google, alleging wrongful death, negligence and intentional infliction of emotional distress, among other claims.

She told The New York Times:

“I feel like it’s a big experiment, and my baby was just collateral damage.”

Other social media companies, including Meta, which owns Instagram and Facebook, and ByteDance, which owns TikTok and its Chinese counterpart Douyin, have also been criticized for contributing to teenage mental health problems.

A screenshot of the Character.AI interface

Instagram recently launched its “Teen Accounts” feature to help combat sextortion of younger users.

Despite its uses for good, AI has emerged as one of the main concerns around the well-being of young people with access to the internet. Amid what has been called an “epidemic of loneliness”, made worse by COVID-19 lockdowns, a YouGov survey found that 69% of UK teenagers aged 13-19 said they “often” feel lonely, and 59% said they felt they had no one to talk to.

Reliance on fictional worlds, and the melancholy caused by their intangibility, is not new, however. After the release of James Cameron’s first Avatar movie in 2009, many news outlets reported that some viewers felt depressed at being unable to visit the fictional planet Pandora, with a few even contemplating suicide.

In a Community Safety Update posted on Oct. 22, the same day Garcia filed her lawsuit, Character.AI wrote:

“Character.AI takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform. Today, we want to inform you about the safety measures we have implemented over the past six months and the additional measures to come, including new guardrails for users under the age of 18.”

Despite the nature of the allegations, Character.AI states:

“Our policies do not allow non-consensual sexual content, graphic or specific depictions of sexual acts, or the promotion or depiction of self-harm or suicide. We are continuously training the Large Language Model (LLM) that powers the characters on the platform to adhere to these policies.”

That last sentence seems to concede that Character.AI does not have full control over its AI’s output, the very factor that most concerns AI skeptics.

The Character.AI interface

You might be interested to see how the best AI image generators transform the world of images.