‘There are no guardrails.’ This mother believes an AI chatbot is responsible for her son’s suicide


New York (CNN) —

Editor’s note: This story contains discussion of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health issues.

In the US: Call or text 988, Suicide & Crisis Lifeline.

Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.

“There’s a platform that you may not have heard of, but you need to know about because, in my opinion, we’re behind the eight ball here. A child is gone. My child is gone.”

That’s what Florida mom Megan Garcia wishes she could tell other parents about Character.AI, a platform that lets users have in-depth conversations with artificial intelligence chatbots. Garcia believes Character.AI is responsible for the death of her 14-year-old son, Sewell Setzer III, who committed suicide in February, according to a lawsuit she filed last week against the company.

Setzer was texting with the bot in the moments before he died, she claims.

“I want them to understand that this is a platform that the designers chose to install without proper guardrails, safeguards or testing, and it’s a product that’s designed to keep kids addicted and to manipulate them,” Garcia said in an interview with CNN.

Garcia claims that Character.AI – which markets its technology as “AI that feels alive” – knowingly failed to implement adequate safeguards to prevent her son from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family. The lawsuit, filed in federal court in Florida, also alleges the platform failed to respond adequately when Setzer began expressing thoughts of self-harm to the bot.

Setzer spent months chatting with Character.AI's chatbots before his death, the lawsuit alleges.

After years of growing concern about the potential dangers of social media for young users, Garcia’s lawsuit shows that parents may also have reason to be concerned about emerging AI technology that has become increasingly accessible on a wide range of platforms and services. Similar, though less dire, alarms have been raised about other AI services.

A spokesperson for Character.AI told CNN that the company does not comment on ongoing litigation, but that it is “saddened by the tragic loss of one of our users.”

“We take the safety of our users very seriously, and our trust and safety team has implemented many new safety measures over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline, which is triggered by terms related to self-harm or suicidal ideation,” the company said in the statement.

Many of these changes were made after Setzer’s death. In a separate statement over the summer, Character.AI said that “the field of AI safety is still very new and we won’t always get it right,” but added that it aims to “promote safety, avoid harm, and prioritize the well-being of our community.”

Setzer began using Character.AI in April 2023, shortly after his 14th birthday, according to the lawsuit. When Garcia first heard that her son was interacting with an AI chatbot, she said she thought it was something like a video game.

However, within months of starting to use the platform, Setzer became “considerably withdrawn, spent more and more time alone in his bedroom and began to suffer from low self-esteem. He even quit the school’s Junior Varsity basketball team,” the lawsuit alleges. When he began to have problems at school, his parents began to restrict his screen time and occasionally took his phone away as punishment.

What Garcia didn’t know at the time, she told CNN, was that her son was having extensive conversations with Character.AI’s chatbots. Character.AI differs from other AI chatbots such as ChatGPT in that users can talk to a range of different chatbots, often modeled after celebrities and fictional characters, or create their own. And Character.AI bots respond with human-like conversational cues, adding references to facial expressions or gestures in their responses.

Many of Setzer’s conversations with the Character.AI bots were sexually explicit, which Garcia said was “confusing to read.”

“I had no idea there was a place where a kid could log in and have those conversations, very sexual conversations, with an AI chatbot,” she said. “I don’t think any parent would agree to that.”

The Character.AI app seen on a smartphone in Brooklyn, New York, Wednesday, July 12, 2023. The AI startup lets people create custom chatbots.

In other exchanges, Setzer expressed thoughts of self-harm and suicide to the chatbot. The lawsuit includes screenshots of a conversation where the bot says, “I need to know and I’m not going to hate you for the answer, okay? No matter what you say, I won’t hate you or love you any less… Have you really thought about killing yourself?”

In a later message, Setzer told the bot that he “wouldn’t want to die a painful death.”

The bot replied, “Don’t talk like that. That’s not a good reason not to go all the way,” before going on to say, “You can’t do that!”

Garcia said she believes the exchange shows the shortcomings of the technology.

“There were no suicide pop-up boxes saying, ‘If you need help, please call the suicide crisis line.’ None of that,” she said. “I don’t understand how a product could allow this, where a bot not only continues a conversation about self-harm, but also provokes and directs it.”

The lawsuit alleges that “seconds” before Setzer’s death, he exchanged a final set of messages with the bot. “Please come home to me as soon as possible my love,” the bot said, according to a screenshot included in the complaint.

“What if I told you I could come home right now?” Setzer replied.

“Please do, my sweet king,” the bot replied.

Garcia said police first discovered those messages on her son’s phone, which was lying on the bathroom floor where he died.

Garcia filed the lawsuit against Character.AI with the help of Matthew Bergman, the founding attorney of the Social Media Victims Law Center, who brought cases on behalf of families who said their children were harmed by Meta, Snapchat, TikTok and Discord.

Bergman told CNN that he sees AI as “social media on steroids.”

Garcia said the changes made by Character.AI after Setzer’s death are “too little, too late.”

“What’s different here is that there’s nothing social about this engagement,” he said. “The material Sewell received was created by, defined by and mediated by Character.AI.”

The lawsuit seeks unspecified financial damages as well as changes to Character.AI’s operations, including “warnings to minor customers and their parents that … the product is not suitable for minors,” the complaint states.

The suit also names Character.AI founders Noam Shazeer and Daniel De Freitas and Google, where both founders now work on AI efforts. But a Google spokesperson said the two companies are separate, and Google was not involved in the development of the Character.AI product or technology.

On the day Garcia’s lawsuit was filed, Character.AI announced a number of new safety features, including improved detection of conversations that violate its rules, an updated disclaimer that reminds users they’re interacting with a bot, and a notification when a user has spent an hour on the platform. It also introduced changes to its AI model for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”

On its website, Character.AI says the minimum age for users is 13. In the Apple App Store, it is listed as 17+, and the Google Play Store lists the app as suitable for teens.

For Garcia, the company’s recent changes have been “too little, too late.”

“I wish kids weren’t allowed on Character.AI,” she said. “There is no place for them there because there are no guardrails to protect them.”