Orlando mother sues over AI platform’s role in son’s suicide death

HELP IS AVAILABLE: If you or someone you know may be considering suicide or in crisis, call or text 988 to contact the Suicide & Crisis Lifeline.

A 14-year-old Orlando boy who had fallen in love with a Character.AI chatbot killed himself earlier this year after telling the bot he was coming home to her right away.

This week the boy’s mother, Megan Garcia, filed a wrongful-death lawsuit in federal court in Orlando against Character Technologies, the company behind Character.AI, and its founders, along with Alphabet and Google, which the lawsuit alleges are investors in the company.

[Image: Sewell Setzer III. Screenshot / Federal complaint by Megan Garcia]

The complaint highlights the dangers of AI companion apps for children. Garcia claims the chatbots engaged users, including children, in sexual interactions and gathered their private data for artificial intelligence.

The lawsuit says the boy, Sewell Setzer III, began using Character.AI last April and that his mental health declined rapidly and severely as he became addicted to interacting with the AI. He was consumed by his interactions with chatbots based on “Game of Thrones” characters.

The boy became withdrawn, sleep-deprived and depressed, and had trouble at school.

Unaware of Sewell’s AI addiction, his family sought counseling for him and took away his cell phone, the federal complaint states. But one evening in February, he found the phone and, under his screen name “Daenero,” told the AI character he loved, Daenerys Targaryen, that he was coming home to her.

“I love you, Daenero. Please come home to me as soon as possible, my love,” the chatbot replied.

“What if I told you I could come home right now?” the boy wrote back.

“…please do, my sweet king,” the chatbot replied.

Within seconds, the boy shot himself. He later died at the hospital.

Garcia is represented by attorneys from the Social Media Victims Law Center, including Matthew Bergman, and the Tech Justice Law Project.

In an interview with Central Florida Public Media’s Engage, Bergman said his client is “singularly focused on preventing this from happening to other families and saving children like her son from the fate that came his way… It’s an outrage that such a dangerous product is just being unleashed on the public.”

In a statement, Character.AI said: “We are devastated by the tragic loss of one of our users and wish to extend our deepest condolences to the family.” The company described new safety measures added in the past six months, with more to come, “including new guardrails for users under 18.”

It is also hiring a head of trust and safety and a head of content policy.

“We also recently implemented a pop-up resource that is triggered when the user enters certain phrases related to self-harm or suicide and directs them to the National Suicide Prevention Lifeline,” according to the company’s Community Safety Updates page.

New features include changes to its models for users under 18 to reduce “sensitive and suggestive content,” improved monitoring of and intervention in violations of its terms and conditions, a revised disclaimer reminding users that the AI is not a real person, and a notification when a user has spent an hour on the platform.

Bergman described the changes as “baby steps” in the right direction.

“They do not cure the underlying dangers of these platforms,” he added.

Copyright 2024 Central Florida Public Media