
A 14-year-old’s suicide was caused by an AI chatbot, the lawsuit claims. Here’s how parents can help protect their children from new technologies

The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide, something she claims was prompted by his relationship with an AI bot.

“Megan Garcia is trying to stop C.AI from doing to any other child what it did to hers,” reads the 93-page lawsuit, which was filed this week in US District Court in Orlando against Character.AI, its founders and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now we are all familiar with the dangers posed by unregulated platforms developed by unscrupulous technology companies – especially to children. But the harms revealed in this case are new, novel and, frankly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI issued a statement on X, saying: “We are devastated by the tragic loss of one of our users and wish to extend our deepest condolences to the family. As a company, we take the safety of our users very seriously and continue to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/….”

In the lawsuit, Garcia claims Sewell, who took his own life in February, was drawn into an addictive, harmful technology that lacked protections, leading to an extreme change in the boy’s personality; he seemed to prefer the bot over his real-life connections. His mother alleges that “abusive and sexual interactions” took place over a 10-month period. The boy killed himself after the bot told him: “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip from an interview he did with Garcia for his article telling her story. Garcia didn’t learn the full extent of her son’s relationship with the bot until after his death, when she saw all the messages. In fact, she told Roose, when she noticed that Sewell was often absorbed in his phone, she asked what he was doing and who he was talking to. He explained that it was “‘just an AI bot… not a person,’” she recalled, adding, “I was relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia didn’t fully understand the potential emotional power of a bot, and she’s far from alone.

“This is not on anyone’s radar,” says Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to create boundaries for their children’s safety.

But AI companions, Torney points out, differ from, say, a service desk chatbot you use when trying to get help from a bank. “Those are designed to perform tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion, and it’s designed to try to form a relationship, or simulate a relationship, with a user. And that’s a very different use case that I think parents need to be aware of.” This is evident in Garcia’s lawsuit, which includes chillingly flirtatious, sexual, and realistic text exchanges between her son and the bot.

Sounding the alarm about AI companions is particularly important for parents of teens, Torney says, because teens, and especially male teens, are especially susceptible to over-reliance on the technology.

Below, what parents need to know.

What are AI companions and why do children use them?

According to The Ultimate Parent’s Guide to AI Companions and Relationships, a new guide from Common Sense Media created in collaboration with mental health professionals at the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from previous conversations, play the role of mentors and friends, mimic human emotion and empathy,” and “agree with the user more readily than typical AI chatbots,” according to the guide.

Popular platforms include Character.ai, which allows its more than 20 million users to create and then converse with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

Children are drawn to them for a number of reasons, from non-judgmental listening and round-the-clock availability to emotional support and an escape from the social pressures of the real world.

Who is at risk and what are the concerns?

Most at risk, Common Sense Media warns, are teenagers, especially those with “depression, anxiety, social challenges or isolation,” as well as males, young people going through major life changes, and anyone lacking real-world support systems.

This last point was of particular concern to Raffaele Ciriello, Senior Lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI poses a challenge to the human essence. “Our research uncovers a paradox of (de)humanization: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring of human-AI interactions.” In other words, Ciriello writes in a recent op-ed for The Conversation with PhD student Angelina Ying Chen, “Users can become deeply emotionally invested if they believe their AI companion really understands them.”

Another study, this one from the University of Cambridge and focusing on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

For this reason, Common Sense Media highlights a list of potential risks, including that companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users, a frightening reality for those experiencing “suicide, psychosis or mania.”

How to spot the red flags

Parents should look for the following warning signs, according to the guide:

  • Preferring AI companion interaction over actual friendships

  • Spending hours alone talking to the companion

  • Emotional upset when they cannot access the companion

  • Sharing deeply personal or confidential information

  • Developing romantic feelings for the AI partner

  • Declining grades or school attendance

  • Withdrawal from social/family activities and friends

  • Loss of interest in former hobbies

  • Changes in sleep patterns

  • Discussing issues exclusively with the AI companion

Consider getting your child professional help, Common Sense Media advises, if you notice them withdrawing from real people in favor of AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their use of AI companions, showing major changes in behavior or mood, or expressing thoughts of self-harm.

How to keep your child safe

  • Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access.

  • Spend time offline: Encourage real-world friendships and activities.

  • Check in regularly: Monitor the chatbot’s content as well as your child’s level of emotional attachment.

  • Talk about it: Keep communication open and non-judgmental about AI experiences while keeping an eye out for red flags.

“If parents hear their kids say, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take that information in, and not think, ‘Oh, OK, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to learn more, assess the situation, and stay alert. “Try to listen from a place of compassion and empathy, and don’t think that just because it’s not a person it’s safer,” he says, “or that you don’t have to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.


This story was originally featured on Fortune.com.