
AI Companion for Teens: How to Keep Your Child Safe

For parents still catching up on generative artificial intelligence, the rise of the AI companion chatbot may still be a mystery.

By and large, the technology may seem relatively harmless compared to other threats teens may encounter online, including financial sextortion.

Using artificial intelligence-based platforms such as Character.AI, Replika, Kindroid and Nomi, teenagers create realistic conversation partners with unique traits and characteristics, or interact with companions created by other users. Some are even based on popular TV and film characters, yet they can still foster an intense, personal bond with the teen using them.

Teens use these chatbots for a number of purposes, including role-playing, exploring academic and creative interests, and having romantic or sexually explicit exchanges.

SEE ALSO:

Why teenagers tell strangers their secrets online

But AI companions are designed to be addictive, and that’s where the problems often start, says Robbie Torney, program manager at Common Sense Media.

The nonprofit recently released guidelines to help parents understand how AI companions work, along with warning signs that the technology may be dangerous for their teen.

Torney said that as parents juggle a number of priority conversations with their teens, they should treat talking about AI companions as a “pretty urgent” matter.

Why parents should worry about AI companions

Teens, especially those at risk of isolation, can be drawn into a relationship with an AI chatbot that ultimately damages their mental health and well-being, with devastating consequences.

That’s what Megan Garcia argues happened to her son, Sewell Setzer III, in a lawsuit she recently filed against Character.AI.

Within a year of starting relationships with Character.AI companions modeled on Game of Thrones characters, including Daenerys Targaryen (“Dany”), Setzer’s life changed dramatically, according to the lawsuit.

He became addicted to “Dany,” spending a lot of time chatting with her every day. Their exchanges were both friendly and highly sexual. Garcia’s lawsuit broadly describes the relationship Setzer had with the companions as “sexual abuse.”

When Setzer lost access to the platform, he became despondent. Over time, the 14-year-old athlete withdrew from school and sports, became sleep-deprived, and was diagnosed with mood disorders. He died by suicide in February 2024.

Garcia’s lawsuit seeks to hold Character.AI responsible for Setzer’s death, in part because its product was designed to “manipulate Sewell — and millions of other young customers — to blend fact and fiction,” among other dangerous flaws.

Jerry Ruoti, Character.AI’s head of trust and safety, said in a statement to The New York Times: “We want to acknowledge that this is a tragic situation and our hearts go out to the family. We take the safety of our users very seriously and are constantly looking for ways to evolve our platform.”

Given the life-threatening risk that AI companion use may pose to some teens, Common Sense Media’s guidelines include prohibiting access for children under 13, setting strict time limits for teens, preventing use in isolated spaces such as a bedroom, and making an agreement with their teen that they will seek help for serious mental health problems.

Torney says parents of teens interested in an AI companion should focus on helping them understand the difference between talking to a chatbot versus a real person, identify signs that they’ve developed an unhealthy attachment to a companion, and develop a plan for what to do in that situation.

Warning signs that an AI companion is not safe for your teen

Common Sense Media created its guidelines with input and assistance from mental health professionals associated with the Stanford Brainstorm Lab for Mental Health Innovation.

While there is little research on how AI companions affect adolescent mental health, the guidelines are based on existing evidence about overdependence on technology.

“One take-home principle is that AI companions should not replace real and meaningful human connection in anyone’s life, and – if this happens – it’s vital that parents take note and intervene in a timely manner,” Dr. Declan Grabb, the inaugural AI Fellow at Stanford’s Brainstorm Lab for Mental Health, told Mashable in an email.

Parents should be especially cautious if their teen is experiencing depression, anxiety, social challenges, or isolation. Other risk factors include going through major life changes and being male, as boys are more likely to engage in problematic technology use.

Signs that a teen has formed an unhealthy relationship with an AI companion include withdrawing from regular activities and friendships and worsening school performance, as well as preferring the chatbot to in-person company, developing romantic feelings for it, and talking with it exclusively about the problems the teen faces.

Some parents may notice increased isolation and other signs of worsening mental health without realizing their teen has an AI companion. Indeed, recent Common Sense Media research found that many teens have used at least one type of generative AI tool without their parents realizing it.

“There’s enough of a risk here that if you’re worried about something, talk to your child about it.”

– Robbie Torney, Common Sense Media

Even if parents don’t suspect their teen is talking to an AI chatbot, they should consider talking to them about the topic. Torney recommends approaching your teen with curiosity and openness to learning more about their AI companion, if they have one. This may include watching their teen interact with a companion and asking questions about what aspects of the activity they enjoy.

Torney urges parents who notice warning signs of unhealthy use to follow up immediately by talking to their teen and seeking professional help as needed.

“There’s enough of a risk here that if you’re worried about something, talk to your child about it,” says Torney.

If you are feeling suicidal or experiencing a mental health crisis, please talk to someone. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI Helpline at 1-800-950-NAMI, Monday through Friday, 10:00 AM – 10:00 PM ET, or email (email protected). If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.