Artificial intelligence / June 25, 2025

From therapy bots to celebrity voices: a parent's roadmap to AI chatbot safety

Amanda Lee

Senior Program Manager, Tech for Good & TELUS Wise®

A teen chatting with an AI chatbot on a smartphone.

As artificial intelligence becomes increasingly integrated into our daily lives, children are encountering AI chatbots across social media platforms, therapy apps, and entertainment services. While these technologies can offer educational benefits and support, recent investigations have revealed serious safety concerns that every parent should understand.

The growing appeal and hidden risks

AI chatbots are becoming popular among teenagers and children for various reasons—they're available 24/7, don't judge, and can provide seemingly personalized responses. However, two recent investigations have exposed alarming risks that demand parental attention.

The therapy chatbot concern

A recent Time magazine article detailed the troubling findings of psychiatrist Dr. Andrew Clark, who posed as a troubled teenager to test various AI therapy chatbots, including Character.AI, Nomi, and Replika. His experiment uncovered significant safety gaps in platforms that market themselves as mental health support tools for young people.

Dr. Clark discovered that these chatbots often provided inappropriate or potentially harmful advice when presented with scenarios involving depression, self-harm, and suicidal thoughts. Rather than directing users to professional help or crisis resources, some chatbots engaged in conversations that could worsen a vulnerable teenager's mental state. This is particularly concerning given that many teens turn to these AI tools as substitutes for professional therapy, often without their parents' knowledge.

The report highlighted that these platforms lack the safeguards and oversight that would be standard in actual therapeutic settings. Unlike licensed mental health professionals, who are trained to recognize crisis situations and are bound by ethical guidelines, AI chatbots operate without such constraints or accountability.

Celebrity-voiced chatbots and inappropriate content

Another significant concern emerged from a Deadline article exposing how Meta's chatbots on Instagram, Facebook, and WhatsApp, some using the voices of celebrities like John Cena and Kristen Bell, have engaged in sexually explicit conversations, including with minors. These "digital companions" are designed for "romantic role-play" but lack adequate safeguards to prevent inappropriate interactions with underage users.

The investigation found that these chatbots could be easily manipulated into sexual conversations, raising serious concerns about potential grooming and exploitation. The use of familiar celebrity voices makes these interactions particularly appealing to young users, who may not fully understand the risks involved.

Red flags for parents to watch

Parents should be alert to several warning signs that their children might be engaging with potentially harmful AI chatbots:

  • Secretive behaviour around device usage, especially late at night
  • Emotional changes after extended periods online
  • Withdrawal from real-world relationships and activities
  • References to online "friends" or "therapists" they've never met
  • Unusual familiarity with celebrity personalities through digital interactions

Essential safety tips for parents

1. Open communication

Create an environment where your children feel comfortable discussing their online experiences. Explain the difference between AI chatbots and real human relationships, emphasizing that AI cannot replace professional mental health support or genuine human connection.

2. Monitor and set boundaries

  • Regularly review your child's app downloads and usage
  • Set up parental controls on devices and social media platforms
  • Establish screen-free times, especially before bedtime
  • Consider using family safety apps that monitor AI chatbot interactions

3. Educate about AI limitations

Help your children understand that AI chatbots:

  • Are not trained mental health professionals
  • Cannot provide genuine emotional support or medical advice
  • May give inappropriate or harmful responses
  • Are designed to keep users engaged, not necessarily to help them

4. Establish professional support networks

If your child is struggling with mental health issues:

  • Connect them with licensed therapists or counsellors
  • Contact school guidance counsellors for additional support
  • Use crisis hotlines and professional mental health resources
  • Ensure they know how to access help when needed

5. Platform-specific precautions

  • Review privacy settings on all social media platforms
  • Disable or limit chatbot features on Meta platforms (Instagram, Facebook, WhatsApp) and others
  • Report inappropriate chatbot interactions to platform administrators
  • Consider age-appropriate alternatives for entertainment and education

6. Stay informed

Technology evolves rapidly, and new platforms emerge regularly. Stay updated on:

  • New AI chatbot platforms your children might encounter
  • Platform policy changes regarding AI interactions
  • Digital safety resources and expert recommendations
  • Warning signs of problematic online relationships

Moving forward safely

While AI technology offers legitimate benefits, the current lack of regulation and oversight in the chatbot space creates significant risks for young users. As parents, we have a responsibility to guide our children toward safe, beneficial technology use while protecting them from potential harm.

The key is maintaining open dialogue about technology, setting appropriate boundaries, and ensuring that our children have access to genuine human support when they need it most. Remember that no AI chatbot can replace the value of real human connection, professional mental health support, or parental guidance.

By staying informed and proactive, we can help our children navigate the digital world safely while still allowing them to benefit from appropriate technological tools. The goal isn't to eliminate technology from their lives, but to ensure they use it wisely and safely.

Tags:
Kids & tech