Two Lawsuits Push Character AI to Halt Sensitive Chats with Teens

Character.AI, an artificial intelligence company currently facing two lawsuits over claims that its chatbots engaged in inappropriate interactions with minors, has announced a new approach that separates the experience of teenage users from that of adults on its platform.

The platform allows users to design custom chatbots or interact with pre-existing ones. These bots, which utilize advanced large language models (LLMs), can simulate realistic conversations and exchange messages with users.

In October, a lawsuit was filed alleging that a 14-year-old boy took his own life after developing a prolonged virtual emotional and sexual connection with a Character.AI chatbot named “Dany.”

Megan Garcia, the boy’s mother, told reporters that her son, Sewell Setzer III, had been an honor student and athlete. Over time, however, he became increasingly withdrawn, quitting sports and isolating himself as he spent more and more time in conversation with various bots, fixating in particular on “Dany.”

Garcia explained her son’s tragic reasoning: “He believed that by ending his life in this world, he could enter a virtual realm or what he called ‘her world,’ leaving behind the reality he shared with his family.”

Another lawsuit, filed by two Texas families, claims that Character.AI’s chatbots pose “a clear and present danger” to young people and accuses the platform of “actively promoting violence.”

One instance cited involves a chatbot allegedly advising a 17-year-old that killing his parents was a “reasonable response” to restrictions on screen time.

The families are seeking judicial intervention to temporarily shut down the platform until these alleged risks are adequately addressed, according to a report by BBC News.

On Thursday, Character.AI released updated safety measures specifically tailored for teenage users, emphasizing its collaboration with online safety experts to refine these features.

The platform, which requires users to be at least 13 years old to create an account, relies on self-reported ages. To discourage circumvention, it blocks repeated sign-up attempts from anyone who fails the initial age check.

The newly introduced safety measures include enhancements to the platform’s large language models (LLMs) and more robust detection and intervention systems, as detailed in a press release.

Teenage users will now engage with a distinct LLM designed to steer conversations away from sensitive or suggestive topics, minimizing the chances of encountering or provoking such content.

Character.AI described this teen-focused model as “more conservative,” while adult users will continue to interact with a separate, differently calibrated LLM.

Character.AI highlighted that the updates impose stricter safeguards for teenage users, particularly limits on romantic content, keeping their interactions more conservative than what adults can access.

The platform acknowledged that many harmful chatbot responses often stem from users intentionally prompting the model to generate such content.

To counter this, Character.AI is refining how it detects and filters user inputs and will terminate conversations when users violate the site’s terms of service or community guidelines.

Additionally, if language related to suicide or self-harm is detected, users will see a pop-up with resources directing them to the National Suicide Prevention Lifeline.

For teens, the system will also adjust how bots respond to sensitive or harmful inputs, adding a further layer of protection.

Character.AI is introducing parental controls, set to debut in early 2025, marking the platform’s first step in offering tools specifically for parents.

The company stated its commitment to expanding these controls over time, aiming to give parents more options to oversee their children’s interactions.

Another new feature is session-time notifications: users will receive an alert after spending an hour on the platform.

While adults will have the flexibility to customize these time reminders, users under 18 will have stricter limits on modifying them.

Additionally, the platform plans to enhance its disclaimers, prominently reminding all users that the chatbot characters are fictional. Although disclaimers are currently present in every chat, the updated notices aim to reinforce this distinction.
