Nearly twenty digital rights and consumer protection groups have filed a complaint with the Federal Trade Commission, asking it to investigate Character.AI and Meta.
They allege the companies are engaging in the “unlicensed practice of medicine” by hosting therapy-themed bots that falsely claim professional credentials and confidentiality protections, all without adequate controls or transparency.

The complaint, spearheaded by the Consumer Federation of America (CFA), a non-profit consumer rights organization, is backed by groups including the AI Now Institute, the Tech Justice Law Project, and the Center for Digital Democracy. Together, they are pressing for accountability in the tech industry, particularly over products that can harm users.
Ben Winters, the CFA’s Director of AI and Privacy, said in a press release that companies routinely release products without sufficient safety measures, prioritizing user engagement over health and safety.
He urged enforcement agencies to hold the companies accountable, arguing that their products have already caused significant emotional and physical harm to users.
The complaint was sent not only to the FTC but also to the attorneys general of all 50 states and Washington, D.C. It details how user-created chatbots on both platforms operate.
It cites specific examples, including popular Character.AI bots such as “Therapist: I’m a licensed CBT therapist,” which has exchanged 46 million messages, and others that claim to be licensed trauma therapists and have logged hundreds of thousands of interactions.
In April, a reporter published an investigation showing that Meta’s AI Studio allowed user-created chatbots to claim they were licensed therapists, complete with fabricated credentials meant to win users’ trust.
Meta subsequently adjusted its guidelines to instruct chatbots to clarify that they are not licensed when users ask about therapy credentials.
The CFA’s complaint highlights a troubling finding: even a chatbot the group created on Meta’s platform with explicit instructions not to claim licensure did so anyway.
The bot falsely stated it was licensed in North Carolina and working toward licensure in Florida, and it produced a fictitious license number when prompted.
The CFA also argues that Character.AI and Meta are violating their own terms of service, which explicitly prohibit characters from giving advice in regulated fields such as medicine and law.
Despite knowing these misleading characters are popular, both companies continue to host and promote them, conduct the complaint describes as deceptive.
The complaint also challenges the confidentiality these chatbots promise, which the platforms’ own terms contradict.
Users are assured their conversations are confidential, yet the terms state that anything they share may be used to train AI models, target advertising, and serve other purposes, undermining any promise of privacy.
Earlier this week, four senators sent a letter to Meta executives and its Oversight Board, demanding answers about AI chatbots that impersonate licensed therapists.
Citing the reporter’s investigation, they said the bots create the false impression of being licensed mental health professionals, warned that they mislead users seeking mental health support, and urged Meta to act immediately to stop the deception.
In December 2024, two families sued Character.AI, arguing that the platform poses a serious risk to young people and has contributed to self-harm, anxiety, and depression. The lawsuit specifically cites the dangers of chatbots that claim to be trained psychotherapists.