China is set to impose new restrictions on AI chatbots to prevent them from negatively impacting people’s emotions, particularly concerning issues like suicide or self-harm.
The move follows draft regulations released by the Cyberspace Administration of China, which target what the regulator defines as “human-like interactive AI services.” The public may comment on the proposals until January 25.
According to Winston Ma, a law professor at NYU, the rules are notable as the world’s first attempt to regulate AI systems that mimic human traits.

The proposals come as Chinese firms race to develop AI companions and virtual influencers. Unlike the 2023 regulations on generative AI, which centered on content safety, the new rules also emphasize emotional safety.
The draft regulations outline several key points. First, AI chatbots will be prohibited from generating content that encourages self-harm or suicide, and from engaging in verbal abuse that could harm users’ mental health.
If a user mentions suicide, service providers must immediately hand the conversation to a human operator and notify the user’s guardian or another designated person.
AI chatbots will also be barred from producing content involving gambling, violence, or obscenity. Minors using AI for emotional support will need parental consent, and their interaction time with the technology will be limited.
Platforms must also have mechanisms to identify minors even when users do not disclose their age; in ambiguous cases, they must apply minor settings by default while allowing users to appeal.
In addition, companies must alert users after two hours of continuous interaction with AI, and must conduct security assessments for chatbots with more than one million registered users or more than 100,000 monthly active users. At the same time, the guidelines encourage the use of human-like AI in areas such as cultural exchange and companionship for the elderly.
The initiative comes as two prominent Chinese AI chatbot startups, Z.ai and Minimax, seek to go public in Hong Kong. Minimax is known globally for its Talkie AI application, which lets users converse with virtual characters.
Talkie and its Chinese counterpart, Xingye, together have more than 20 million monthly active users and contributed significantly to the company’s revenue over the past year.
Z.ai, also known as Zhipu, has filed under the name “KAT.” The firm has not disclosed monthly user figures, but it says its technology runs on around 80 million devices, including smartphones and smart vehicles.
Concern over how AI affects human behavior has grown this year. In September, Sam Altman, CEO of OpenAI, the company behind ChatGPT, said that one of its biggest challenges is managing how the chatbot handles discussions about suicide. That concern was amplified when a family in the U.S. sued OpenAI following the suicide of their teenage son.
In response to the increasing urgency of these issues, OpenAI recently announced the recruitment of a “Head of Preparedness.” This new role will focus on evaluating the risks associated with AI, particularly regarding its effects on mental health and cybersecurity.
More people are also turning to AI for companionship. A woman in Japan recently made headlines by marrying her AI boyfriend, underscoring the growing use of the technology in personal relationships.
Two platforms specializing in virtual character interactions, Character.ai and Polybuzz.ai, have also gained significant popularity: SimilarWeb ranks both among the top 15 AI chatbots and tools currently in use.