
    China Drafts Rules to Regulate AI ‘Boyfriends’ and ‘Girlfriends’

    The proposal would require platforms to monitor suicide risk, protect minors, restrict harmful content, and trigger human intervention in cases of distress.

    As China continues to tighten oversight of AI, its cyberspace regulator has moved to rein in AI “boyfriends” and “girlfriends,” proposing rules that would require platforms to intervene when users express suicidal or self-harm tendencies. 

    The draft regulation, released Dec. 27 by the Cyberspace Administration of China, would require chatbot providers to strengthen protections for minors and restrict obscene or harmful content, while also encouraging similar services tailored to elderly users.

    The regulator defines “anthropomorphic interactive services” as AI systems that “simulate human personality traits, thinking patterns, and communication styles” and engage in “emotional interaction.” The draft is open for public comment through Jan. 25. 

    In recent years, Chinese tech companies have moved quickly to develop AI companions as more users turn to chatbots for virtual relationships. Two leading Chinese AI chatbot startups, Zhipu AI and MiniMax, filed for public listings in Hong Kong over the past week, potentially becoming the world’s first such startups to go public.

    The draft rules prohibit AI chatbots from encouraging suicide or self-harm, or from harming users’ mental health through verbal abuse or emotional manipulation. Chatbots would also be barred from generating gambling-related, obscene, or violent content. 

    Minors and elderly users would be required to provide guardian or emergency contact information during registration. If such a user expresses suicidal intent, the service provider would be required to initiate immediate human intervention and contact the designated person. Minors would also need guardian consent before accessing AI companionship. And guardians would be able to block specific AI personas, set time limits, and restrict in-app spending.

    While the draft encourages companionship services for the elderly, it bans AI service providers from simulating users’ relatives or other real-life personal relationships.

    Zhao Wenya, a lawyer at Beijing Ocean Law Firm, told Sixth Tone that the “very strict” safety obligations pose significant implementation challenges, chief among them building algorithms that can reliably recognize human language cues such as expressions of suicidal intent.

    The draft would also require platforms to clearly disclose that users are interacting with AI rather than humans, and to prompt users to stop if continuous use exceeds two hours.

    Chatbots with more than one million registered users or over 100,000 monthly active users would be required to submit security assessment reports to the regulator, the draft adds.

    According to AI product tracker Aicpb.com, MiniMax’s virtual character chat app Xingye had 4.6 million monthly active users in December, while Mao Xiang, a similar product developed by TikTok owner ByteDance, reported 4.7 million. MiniMax’s listing prospectus shows that Xingye and its overseas version, Talkie, generated more than $18.7 million in revenue in the first nine months of last year, with users spending more than 70 minutes per day on average.

    Under the draft rules, platforms would also be prohibited from using interaction data and sensitive personal information to train large language models unless users give “separate and explicit consent.”

    This goes beyond the “default consent” approach in many existing chatbot privacy policies, under which users must proactively opt out if they do not want their data used for training. 

    Yet legal experts caution that regulation alone may not offer a one-size-fits-all solution to the risks such services pose.

    Editor: Marianne Gunnarsson.

    (Header image: Olemedia/Getty Creative/VCG)