
    AI Developer Not Liable for Hallucination Errors: Chinese Court

    The decision finds developers are not automatically responsible for AI hallucinations unless users can show fault and harm.

    A Chinese court has ruled for the first time that a developer is not legally responsible for an AI “hallucination” — fabricated information presented as fact — establishing a precedent for how such cases may be handled under Chinese law.

    The ruling, issued by the Hangzhou Internet Court, classifies AI-generated content as a service rather than a product in cases involving hallucinations. As a result, users must prove that a developer was at fault in the content-generation process and that the error caused actual harm.

    The decision was made in a lawsuit dismissed last month against an unnamed developer. The court found that the developer bore no liability after its AI invented a nonexistent campus of a real Chinese university and later told the user it would compensate him 100,000 yuan ($14,400) for the mistake. Neither party has appealed.

    The case arose in June 2025, when the plaintiff, surnamed Liang, used the AI to search for information about the university. The system fabricated a campus and continued to insist it existed even after Liang challenged the claim.

    When Liang presented evidence disproving the information, the AI responded that if its content were incorrect, it would compensate him and suggested he sue for damages through the Hangzhou Internet Court. 

Liang subsequently sued the developer for nearly 10,000 yuan in damages, arguing that the false information had misled him and that the AI had promised compensation.

    Rejecting the claim, the court ruled that the AI “does not possess civil subject status and therefore cannot independently make legally binding expressions of intent.” It added that the developer had not authorized the system to express intent on the company’s behalf.

    The court also found that Liang failed to demonstrate actual harm, noting that the false information did not affect his subsequent decisions.

    The court said AI-generated content generally does not constitute high-risk activity and that developers have limited ability to control AI responses. Imposing strict liability, it said, could hinder technological innovation.

AI hallucinations have increasingly made headlines in China. Last year, a security guard drew widespread attention after an AI offered, on behalf of its company, to sign his poetry for 100,000 yuan and even proposed a signing date — a deal that never materialized.

    Experts say the Hangzhou judgment helps clarify how courts may approach disputes involving AI-generated misinformation. 

    “The ruling offers important guidance for how courts may apply civil liability principles in future cases involving AI-related infringement disputes,” Tsinghua University Law School professor Cheng Xiao said. 

    Under existing regulations, AI service providers are required to review and remove prohibited, harmful, or illegal content, but they are not obligated to guarantee the accuracy of all generated information.

To address this, Cheng said, platforms should clearly warn users about the limitations of AI-generated content while continuing to improve the accuracy and reliability of such content. Courts, he added, should assess whether providers have met their duty of care by considering factors such as the potential impact of the content on users’ rights.

    Editor: Marianne Gunnarsson.

    (Header image: imaginima/Getty Creative/VCG)