
    Chinese Man Uses ChatGPT To Create Fake News, Arrested

    Police said the suspect created fake content about nine people killed in a train accident, which he spread on search giant Baidu.

    A content creator surnamed Hong from Dongguan in southern China’s Guangdong province has been arrested for producing and spreading AI-generated fake news. Local police announced Sunday that the suspect used ChatGPT to create fake content about nine people killed in a train accident. 

As part of a three-month campaign to counter vulgar and fake content online, police in Gansu’s Pingliang City spotted a report claiming nine people had died in a train accident in the city’s Kongtong District.

    On April 25, police determined that the report was fake and tracked down Hong, the person who ran the Baidu accounts used for spreading the rumors. 

    The police said they found different versions of this report citing other cities across the province, including Lanzhou, Longnan, Dingxi, and Qingyang. All reports had similar details. 

    Hong admitted to spreading the rumors across his 21 accounts on China’s search giant Baidu, which drew more than 15,000 clicks online and netted him an undisclosed income.

Hong told police that he used ChatGPT, which has become an internet sensation in China since its launch, to draft and edit reports based on trending social news from the past few years, helping him evade plagiarism checks on Baidu’s content platform.

    Hong is accused of “picking quarrels and provoking trouble,” police said in the statement.

In February, a ChatGPT-generated government notice, initially intended as a joke, caused massive public confusion and ended in a police investigation.

With billions being poured into AI research, China has stepped up its regulation of AI-generated content. The country’s top internet watchdog has also rolled out draft rules for generative AI amid mounting concerns over misinformation and privacy.

The draft rules require all generative AI products that use algorithms and models to generate content such as text, images, audio, video, and code to undergo a security review before they are released to the public.

    The Cyberspace Administration of China also added that the content must be “true and accurate,” and measures must be introduced to prevent discriminatory content.

    Editor: Apurva. 

    (Header image: Marco BERTORELLO/VCG)