This April, after enduring days of escalating online harassment, the mother of a 7-year-old — known online simply as the “Shanghai woman” — jumped to her death from her apartment building.
The attacks centered on, of all things, a “thank you” post the woman wrote to a delivery driver who had travelled 27 kilometers to bring food to her father during the city’s COVID-19 lockdown. Although she explained that the rider had turned down her offer of cash and that she was unemployed at the time, her decision to send the driver 200 yuan ($27) in phone credits was judged insufficient by the court of online opinion. Social media users deemed her a “petty Shanghainese” who had taken advantage of a hard-working courier — and who deserved to be punished for her transgression.
Attributing a person’s suicide to any one event is risky, but despite a lack of hard data, there is a growing perception in China that online harassment isn’t just out of control; it’s claiming lives. Three months before the Shanghai lockdown, a 15-year-old who was sold by his birth parents as a baby — and was later rejected by them when he reached out to reconnect — took his own life, a decision many observers linked to a vicious social media campaign to shame him for “hyping” his story.
As the toll of online violence mounts, regulators and platforms have struggled to respond effectively. Although China’s internet watchdog has vowed to fight online harassment, it remains unclear whether officials and tech companies fully grasp the nature and scale of the issue, much less whether they have the mindset and tools needed to effectively combat it.
In fairness, Chinese social media companies have taken steps to clamp down on trolls, harassment, and hate speech on their platforms. According to Weibo’s community guidelines, verbal abuse, the venting of personal anger, humiliating others, and hate speech based on personal origin, including birthplace and cultural background, are all classified as “harmful information” subject to potential deletion.
Similarly, short video app Douyin — the version of TikTok accessible on the Chinese mainland — singles out region-based discrimination and hate speech as content subject to moderation. But platforms often struggle to balance their desire to regulate speech with their business interests, and in the absence of legal repercussions for hosting hate speech, they all too often err on the side of the latter. Frequently, posts are only deleted after the real-world damage is done, in response to pressure from the public. By then, it’s too late.
Internet regulators, too, are typically far more cautious when it comes to hate speech and verbal harassment than they are with other sensitive categories, such as politics or pornography. This is true even in relatively clear-cut cases, such as hate speech grounded in cultural and regional discrimination, prejudice, and bigotry. In the case of the “Shanghai woman,” her identity as a Shanghai local — and the wealth and privilege it implied — made her an easy target for online vigilantes.
It is not just an attitude problem. Tactics matter. Currently, hate speech is often dealt with on a case-by-case basis. Rather than recognizing online hate as a collective problem with the potential to cause real-world harm, regulators and platforms treat it as a series of individual online disputes.
Meanwhile, because it is difficult to establish a link between any one online comment and the offline fallout, such as suicide or self-harm, bringing attackers to justice is all but impossible. In the absence of laws regulating hateful or violent speech online, victims or their families are left to pursue justice through other avenues, such as slander or defamation.
A keyword search for “hate speech” on China Judgements Online, China’s national court record database, returns just two results, both from the same case. A similar keyword search for “online violence” produces just 54 court records. In most cases, hate speech and online violence were mentioned, not as the cause of these lawsuits, but as background context.
Even more concerning, after years of corporate neglect and social media polarization, many people have come to accept the existence of cyber violence, or even justify it. For example, when a 20-year-old female student died in May after an impatient emergency call operator failed to dispatch an ambulance in a timely fashion, anger at the operator’s callous attitude boiled over online. Social media users posted the operator’s social media accounts, real name, address, and photo, as well as the accounts of her boyfriend.
In the comments, users embraced the hate. “This is my first time doing online violence against someone, coach me,” wrote one user. “If she is not treated with online violence, people in this comment thread should all be held responsible,” added another. Neither comment has been deleted as I write this.
To avoid the worst-case scenario, in which we find ourselves stuck in a spiral of hatred, every effort should be made to increase public awareness of the nature and consequences of online violence and harassment. We know from the examples of politics and pornography that platforms can regulate content, but doing so will require regulators and platforms to accept that hate speech and trolling are not isolated cases. Rather, they are symptoms of a toxic information environment, one which they bear responsibility for.
In China, the Beijing Suicide Research and Prevention Center can be reached for free at 800-810-1117 or 010-82951332. In the United States, the National Suicide Prevention Lifeline can be reached for free at 1-800-273-8255. A fuller list of prevention services by country can be found here.
All this week, Sixth Tone is taking a closer look at online harassment, digital trolls, and cyberbullying on the Chinese internet. Part one, on a journalist’s search for online trolls, can be found here.
Editors: Cai Yineng and Kilian O’Donnell.
(Header image: Pavel Naumov/VCG)