
Viral Deepfake Demo Prompts ByteDance to Limit New AI Video Tool
Within just three days of release, Chinese tech giant and TikTok owner ByteDance has temporarily restricted features in its new generative video model.
The company announced the changes after a popular content creator demonstrated that the tool could recreate his voice and office environment, and even generate a rear view of his body, from a single photo.
A preview version of the AI model, dubbed Seedance 2.0, entered open testing Saturday on Jimeng, ByteDance’s AI content platform, immediately drawing comparisons with Sora 2, the video generation model released last year by ChatGPT maker OpenAI.
On Monday, online influencer Pan Tianhong, founder of tech media outlet Media Storm and known online as Tim, posted a video showing the tool generating highly realistic scenes based on limited input.
He described the results as “terrifying” and suggested that traditional film production would soon face disruption. “Traditional film and television production has entered a countdown to being swept away by an AI tsunami,” Pan said on video-streaming platform Bilibili.
The post rapidly drew millions of views, propelling the AI model into the spotlight. Across domestic social media, clips of users playing basketball with LeBron James, cats fighting Godzilla, and recreations of famous battle scenes emerged, with commenters praising the new tool.
By the time Seedance 2.0 officially launched Thursday, the topic had garnered more than 70 million views on the microblogging platform Weibo, with some users expressing concerns over authorship and copyright protection, while experts warned of potential legal risks.
In response, domestic media quoted a Jimeng member of staff saying that Seedance 2.0 would restrict the use of real-person reference materials to maintain what it called “a healthy and sustainable media environment.” The system now blocks direct uploads of celebrity faces and requires users to verify identity before generating content of themselves.
Despite the new restrictions, Seedance 2.0 is being rolled out gradually. Access requires points, with higher-tier memberships unlocking additional capabilities, including faster processing, higher resolution, and lip-sync features.
The tool has also drawn praise from industry figures. The producer of the hugely popular video game “Black Myth: Wukong” called it “the strongest video generation model” currently available, while Tang, a professional AI-generated content creator and top Jimeng collaborator, said the model reduced production time for a one-minute video from three or four days to about half a day.
“The model shows a qualitative leap in visual understanding — from dialogue and performance to camera movement and effects,” she said.
Speaking to Sixth Tone, Tang, who runs both tutorial and short AI video accounts, said Seedance 2.0 was a “double-edged sword.” The new model benefits her short-video account by improving its efficiency, she added, but poses greater challenges for her second, more technically oriented account.
“Those of us who benefited from technical advantages now need new outlets,” Tang said. “Experience alone is no longer enough — creators will need stronger IP and distinctive identities.”
At the same time, the model’s realism has amplified concerns over authorship, likeness rights, and copyright protection. After AI-generated fight scenes featuring Hong Kong actor Stephen Chow circulated online, his agent publicly questioned whether such works constituted infringement.
Sha Lei, a professor at the Institute of Artificial Intelligence at Beihang University in Beijing, told domestic media that such limitations reflect necessary safeguards. “When technological progress accelerates, maintaining boundaries against misuse becomes essential,” he said.
Editor: Marianne Gunnarsson.
(Header image: VCG)