Beijing’s Internet Court has delivered a landmark ruling in a case involving the unauthorized use of an actress’s image through AI-powered face-swapping technology. The court found two companies liable for infringing on the actress’s personal information rights, ordering them to issue public apologies and provide financial compensation for damages. This decision signals a growing legal awareness in China regarding the ethical and legal implications of artificial intelligence, particularly concerning the protection of individual likenesses in the rapidly expanding digital media landscape.
The case centered on a short drama in which the actress’s face was digitally superimposed onto a character without her consent. This led to public confusion, with many viewers believing she had participated in the production. The actress subsequently filed a lawsuit against both the production company that created the drama and the video platform hosting it, alleging unauthorized commercial use of her image. The ruling underscores the growing importance of protecting personal data and intellectual property in the age of AI-generated content.
During the trial, the production company argued that the AI-generated character merely resembled the actress, claiming no specific prompts referencing her were used in the creation process. They maintained the resemblance was statistically probable and that the segment was quickly removed after her complaint, resulting in no actual harm. The video platform, meanwhile, asserted it had legally obtained distribution rights and was therefore not responsible for the content itself. However, the court rejected both defenses.
Judge Zhao Qi, referencing China’s Civil Code, clarified that even slight alterations to an individual’s likeness through AI do not negate infringement if the person is still recognizable to the public. “The segments bore a strong resemblance to the actress, and public comments identified the character as her,” Judge Zhao stated, adding that the production company’s inability to demonstrate its AI process further weakened its claims. The court found that two segments within the 44-episode drama clearly utilized face-swapping technology, contributing to the public misidentification.
The court also held the streaming platform accountable, despite its possession of distribution rights. “The streaming platform, despite having distribution rights, was held liable for not reviewing the content, thus failing in its duty to prevent infringement,” Judge Zhao explained. This aspect of the ruling highlights the responsibility of platforms to proactively monitor and block infringing material, even when it is obtained through legal channels. Taylor Wessing provides further details on the case.
The Rise of AI and the Need for Clear Legal Boundaries
This case arrives amid a surge in the popularity of short-form dramas and the increasing accessibility of AI-powered tools. The court acknowledged the growing appeal of these short dramas but emphasized that technological advancements must not come at the expense of individual rights. Judge Zhao Qi stressed the need for both creators and platforms to “enhance content review processes to prevent infringements,” adding that “strict adherence to legal boundaries by all industry players is essential for the healthy development of the short drama market.”
The ruling comes as China continues to refine its regulatory framework surrounding artificial intelligence. A recent report from ICLG.com details China’s key developments in AI governance in 2025, highlighting the government’s focus on balancing innovation with ethical considerations and legal protections. The Beijing Internet Court’s decision aligns with this broader trend, demonstrating a willingness to enforce existing laws in the context of new technologies.
Implications for AI-Generated Content and Personality Rights
This ruling sets a significant precedent for future cases involving AI-generated content and the protection of personality rights in China. It clarifies that the unauthorized use of an individual’s likeness, even when altered through AI manipulation, constitutes a violation of personal information rights. The decision also confirms that platforms must actively monitor for and prevent the distribution of infringing content, even when they hold legally obtained distribution rights. IAPP offers a global perspective on AI governance law and policy, including developments in China.
The case also touches upon the complex intersection of AI, copyright, and personality rights, as explored in recent jurisprudence from Beijing. The National Law Review provides analysis of these emerging legal trends.
Looking ahead, this ruling is likely to encourage further legal challenges against the unauthorized use of personal likenesses in AI-generated content, not only in China but potentially globally. It reinforces the need for clear legal frameworks and ethical guidelines to govern the development and deployment of AI technologies, ensuring that individual rights are protected in the digital age. The ongoing evolution of AI law will undoubtedly be a critical area to watch in the coming years.
What are your thoughts on the implications of this ruling for the future of AI and personal privacy? Share your comments below.