The images appeared on X, the social media platform formerly known as Twitter, in December 2025. They were AI-generated, sexualized depictions of UK journalist Daisy Dixon, created using the platform’s own Grok AI tool. It took days for X to geoblock the function that enabled their creation, and even then, the abusive images continued to spread.
Dixon’s experience is not isolated. A growing wave of deepfake abuse, overwhelmingly targeting women, is exposing the failures of legal systems, tech platforms, and societal protections. While the technology to create these manipulated images, audio, and videos has existed for some time, its weaponization against women is accelerating rapidly.
According to a 2023 report, 98 percent of deepfake videos online were pornographic, and 99 percent of that pornographic content depicted women. The prevalence of these videos had increased by an estimated 550 percent since 2019. The tools to create them are widely available, often free, and require minimal technical skill. Once posted, the content can be endlessly replicated, saved, and shared, making complete removal virtually impossible.
Underreporting remains a significant obstacle to accountability. Survivors who do come forward often face retraumatization as they are repeatedly asked to view and describe the abusive content to police, lawyers, and platform moderators. They may also encounter skepticism, with questions like, “Are you sure it’s not real?” or “Did you share intimate images before?” If a case reaches court, a survivor’s past and personal life are often scrutinized, while the perpetrator’s actions are not.
The scale of the harm is substantial, yet prosecutions remain rare. A UN Women survey found that 41 percent of women in public life who experienced digital violence also reported facing offline attacks or harassment linked to it. Deepfake abuse can even serve as an online catalyst for so-called “honour-based crimes” in certain cultural contexts, potentially leading to extreme physical violence or death.
The legal landscape is struggling to keep pace. Fewer than half of countries have laws addressing online abuse, and even fewer have specific legislation covering AI-generated deepfake content. Many “revenge porn” or image-based abuse laws were written before deepfakes existed, creating significant loopholes. In some countries, deepfake pornography or AI-generated nude images fall into legal grey areas, leaving survivors unsure whether the abuse is even illegal or whether perpetrators can be prosecuted.
Enforcement is further hampered by a lack of resources. Even when laws exist, investigators require digital forensics expertise, cross-border coordination, and platform cooperation to build a case. Digital forensics backlogs often stall cases before they even begin. Platforms are often slow or unwilling to share data with law enforcement, particularly in cross-border investigations.
Tech platforms have historically shielded themselves behind “intermediary” status, avoiding responsibility for user-generated content. This approach is facing increasing scrutiny as the harms grow more apparent.
Several jurisdictions are beginning to take action. Brazil amended its criminal code in 2025, increasing penalties for causing psychological violence against women using AI or technology to alter their image or voice. The European Union’s Artificial Intelligence Act imposes transparency obligations around deepfakes. The United Kingdom’s Online Safety Act prohibits sharing digitally manipulated explicit images, but its applicability to the creation of deepfakes, and to cases where intent to cause distress cannot be proven, remains uncertain. In the United States, the Take It Down Act explicitly covers AI-generated intimate imagery and requires platforms to remove it within 48 hours.
Addressing deepfake abuse requires urgent, coordinated action from governments, institutions, and tech platforms. This includes passing legislation that clearly defines AI-generated abuse, is grounded in consent, and establishes strict liability for perpetrators. Justice systems need training, resources, and dedicated capacity to collect and preserve digital evidence, and must clear the forensics backlogs that stall cases. Tech companies must be legally required to proactively monitor for and remove abusive content within mandatory timelines, cooperate with law enforcement, and face financial consequences for failing to act. Real support for survivors, including trauma-informed legal and law enforcement professionals and free legal aid, is also essential. Finally, education on digital literacy, consent, and online safety must begin at a young age and reach everyone, as prevention is as important as prosecution.
UN Women has warned that this is not a niche internet problem, but a “global crisis.”