Can Taylor Swift defeat pornographic deepfakes?

The case is symptomatic of the challenges posed by the democratization of generative artificial intelligence, and it could mark a turning point in public awareness of a scourge that targets almost exclusively women. Singer Taylor Swift was the subject of pornographic deepfakes last week: ultra-realistic photo and video montages generated with AI and distributed on social networks. The content allegedly originated in a Telegram group specializing in the creation of sexual montages of women.

One of the images featuring the American star was viewed nearly 50 million times on X, where it remained accessible for almost 24 hours before being deleted, alarming the millions of fans of the most listened-to artist in the world in 2023. The incident pushed the “swifties” to mobilize against the proliferation of these deepfakes. Fans massively posted the message, written in capital letters, “PROTECT TAYLOR SWIFT”, alongside clips from the artist’s concerts to drown out the explicit content. More than 200,000 messages of support were recorded over the weekend, part of a broader protest movement against pornographic deepfakes of women.

The White House wants to push Congress to pass a federal law against deepfake porn

This case once again highlights the inability of platforms, notably X, to act against disinformation, a phenomenon that has accelerated with the emergence of generative artificial intelligence. The observation has stirred up the American political class in the midst of the presidential race.

The White House has called on Congress to legislate to protect victims of this type of AI-generated pornographic video. So far, around ten American states have taken measures in this direction, but no federal law criminalizes the dissemination of deepfakes. The platforms were also urged to act more quickly to combat the spread of these images.

Faced with the controversy, X, Elon Musk’s network, ultimately removed the offending images, closed the accounts involved in their distribution and blocked all searches linked to the singer’s name. “A temporary measure, taken with extreme caution, because we prioritize safety,” according to the platform, which did not specify how long it would last.

X also announced the creation of a moderation center to fight the sexual exploitation of children, which will employ around a hundred people in the United States, a way of giving assurances to the authorities. Since its takeover by the billionaire, a fervent defender of freedom of expression, and the dismantling of its teams responsible for combating abusive content online, the platform has regularly been singled out for lax moderation and for its role in amplifying toxic messages. The company is also in the sights of the European Commission, which has opened an investigation into alleged breaches of European rules on content moderation and transparency.

Women are the first victims of deepfakes

Beyond the regulatory aspect, the case also illustrates a paradox of the rise of deepfakes. The phenomenon is regularly highlighted for the risks these montages pose to democracies, for example when they are used to disrupt an election or a conflict, yet such use cases represent only a small portion of the videos circulating around the world.

In reality, the technology is massively used to create sexual videos of women, famous and anonymous alike, without their consent. According to a study by the cybersecurity company Home Security Heroes, pornographic deepfakes represented 98% of global deepfake production last year. Some 113,000 videos of this type were posted on pornographic platforms last year, according to the magazine Wired. This content is easy to find through the search engines of adult sites. And with advances in AI, specialized forums now offer advice allowing anyone to “undress” the person of their choice in a few clicks from a single photo, a practice denounced in December by French journalist Salomé Saqué, herself a victim of a deepfake.

As journalist Lucie Ronfaut pointed out last year in her newsletter #Rule30 on Numerama, “the vast majority of harmful applications of deepfakes are misogynistic”. “The root of the problem is not technological,” she explained. “Creating a pornographic video of a woman without her consent is not (necessarily) about wanting to deceive Internet users. It is a desire for control. It doesn’t matter whether it’s real or fake. The trauma of the victims is real. So is the interest of men: to humiliate someone, to appropriate them, to ruin their life, or even out of simple curiosity.”

The fact that the target of these images is now the icon Taylor Swift, the most famous woman in the world, Time magazine’s 2023 Person of the Year, who made Apple bow to rights holders and pointed the finger at the bad practices of the global concert giant Live Nation, may finally move things forward. “If Taylor Swift can’t defeat Deepfake Porn, no one can,” writes Wired, which sees this affair as a trigger comparable to “Celebgate”, the 2014 iCloud leak that led to the distribution of nude photos of celebrities and pushed Apple to strengthen its security features. Taylor Swift has not yet spoken out on the subject. But her next statement will be closely followed.
