
EU Investigates X Over AI‑Generated Sexual Deepfakes and DSA Violations

by Omar El Sayed - World Editor


X Faces EU Scrutiny Over AI-Generated Content, Deepfakes

Brussels – The European Commission has launched a formal investigation into X, formerly known as Twitter, over concerns it is failing to adequately address the risks associated with the platform’s integration of artificial intelligence. The probe centers on X’s “Grok” AI and its potential to disseminate illegal content, notably sexually explicit deepfakes and manipulated media.

EU Commission Cites Failure to Assess Risks

The Commission alleges that X did not conduct sufficient risk assessments before launching Grok, violating Article 35 of the Digital Services Act (DSA). This article mandates that platforms implement “appropriate and effective mitigation measures” to address systemic risks. The DSA empowers the EU to regulate large online platforms, ensuring a safer digital environment for users.

Officials are particularly concerned about the potential for Grok to amplify harmful content and endanger the physical and mental well-being of citizens. Executive Vice President Henna Virkkunen stated that sexual deepfakes represent a form of violence and that protecting vulnerable individuals – especially women and children – is paramount, even amidst technological advancements. The investigation will specifically examine whether X’s algorithms systematically violate Articles 34 and 35 of the DSA, and whether the company deliberately weakened protective measures.

Digital Services Act: A Growing Trend?

The DSA, which came into force in February 2024, imposes its strictest obligations on online platforms operating in the EU with over 45 million users. It requires content moderation, transparency, and accountability. This action against X signals the end of a ‘grace period’ for generative AI models within social networks. The EU is increasingly focused on holding platforms responsible for the content hosted on their sites, even when that content is generated by AI.

Potential Consequences for X

The Commission’s investigation could result in significant penalties for X, including fines of up to six percent of its global annual turnover. The EU also has the authority to order interim measures to immediately address any imminent threats to user safety. This probe represents a critical test of the DSA’s enforcement capabilities.

| Regulation | Key Requirement | X’s Alleged Violation | Potential Penalty |
| --- | --- | --- | --- |
| Digital Services Act (DSA) | Platforms must assess and mitigate systemic risks. | Failure to conduct a risk assessment before launching Grok. | Fines of up to 6% of global annual turnover. |
| DSA Articles 34 & 35 | Duty of care to protect users from illegal content. | Potential algorithmic amplification of harmful content. | Interim measures to address immediate dangers. |
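To make the fine ceiling described above concrete, here is a minimal sketch of the arithmetic. The turnover figure is purely hypothetical and illustrative, not X’s actual revenue:

```python
def max_dsa_fine(global_annual_turnover: float, rate: float = 0.06) -> float:
    """Ceiling on a DSA fine: up to 6% of global annual turnover."""
    return global_annual_turnover * rate

# Hypothetical global annual turnover of 3.0 billion EUR (illustrative only)
ceiling = max_dsa_fine(3_000_000_000)
print(f"Maximum possible fine: EUR {ceiling:,.0f}")
```

The 6% figure is a ceiling, not a fixed amount; any actual fine would depend on the severity and duration of the infringement.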



The European Union has launched a formal examination into X (formerly Twitter) over concerns regarding the proliferation of AI-generated sexual deepfakes and potential violations of the Digital Services Act (DSA). This marks the first investigation under the DSA targeting a major social media platform for systemic risks related to illegal content. The investigation, announced on January 29, 2026, signals a significant escalation in the EU’s efforts to regulate online platforms and protect users from harmful content.

What are Deepfakes and Why the Concern?

Deepfakes, created using artificial intelligence, are manipulated videos or images that convincingly depict individuals doing or saying things they never did. While not all deepfakes are malicious, the creation of non-consensual, sexually explicit deepfakes – often targeting women – has become a widespread and deeply damaging form of online abuse.

* Psychological Harm: Victims experience severe emotional distress, reputational damage, and potential real-world consequences.

* Erosion of Trust: The increasing sophistication of deepfakes undermines trust in online media and information.

* Legal Ramifications: Creating and distributing deepfakes can lead to legal repercussions, including defamation and privacy violations.

* Rapid Spread: Social media platforms facilitate the rapid and widespread dissemination of these harmful images and videos.

The DSA and X’s Obligations

The Digital Services Act, which came into full effect in February 2024, imposes stringent obligations on very large online platforms (VLOPs) like X. These obligations include:

  1. Risk Assessment: VLOPs are required to systematically assess and mitigate systemic risks arising from their services, including the spread of illegal content.
  2. Content Moderation: Platforms must implement effective content moderation systems to remove illegal content promptly.
  3. Transparency: VLOPs must be transparent about their content moderation policies and practices.
  4. User Reporting Mechanisms: Easy-to-use and effective mechanisms for users to report illegal content are mandatory.
  5. Cooperation with Authorities: Platforms must cooperate with EU authorities and provide access to data for investigations.
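The user-reporting obligation above follows the DSA’s “notice and action” mechanism (Article 16), which specifies what information a report must contain. The sketch below is an illustrative data structure reflecting those fields; it is not X’s actual reporting API, and all names and values are hypothetical:

```python
# Illustrative structure only: the information the DSA's "notice and action"
# mechanism (Article 16) requires a report of illegal content to contain.
# This is a sketch, not any platform's real API schema.
notice = {
    "explanation": "Non-consensual sexually explicit deepfake of a private individual",
    "content_location": "https://example.com/post/123",  # exact URL of the item
    "submitter": {
        "name": "Jane Doe",        # identity may be withheld for certain offences
        "email": "jane@example.org",
    },
    "good_faith_statement": True,  # bona fide belief the notice is accurate
}

for field in sorted(notice):
    print(field)
```

A platform that receives a notice with these elements is deemed to have actual knowledge of the content, which triggers its duty to act on it promptly.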

The EU believes X has failed to adequately address the systemic risks associated with the spread of AI-generated deepfakes on its platform, potentially breaching its obligations under the DSA.

Specific Concerns Raised by the EU Commission

The Commission’s investigation focuses on several key areas:

* Insufficient Response to Notices: Concerns that X has been slow to respond to notices regarding illegal content, including deepfakes.

* Lack of Transparency: Criticism regarding the lack of transparency surrounding X’s content moderation policies and their application to deepfakes.

* Inadequate Content Moderation Systems: Doubts about the effectiveness of X’s systems in detecting and removing deepfakes.

* Algorithmic Amplification: Allegations that X’s algorithms may be inadvertently amplifying the reach of harmful deepfake content.

* Verification of Users: Questions surrounding the platform’s user verification processes and their ability to prevent the creation of fake accounts used to distribute deepfakes.

Potential Consequences for X

If found in violation of the DSA, X could face significant penalties:

* Fines: Fines of up to 6% of the company’s global annual revenue.

* Periodic Penalty Payments: Ongoing financial penalties for continued non-compliance.

* Suspension of Services: In extreme cases, the EU could order the temporary suspension of X’s services within the EU.

* Increased Scrutiny: Heightened regulatory oversight and more frequent audits.

Real-World Examples & Previous Actions

This investigation builds upon a growing trend of regulatory action against social media platforms regarding harmful content. In late 2025, several European countries began pursuing individual legal cases against X related to deepfake abuse. Furthermore, advocacy groups have consistently highlighted the platform’s shortcomings in addressing this issue, publishing reports detailing the ease with which deepfakes can be created and disseminated on X.

A notable case in Germany involved a politician targeted by a deepfake video shortly before an election, prompting calls for stricter regulations and platform accountability. This incident underscored the potential for deepfakes to interfere with democratic processes.

What This Means for Users & Other Platforms

This EU investigation sends a clear message to all social media platforms: proactive measures to combat the spread of illegal and harmful content, particularly AI-generated deepfakes, are no longer optional.

* Increased Platform Responsibility: Platforms will be expected to invest in more sophisticated content moderation technologies and processes.

* Enhanced User Protections: Users can anticipate stronger protections against online abuse and more effective mechanisms for reporting harmful content.

* Focus on AI Regulation: This case will likely accelerate the progress of broader regulations governing the use of AI technologies.

* Importance of Digital Literacy: Raising public awareness about deepfakes and promoting digital literacy skills will be crucial in combating their spread.

Practical Tips for Identifying Deepfakes

While technology to detect deepfakes is improving, users can take steps to protect themselves:

* Look for inconsistencies: Pay attention to unnatural blinking, lip-syncing issues, or strange lighting.

* Check the source: Verify the credibility of the source sharing the content.

* Reverse image search: Use tools like Google Images to see if the image or video has been altered.

* Be skeptical: If something seems too good (or too bad) to be true, it probably is.

* Report suspicious content: Report any suspected deepfakes to the platform.
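One low-tech complement to the checks above can be sketched with Python’s standard library: comparing cryptographic digests reveals whether two copies of a file are byte-identical. Note this is much weaker than the perceptual matching that real reverse-image-search tools use (it flags any re-encoding, not just manipulation), and the byte strings here are stand-ins for real file contents:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a file's bytes; any change alters the digest."""
    return hashlib.sha256(data).hexdigest()

original = b"original image bytes"           # placeholder for a trusted copy
suspect = b"possibly altered image bytes"    # placeholder for a shared copy

if digest(original) != digest(suspect):
    print("Copies differ: the suspect file is not byte-identical to the original.")
```

A matching digest proves the copy is untouched; a mismatch only tells you the files differ, so manual inspection or a proper reverse image search is still needed.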
