X Adds Option to Block Grok Image Edits Amid AI Deepfake Concerns

by Sophie Lin - Technology Editor

Users of X, formerly Twitter, now have a limited ability to prevent the platform’s AI chatbot, Grok, from altering images they upload. The new feature, discovered within the iOS app’s image/video upload menu, allows users to block Grok from creating modifications of their content, but experts say it’s a narrow fix to a much larger problem.

The addition of this control comes amid growing scrutiny of Grok’s image generation capabilities, which have been exploited to create a significant volume of inappropriate content. The situation escalated at the beginning of 2026, when newly introduced image generation tools were reportedly used to create approximately 3 million sexualized or nudified images. An estimated 23,000 of those depicted children, raising serious concerns about exploitation and abuse, as detailed by the Center for Countering Digital Hate.

While X and its parent company, xAI, haven’t publicly announced the new blocking feature, its arrival suggests a response to mounting pressure, including two separate investigations launched by European Union regulators into Grok’s handling of generated images. The investigations center on the proliferation of harmful content and potential violations of data privacy regulations.

The new feature is a simple toggle within the app’s interface, making it easily accessible to users. However, its limitations are significant. As The Verge reported, the block primarily prevents tagging Grok in replies to create image edits. This means determined individuals can still circumvent the restriction through other methods, continuing to utilize generative AI for non-consensual and harmful purposes.

Grok’s Troubled History with Image Generation

This isn’t the first time xAI has attempted to address concerns about Grok’s image generation. In January, the company implemented restrictions on image editing, blocking users in certain locations from generating revealing images of people, particularly where such content is illegal. However, xAI did not specify which countries were subject to these restrictions, and the effectiveness of the measures has been questioned.

The current situation highlights the challenges of moderating AI-generated content at scale. Grok, built by Elon Musk’s xAI, is positioned as an advanced AI chatbot with capabilities including voice chat, image and video generation, real-time search, and advanced reasoning, as described on xAI’s website. The chatbot is available through grok.com, iOS, and Android, and is offered to SuperGrok and Premium+ subscribers, as well as through the xAI API.

A Limited Solution to a Complex Problem

Experts suggest that the current blocking feature is a “token gesture” that doesn’t address the fundamental issues with Grok’s capabilities. While preventing direct alterations to uploaded files is a step in the right direction, it doesn’t prevent the creation of harmful images based on other prompts or data sources. The core problem remains: Grok can still be used to generate non-consensual intimate imagery.

xAI could potentially halt image generation altogether until more robust safeguards are in place, a solution the company considered previously. However, doing so would likely impact a key feature driving subscriptions to Grok’s premium tiers, including SuperGrok Heavy, which provides access to Grok Heavy and higher rate limits. Grok 4, which xAI bills as the most intelligent model in the world, includes native tool use and real-time search integration, and is available to SuperGrok and Premium+ subscribers as well as through the xAI API, according to the company’s website.

If xAI continues to offer only limited solutions, its claims of maintaining a “zero-tolerance space for nonconsensual nudity” will likely be viewed with skepticism. The company recently unveiled Grok 4, touted as the world’s most powerful AI model, but its ability to effectively prevent misuse remains a critical concern.

The ongoing investigations in the EU and the continued emergence of workarounds to Grok’s limitations underscore the need for more comprehensive and proactive measures to address the risks associated with generative AI. The future of Grok, and similar AI tools, will depend on the ability of developers to balance innovation with responsible development and robust safety protocols.

What steps xAI takes next to address the underlying issues with Grok’s image generation capabilities remains to be seen.
