A partnership between OpenAI and the U.S. Department of Defense, announced on February 28, 2026, triggered an immediate backlash from a segment of ChatGPT users, resulting in a surge of app uninstalls and negative reviews.
Within days, the AI assistant's mobile application saw a dramatic spike in uninstallations, which jumped 295% on February 28th compared to the previous day, according to Sensor Tower data reported by TechCrunch. Negative sentiment surged in parallel: one-star reviews rose 775% the same day and climbed another 100% on March 1st, while five-star ratings fell by half.
U.S. downloads of the application also fell, dropping 13% on Saturday and 5% on Sunday and reversing a previously upward trend. Users organized opposition on social media around the hashtag “#QuitGPT,” encouraging others to delete the app or cancel their subscriptions in protest of the collaboration with the U.S. military.
The controversy provided an immediate boost to Claude, a competing AI platform developed by Anthropic, which had declined a similar partnership with the U.S. Army days earlier. Sensor Tower data showed U.S. downloads of Claude rising 37% on February 27th and 51% on February 28th, with estimates suggesting an 88% jump on Saturday alone, briefly allowing Anthropic's application to surpass ChatGPT in daily U.S. download volume. Claude briefly reached the number one position among free iPhone applications in Apple's App Store, up from around 40th place in early February.
Despite this surge, Claude's usage remains far below ChatGPT's. Similarweb data indicates ChatGPT attracts over 30 million weekly web users, compared with approximately 3 million for Claude. On mobile, the gap is comparable: ChatGPT sees over 20 million daily users versus fewer than 2 million for Anthropic's application.
The dispute highlights a structural rivalry between OpenAI and Anthropic. Founded in 2021 by former OpenAI researchers, including Dario Amodei, Anthropic prioritizes AI safety and governance. The company developed an internal “constitution” of approximately 30,000 words to guide its assistant’s behavior. OpenAI, in contrast, has adopted a more pragmatic approach to collaboration with public institutions and governments.
OpenAI CEO Sam Altman acknowledged the communication surrounding the partnership was rushed, stating, “We shouldn’t have rushed to ship that on Friday.” He also clarified that the agreement includes contractual guarantees prohibiting domestic surveillance or the use of personal data to track American citizens.
Margherita Pagani, a professor of AI for Business at SKEMA Business School and director of the school’s Centre for Artificial Intelligence, compared the dynamic to the early days of social media, when Facebook held a near-monopolistic position before the emergence of modern competitors. “Today, OpenAI finds itself in a comparable position with ChatGPT, but competition is clearly intensifying,” she said.
Pagani emphasized that ethical considerations are becoming a key differentiator. “In this highly technological market, where innovation is often comparable between players, the ethical dimension becomes a marketing and competitive argument in its own right.”
The episode underscores the growing strategic importance of artificial intelligence for governments. AI models are already used to analyze large datasets from intelligence sources, including satellite imagery, intercepted communications, and field reports, accelerating strategic analysis and operational planning. This raises a central question for technology companies: how far should they go in collaborating with public authorities when their technologies could be used in a military context?
The Defense Department’s agreement with OpenAI has also coincided with a separate dispute between the department and Anthropic over AI safety protocols, according to The New York Times.