OpenAI, the creator of ChatGPT, is facing mounting criticism for stepping into a void left by Anthropic, a rival AI firm that refused to compromise its restrictions on using artificial intelligence for surveillance and autonomous weapons systems. The decision has sparked protests from users and employees alike, with reports indicating a nearly 300% increase in ChatGPT uninstalls following the announcement of the deal. OpenAI CEO Sam Altman acknowledged the initial agreement was “opportunistic and sloppy,” subsequently publishing an internal memo outlining amendments intended to address concerns.
However, experts warn that these amendments, while appearing to impose limitations, are riddled with ambiguity and offer little genuine protection against government overreach. The core issue lies in the government’s historically lax interpretation of “applicable laws” regarding surveillance, often prioritizing expansive data collection over individual privacy rights. This raises serious questions about whether OpenAI’s partnership with the Pentagon will truly prevent the use of its AI technology for mass surveillance, or simply repackage it under a veneer of legal compliance.
The amended contract states that the AI system “shall not be intentionally used for domestic surveillance of U.S. Persons and nationals,” consistent with laws like the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act (FISA) of 1978. But critics point out that the word “intentionally” is doing significant work in this sentence. For years, the U.S. Government has maintained that mass surveillance of citizens occurs “incidentally” – a byproduct of programs designed to collect communications outside the country, even when those communications involve U.S. Persons. DefenseScoop reported on the Pentagon’s plans to integrate ChatGPT into its GenAI.mil system, already used by over a million personnel.
Further muddying the waters, the contract includes language prohibiting “deliberate” tracking or monitoring of U.S. Persons, but acknowledges the government’s reliance on commercially acquired data to circumvent stronger privacy protections. Similarly, a clause forbidding “unconstrained monitoring” of private information leaves the definition of “unconstrained” open to interpretation. These “weasel words,” as legal experts call them, create ambiguity that shields the government from accountability. This approach mirrors the negotiations with Anthropic, where the Pentagon sought to adhere to red lines “as appropriate,” retaining flexibility in practice.
The Promise and Peril of AI in Defense
OpenAI has stated that the Pentagon has committed to preventing the National Security Agency (NSA) from accessing its tools without a separate agreement, and that its system architecture will help verify compliance with these restrictions. However, past experience demonstrates that secret agreements and technical assurances are insufficient safeguards against surveillance agencies. Strong, enforceable legal limits and transparency are essential, yet conspicuously absent from this arrangement.
The situation highlights a broader trend: consumer-facing companies balancing public assurances about ethical AI practices with lucrative government contracts. OpenAI promises to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but enabling mass surveillance arguably does both. This contradiction raises fundamental questions about the role of private companies in shaping the boundaries of privacy and civil liberties.
The U.S. Department of Defense announced the incorporation of ChatGPT into GenAI.mil, its existing generative AI platform, in February 2026. This move follows a $200 million contract awarded to OpenAI, alongside xAI, Google, and Anthropic, for “frontier AI” projects. The Pentagon intends to make OpenAI’s large language models available to all 3 million Department personnel, aiming to enhance mission execution and readiness.
A History of Shifting Interpretations
The government’s willingness to embrace broad interpretations of “applicable laws” is not new. Throughout history, extreme and legally questionable actions have been justified under existing legal frameworks. This pattern underscores the danger of relying on companies to self-regulate or enforce ethical boundaries. The public shouldn’t have to depend on a small group of individuals – be they CEOs or Pentagon officials – to protect fundamental rights.
Recent backlash against OpenAI, including a surge in subscription cancellations, demonstrates growing public concern about the ethical implications of AI-powered surveillance. BTimesOnline reported on the “Cancel ChatGPT” movement, fueled by fears that the technology could be used for domestic surveillance.
What’s Next?
The OpenAI-Pentagon partnership sets a concerning precedent, signaling a willingness to prioritize government contracts over robust privacy protections. While OpenAI has attempted to address concerns through contractual amendments, the inherent ambiguity of the language and the government’s history of expansive surveillance practices raise serious doubts about the effectiveness of these measures. The coming months will be critical in observing how the government implements this technology and whether it adheres to the stated limitations, or exploits loopholes to expand its surveillance capabilities. Continued scrutiny and advocacy from privacy organizations and the public will be essential to holding both OpenAI and the Pentagon accountable.
What are your thoughts on the ethical implications of AI in defense? Share your perspective in the comments below.