The Pentagon is escalating pressure on artificial intelligence firm Anthropic, threatening to effectively blacklist the company if it doesn’t allow its AI tools to be used to develop autonomous attack drones and mass surveillance technologies. The move, revealed in reports following a Tuesday meeting, underscores the Department of Defense’s growing reliance on AI and its willingness to push boundaries despite ethical concerns raised by the technology’s developers.
Secretary of Defense Pete Hegseth reportedly issued an ultimatum to Anthropic, demanding the company lift safety restrictions on its AI by Friday at 5:01 pm. Failure to comply, officials warned, could result in Anthropic being designated a “supply chain risk,” cutting off access to lucrative defense contracts. Paradoxically, the Pentagon also indicated it could invoke the Defense Production Act to force Anthropic’s compliance, a move that raises complex legal questions.
Concerns Over Autonomous Weapons and Privacy
Anthropic CEO Dario Amodei has repeatedly voiced concerns about the potential misuse of AI, particularly in the realm of autonomous weapons systems. “I am worried about the autonomous drone swarm, right? The constitutional protections in our military structures depend on the idea that we find humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don’t really have those protections,” Amodei stated in a recent interview with podcaster Wes Roth. He also expressed apprehension about AI’s capacity to analyze private data, potentially violating Fourth Amendment rights. Specifically, Amodei worried AI could process private conversations captured by smart home devices to politically label individuals.
Representatives from Anthropic reportedly communicated these safety concerns during the meeting with Hegseth, highlighting the risks associated with unreliable AI control of weapons and the lack of clear regulations governing AI-powered mass surveillance. These concerns appear to have been largely dismissed by the Pentagon, which is prioritizing the rapid integration of AI into its military capabilities.
Policy Shift Raises Suspicions
Shortly after the meeting, Anthropic announced it was dropping a central safety policy designed to mitigate societal risks posed by its AI development. Although the company has not explicitly linked this decision to the Pentagon’s demands, the timing has fueled speculation that Anthropic yielded to pressure. Legal experts are currently debating whether the Trump administration has the authority to compel Anthropic’s cooperation through the Defense Production Act. Lawfare provides analysis of the legal complexities surrounding the potential invocation of the act.
Anthropic is currently negotiating a contract with the Pentagon, having previously indicated a willingness to allow its AI systems to be used for missile and cyber defense. However, the Pentagon is now demanding unrestricted access to Anthropic’s technology for all military purposes. This shift signals a broader ambition to leverage AI across the full spectrum of defense operations.
Reported Prior Use of Anthropic’s AI in Military Operations
A Wall Street Journal report, citing sources familiar with the matter, alleges that Anthropic’s AI model, Claude, was utilized by the Pentagon during a 2020 operation targeting Venezuelan President Nicolás Maduro. The operation, which involved a bombardment of Caracas and an attempted abduction of Maduro, resulted in 83 deaths, including civilians. The report indicates Claude was accessed through Anthropic’s partnership with Palantir, a data analytics firm with existing contracts with the U.S. government.
A Pentagon official, in a statement, claimed Hegseth’s demands are unrelated to mass surveillance or autonomous weapons. However, this assertion contradicts Hegseth’s own January address at SpaceX headquarters, where he stated, “We will not employ AI models that won’t allow you to fight wars.” He further emphasized a departure from “equitable AI, and other DEI and social justice infusions that constrain and confuse our employment of this technology.”
Risks of AI in Warfare
Experts continue to warn about the inherent dangers of deploying AI in warfare. A recent study, detailed in research published on arXiv, simulated 21 war scenarios using ChatGPT, Claude, and Gemini. The study found that one of the models escalated to nuclear weapon deployment in 95 percent of the simulations, highlighting the potential for catastrophic miscalculation.
The unfolding situation between the Pentagon and Anthropic represents a critical juncture in the development and deployment of AI. It raises fundamental questions about the balance between national security, technological innovation, and ethical responsibility. The outcome of this dispute will likely set a precedent for how the U.S. military engages with AI developers in the future.
The next few days will be crucial as Anthropic responds to the Pentagon’s ultimatum. Observers will be closely watching to see whether the company prioritizes its ethical principles or yields to the pressure of a powerful government agency. The implications of this decision will extend far beyond the realm of artificial intelligence, impacting the future of warfare and the protection of civil liberties.