The Pentagon is asking leading U.S. defense contractors, including Boeing and Lockheed Martin, to detail their dependence on artificial intelligence services provided by Anthropic, a move that could lead to the AI firm being designated a “supply chain risk.” The assessment, reported by Reuters on Wednesday, comes amid a dispute over the military’s access to and control over Anthropic’s AI technology, known as Claude.
This potential designation, typically reserved for companies linked to adversarial nations, would be an unprecedented step against a prominent American technology company, particularly one with software integrated into classified military systems. A spokesperson for Lockheed Martin confirmed to Axios that the company had been contacted by the Defense Department regarding an examination of its exposure to and reliance on Anthropic. The Pentagon’s actions signal escalating concerns about the influence and potential vulnerabilities associated with relying on a single AI provider for critical defense functions.
The escalating tension between the Pentagon and Anthropic centers on the terms of use for Claude, a powerful AI chatbot. Defense Secretary Pete Hegseth reportedly issued an ultimatum to Anthropic this week: grant the U.S. military unrestricted access to its AI technology or face a ban from all government contracts, according to CBS News. Anthropic, however, is seeking assurances about the ethical and responsible deployment of its technology, particularly concerning autonomous weapons systems and mass surveillance.
Pentagon’s AI Expansion and Anthropic’s Concerns
The Pentagon has been aggressively pursuing the integration of AI into its operations, awarding $200 million contracts to Anthropic, OpenAI, Google, and xAI last year to develop AI capabilities that advance U.S. national security. Anthropic currently stands alone as the only AI company with its model deployed on the Pentagon’s classified networks, a partnership facilitated by data analytics firm Palantir. The Pentagon aims to leverage AI to “rapidly convert intelligence data” and enhance the effectiveness of its military personnel, as stated in a recent announcement.
However, Anthropic has repeatedly requested specific “guardrails” to govern the use of Claude, including restrictions on mass surveillance of American citizens and a prohibition on the Pentagon using the AI to make final targeting decisions in military operations without human oversight, sources told CBS News. These concerns were reportedly raised during a meeting between Anthropic CEO Dario Amodei and Secretary Hegseth on Tuesday. Anthropic, in a statement, said it “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Dispute Origins and Contractor Exposure
The current standoff reportedly originated following the U.S. military’s use of Claude during an operation to capture former Venezuelan President Nicolás Maduro in January. Anthropic maintains it “has not discussed the use of Claude for specific operations with the Department of War,” according to a company spokesperson. The Pentagon, however, disputes the core of Anthropic’s concerns, stating that legality is its responsibility as the end user, according to a Pentagon official who spoke with ABC News.
The Pentagon’s inquiry extends beyond Boeing and Lockheed Martin, with plans to assess the reliance of other major defense contractors on Anthropic’s Claude AI. This move, as reported by Livemint, aims to determine the extent to which Claude is integrated into the workflows of companies responsible for supplying critical military hardware such as fighter aircraft and missile systems. Elon Musk’s xAI has reportedly agreed to allow its Grok AI model to be used in classified settings, while other companies are said to be nearing similar agreements.
Anthropic has until Friday to respond to the Pentagon’s demands. The outcome of these negotiations will likely set a precedent for how the U.S. military engages with private AI developers and establish the boundaries for responsible AI deployment in national security applications. The situation highlights the growing tension between the push to rapidly integrate cutting-edge AI technologies and the need to address ethical concerns and maintain control over sensitive military operations.
The coming days will be crucial as the Pentagon and Anthropic attempt to resolve their differences. The potential designation of Anthropic as a supply chain risk could have significant implications for the future of AI integration within the U.S. defense industry and could prompt other AI companies to reassess their partnerships with the military.