Microsoft is backing Anthropic in its legal battle with the Pentagon, urging a federal judge to temporarily block the Defense Department’s designation of the artificial intelligence company as a supply chain risk. The tech giant’s motion for leave to file an amicus brief, submitted Monday, argues that halting the designation would allow for a more orderly transition and prevent disruption to the military’s use of AI, according to reports from CNBC and Business Insider.
The dispute centers on Anthropic’s refusal to grant the Pentagon unrestricted authority over how its AI models are used. Pentagon officials demanded that the company agree to let the military use its models in “all lawful use cases,” a condition Anthropic resisted. Anthropic instead sought contractual language prohibiting the use of its AI in autonomous weapons systems and for mass domestic surveillance, a position Defense Secretary Pete Hegseth publicly criticized on X, writing that Anthropic’s “true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.”
Microsoft’s filing suggests a concern that the Pentagon’s designation could force immediate alterations to product configurations and contracts. A Microsoft spokesperson told Reuters the company believes all parties share common goals and that “we need time and a process to find common ground.”
The legal clash follows a February 27th White House directive instructing federal agencies to cease using Anthropic’s AI products. Anthropic initiated the lawsuit Monday seeking to prevent the supply chain risk designation, which would effectively bar companies working with the military from also doing business with Anthropic.
Microsoft’s deep financial ties to Anthropic underscore the stakes of the dispute. The company has become one of Anthropic’s top customers and is on track to spend approximately $500 million annually integrating Anthropic’s AI into its products. In November, Microsoft announced a partnership with Anthropic that included a $30 billion commitment by Anthropic to purchase Azure compute capacity and a potential Microsoft investment of up to $5 billion in the AI firm. The partnership aims to scale Anthropic’s Claude models on Microsoft Azure.
The Pentagon’s position reflects a desire to retain control over how AI technologies are applied, whereas Anthropic is seeking to establish ethical boundaries on its models’ deployment. The outcome of the legal battle will likely set a precedent for how the U.S. government regulates the use of AI by both the military and private companies.