Hours after President Trump ordered federal agencies to cease using artificial intelligence technology developed by Anthropic, rival firm OpenAI announced a deal with the Department of Defense to provide its AI technology for classified networks, a move that has ignited debate over the military’s reliance on private sector AI development.
The rapid sequence of events began Friday when Trump, via a post on Truth Social, directed all federal agencies to stop using Anthropic’s products, declaring the company had made a “DISASTROUS MISTAKE” by attempting to impose restrictions on how its AI tools could be used. The administration simultaneously moved to designate Anthropic a national security risk, capping a dispute over whether the company could prohibit its technology from being used in mass surveillance or autonomous weapon systems as part of a potential $200 million contract.
OpenAI’s subsequent agreement with the Pentagon, announced February 28, allows the military to utilize its technologies in classified settings. CEO Sam Altman acknowledged the negotiations were “definitely rushed,” but emphasized the company had not conceded to unrestricted access. OpenAI stated its agreement includes safeguards against the use of its technology for autonomous weapons and mass domestic surveillance.
The contrasting outcomes for the two companies reflect differing approaches to engagement with the government, according to Altman. Speaking at the Morgan Stanley Technology, Media & Telecom Conference on Thursday, Altman contrasted Anthropic's negotiating posture with OpenAI's reliance on existing law. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” Altman said. He further asserted that “the government is supposed to be more powerful than private companies,” a sentiment that drew criticism from Anthropic CEO Dario Amodei, who reportedly accused Altman of offering “dictator-style praise to Trump” in a memo to employees.
The deal has prompted scrutiny, with some observers questioning the timing of OpenAI’s announcement so soon after Anthropic’s blacklisting and characterizing it as “opportunistic and sloppy.” OpenAI has defended its decision as a necessary step to ensure the U.S. maintains a technological edge.
The situation highlights a growing tension between AI developers’ ethical concerns and the Pentagon’s desire for advanced capabilities, particularly as the U.S. military engages in strikes on Iran. A recent report from the Technology Review suggests OpenAI settled for softer legal boundaries than Anthropic, which pursued a more stringent moral approach. The report raises concerns about whether OpenAI can effectively implement the safety precautions it promises, given the Pentagon’s accelerated AI strategy.
Anthropic, founded by individuals who left OpenAI over safety issues, had previously been the sole large commercial AI provider with models approved for Pentagon use. The company’s rejection of the Pentagon’s terms centered on its desire to prevent the use of its AI in potentially harmful applications. OpenAI’s willingness to compromise, while securing a lucrative contract, has raised questions about the future of responsible AI development within the defense sector.
As of March 5, OpenAI had not responded to requests for additional information regarding the specifics of its agreement with the Pentagon, and the Department of Defense has not publicly detailed the terms of the contract or its plans for deploying OpenAI’s technology.