US DoD Wants Broad AI Use: Legal vs Restricted Access Debate

Anthropic, the artificial intelligence company, filed a lawsuit against the U.S. Department of Defense and 17 other federal agencies on March 9th, seeking to overturn its designation as a supply chain risk. The lawsuit, filed in the U.S. District Court for the Northern District of California, names Defense Secretary Lloyd Austin and other high-ranking Biden administration officials as defendants, according to reporting from the Electronic Times.

The core of Anthropic’s argument centers on the unprecedented nature of the designation. The company contends that being labeled a supply chain risk—a classification typically reserved for entities posing a threat to national security—violates its First Amendment rights. Anthropic asserts that the U.S. Constitution does not permit the government to wield substantial power to penalize protected speech by American companies. This marks the first time a U.S. company has been designated a supply chain risk, a practice previously applied to entities from adversarial nations perceived as seeking to compromise U.S. information systems.

The dispute escalated after the Department of Defense, under the direction of President Trump, issued guidance prohibiting federal agencies from using AI technologies from Anthropic. The lawsuit also challenges this directive as unconstitutional. According to the Chosun Ilbo, the conflict began to surface in late February 2026, with the Trump administration’s broad directive impacting multiple tech companies.

Adding another layer of complexity, Anthropic alleges a contradiction in the government’s actions. The company claims that despite designating it a risk, the Department of Defense continued to use its services for six months. Anthropic also alleges it received a threat from the Department of Defense to invoke the Defense Production Act to compel the company to hand over its technology. This, Anthropic argues, undermines the claim that its AI technology, including its chatbot “Claude,” poses a security threat.

The conflict between the Pentagon and Anthropic mirrors a broader debate over the appropriate use of AI in military applications. The JoongAng Ilbo reported that Defense Secretary Lloyd Austin recently met with Dario Amodei, Anthropic’s CEO, and demanded the company agree to allow the use of “Claude” in legitimate military operations, threatening exclusion from future defense contracts if Anthropic refused. The company, however, has maintained its stance against the use of its AI in lethal weaponry or surveillance applications.

OpenAI, a competitor to Anthropic, recently disclosed the terms of its agreement with the Department of Defense governing the deployment of its AI models on secure networks, stating that its contract included more safeguards than Anthropic’s previous agreement with the department, according to AI Times. The U.S. Department of Defense’s Chief Technology Officer recently criticized Anthropic for restricting the use of its AI chatbot “Claude” in military systems, citing concerns about falling behind competitors and potential security risks, as reported by MyDailyByte.
