Hegseth Threatens Anthropic Over AI Restrictions: Killer Drones & US Spying?

by James Carter, Senior News Editor

Defense Secretary Pete Hegseth is pressing artificial intelligence firm Anthropic to grant the Pentagon broad access to its AI model, Claude, even if that means deploying the system for controversial applications such as autonomous weapons and mass surveillance. The escalating dispute highlights a fundamental clash between the military’s desire for cutting-edge capabilities and the AI company’s commitment to responsible development and ethical safeguards.

The standoff centers on a $200 million contract awarded to Anthropic last year to develop AI capabilities for the Department of Defense. Claude is currently the only AI model authorized to handle classified military data, giving it a unique position within the defense landscape. However, Anthropic’s policies explicitly prohibit the use of its technology for mass surveillance and the creation of autonomous weapons – restrictions the Pentagon now seeks to remove.

According to multiple reports, Hegseth delivered an ultimatum to Anthropic CEO Dario Amodei during a meeting at the Pentagon on Tuesday. Axios reported that Hegseth demanded “unfettered” access to Claude by 5:01 pm on Friday, or face severe consequences. These consequences include potential blacklisting from future government contracts or the invocation of the Defense Production Act, which would compel Anthropic to comply with military demands.

The Pentagon’s push for unrestricted access is outlined in its recently released “AI Strategy” memo, which calls for an “AI-first warfighting force” and allows for deployment of AI technology for “any lawful use,” free from ethical constraints. This approach has drawn criticism from experts concerned about the potential for misuse and the lack of regulatory oversight.

Pentagon Threatens to Invoke Defense Production Act

The threat to invoke the Defense Production Act (DPA) is particularly noteworthy. CNN reported that Hegseth indicated he would use the DPA to force Anthropic’s compliance, a move that would effectively override the company’s objections. The DPA, historically used for pandemic-related supply chain issues, grants the government broad authority to influence businesses in the interest of national defense.

Jessica Tillipman, associate dean for government procurement law studies at George Washington University, described the threat of declaring Anthropic a “supply chain risk” as “deeply problematic,” noting that it’s typically reserved for products posing security risks, not ethical disagreements. Elizabeth Nolan Brown of Reason wrote that such a designation would effectively force any company working with the US military to sever ties with Anthropic, potentially crippling the AI firm’s business.

Amodei has publicly voiced concerns about the potential for AI to be used for harmful purposes, including “autonomous drone swarms” and mass surveillance. In a recent essay, he warned about the dangers of “AI-enabled autocracies” using the technology to repress citizens and wage war, writing that a “swarm of millions or billions of fully automated armed drones” could amount to an “unbeatable army.”

Past Operations Raise Legal Questions

The dispute with Anthropic comes amid scrutiny of the Pentagon’s recent use of AI in sensitive operations. The military reportedly used Claude during the operation to kidnap Venezuelan President Nicolás Maduro last month, an action that resulted in at least 83 deaths during bombing raids across Caracas. It remains unclear how the AI model was utilized during the operation, which has been described as legally questionable. CBS News reported that Anthropic maintains it has not discussed specific operations with the Department of War.

Senator Ruben Gallego (D-Ariz.) suggested Hegseth’s demands amount to telling Anthropic, “Let us use your AI for mass surveillance, or we’ll pull your contract.” He further noted that under the Trump administration, “corporations are punished for refusing to spy on American citizens.”

The Pentagon maintains that its orders are lawful and that it is the military’s responsibility to use the technology legally, according to the Associated Press. However, the question of legality is contested, particularly in light of reports that Hegseth previously ordered actions potentially violating international law. Meanwhile, the Pentagon is fighting to reduce the retirement pay of Senator Mark Kelly (D-Ariz.) for reminding troops of their duty to disobey illegal orders.

The situation with Anthropic underscores the growing tension between the rapid advancement of AI technology and the demand for ethical guidelines and legal frameworks to govern its use, particularly in the context of national security. The outcome of this dispute will likely set a precedent for future interactions between the Pentagon and AI developers.

As of Friday evening, Anthropic has not publicly announced its decision. The coming days will reveal whether the company will concede to the Pentagon’s demands, potentially paving the way for broader, less restricted use of AI in military operations, or whether it will stand firm on its ethical principles, risking a significant setback for its government contracts.

What are your thoughts on the ethical implications of AI in warfare? Share your perspective in the comments below.
