Pentagon vs Anthropic: Why is Trump’s War Department fighting with the AI company over Claude usage? Explained

The Pentagon and Anthropic are locked in an unusual controversy after Defense Secretary Pete Hegseth warned the AI company that it will be removed from his agency’s supply chain if it does not meet certain demands.

The Pentagon has reportedly already taken the first step toward blacklisting Anthropic, tapping its defense contractors to assess their reliance on the AI company.

The Department of Defense has been engaged in a months-long dispute with Anthropic over its use of Claude AI; Reuters reported that the company has no intention of easing its usage restrictions for military purposes.

A meeting between Hegseth and Anthropic CEO Dario Amodei has already taken place, and talks are continuing.

During the meeting, Hegseth said that if Anthropic did not comply, the Pentagon would take action against it, with options including labeling it a supply-chain risk or invoking a law that would force Anthropic to change its rules, as per a report by Reuters.


“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Anthropic said in a statement after the meeting.

Anthropic has until 5 pm Friday to respond, according to the report.

But why are Anthropic and the Pentagon fighting? Here is what you need to know.

Why the US military is fighting Anthropic over how to use Claude AI

At the center of the issue is the question of who controls how Claude AI is used — the Pentagon or Anthropic.

According to a report by CBS, the standoff started when the US military used Anthropic’s Claude AI during the operation to capture former Venezuelan President Nicolás Maduro last month.

A spokesperson for Anthropic said in a statement that the AI startup “has not discussed the use of Claude for specific operations with the Department of War.”

As per CBS, citing people with knowledge of the matter, Anthropic has repeatedly asked the Pentagon to uphold certain guardrails, including a restriction on using Claude for mass surveillance of US citizens.


The Pentagon has pushed big AI companies, including Anthropic and OpenAI, to make their AI tools available on classified networks without many of the standard restrictions the companies apply to users, as per a Reuters report.

However, Anthropic does not want the US military to use Claude “for final targeting decisions in military operations without any human involvement,” as per CBS.

Earlier this month, Mrinank Sharma, a senior safety researcher, said he was leaving Anthropic. “I continuously find myself reckoning with our situation,” he wrote in a letter to colleagues that he posted to X.

“The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

The ultimatum from the Pentagon marks an escalation in a growing dispute between the Defense Department and the AI startup over the company’s insistence on guardrails for the use of its Claude AI tool. If carried out, the Pentagon’s threat would put at risk up to $200 million in work that Anthropic had agreed to do for the military.

Key Takeaways

  • The Pentagon is pressuring AI firms for unrestricted access to their tools for military use.
  • Anthropic prioritizes ethical guidelines in AI applications, particularly concerning military operations.
  • The outcome of this dispute could have significant implications for AI usage in national security and military ethics.

