Pentagon sends ‘best and final offer’ to Anthropic on military use of Claude AI: Report

Pentagon officials on Wednesday night sent Anthropic their “best and final offer” in negotiations over the military’s use of the company’s artificial intelligence technology, CBS News reported, citing sources familiar with the discussions.

The news outlet stated it was not immediately clear whether the new proposal substantially altered the government’s prior demands, or whether Anthropic had agreed to the terms.

The talks come ahead of a deadline set by Defense Secretary Pete Hegseth, who has given the AI startup until Friday evening to grant what the Pentagon describes as full lawful use of its technology — or risk losing its business with the US military.

Threat of contract loss and “supply chain risk” label

If Anthropic does not comply, the consequences could extend beyond the loss of its Pentagon contract.

A senior Pentagon official told CBS News that the company would face “not just the loss of business but being labeled a supply chain risk.”

Citing sources, the news outlet said Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply with the military’s demands. The move could give the government expanded authority to require cooperation in the interest of national security.

Anthropic was awarded a $200 million contract by the Pentagon in July to develop artificial intelligence capabilities intended to advance US national security objectives.

Dispute centers on guardrails for AI use

At the center of the negotiations is Anthropic’s AI model, Claude.

The outlet, citing sources, also stated that Anthropic has repeatedly asked defense officials to agree to specific guardrails that would restrict Claude from conducting mass surveillance of Americans. Trump administration officials countered that such surveillance is illegal and that the Pentagon operates within the bounds of the law.

Anthropic CEO Dario Amodei has also pushed to ensure that Claude is not used to make final targeting decisions in military operations without human oversight, according to a source familiar with the negotiations, the outlet said.

The source noted that Claude, like other AI systems, is not immune to hallucinations and, without human judgment, may not be reliable enough to avoid potentially lethal mistakes, including unintended escalation or mission failure.

Deadline set for Friday

During a meeting at the Pentagon on Tuesday morning, Hegseth gave Amodei until Friday to provide a signed document granting full access to Anthropic’s AI model.
