Anthropic says no to Pentagon: CEO Dario Amodei refuses unrestricted AI use — ‘Threats do not change…’

AI company Anthropic on Thursday rejected the Pentagon’s latest offer to settle a standoff over conditions that the Trump administration’s Department of War has demanded as a requirement for the company to keep working with the government. Anthropic said it will not give the US Defense Department unrestricted use of its Claude AI despite threats from the Pentagon.

“These threats do not change our position: we cannot in good conscience accede to their request,” Anthropic CEO Dario Amodei said in a statement.

The confrontation effectively jeopardizes the company’s long-standing relationship with the government.

The dispute between the Pentagon and Anthropic stems from the AI startup’s refusal to remove certain guardrails, which prevent the US military from autonomously deploying targeted weapons and from conducting mass surveillance in the United States.

“To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date,” Amodei argued in the statement.

Anthropic’s statement comes just a day ahead of the deadline set by the Pentagon and Defense Secretary Pete Hegseth, with whom Amodei met earlier this week.

The Defense Department had given Anthropic an ultimatum: agree to unconditional military use of its technology, even where that use violates the company’s ethical standards, or face being forced to comply under emergency federal powers.

Why is Anthropic refusing to give in to the Pentagon’s demands?

Anthropic, which is backed by Amazon and Google, has a contract worth up to $200 million with the US Department of Defense. However, Amodei on Thursday said his company will draw an ethical line against the use of its technology for mass surveillance of US citizens and for fully autonomous weapons, even if that means losing the contract.


The department has said it will contract only with AI companies that accede to “any lawful use” and remove safeguards, Amodei said in his statement. “Using these systems for mass domestic surveillance is incompatible with democratic values.”

He said leading AI systems are not yet reliable enough to be trusted with the power to launch deadly weapons without any human intervention.

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said.

Anthropic vs Pentagon

After meeting with Anthropic earlier this week, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 pm (22:01 GMT) Friday or face being forced to comply under the Defense Production Act.

Earlier on Thursday, Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement.

“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said.

The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company’s reputation and its ability to work with the US government.

However, Anthropic has refused to budge from its position. “It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei said in his statement.

