Defense Secretary Pete Hegseth has reportedly given Anthropic chief executive Dario Amodei until Friday (27 February) evening to grant the US military unfettered access to its flagship AI model, Claude, or face severe consequences, according to a report by Axios.
The ultimatum, delivered during what officials familiar with the matter described to Axios as a tense meeting on Tuesday (24 February), underscores a widening rift over the boundaries of AI safeguards in national security operations. At stake is the Pentagon’s continued access to Claude — currently the only AI model embedded in its most sensitive classified systems.
Hegseth’s Friday deadline introduces a critical inflection point in the relationship between Silicon Valley and the US national security state. The Pentagon appears determined to assert operational supremacy over AI deployment, while Anthropic continues to defend guardrails designed to limit certain forms of military use.
Pentagon pressures Anthropic over AI safeguards
According to the Axios report, Hegseth warned that the Department of Defense could either sever ties with Anthropic and formally designate the company a “supply chain risk,” or invoke the Defense Production Act (DPA) to compel the firm to tailor its model to military requirements.
“The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good,” a Defense official told Axios ahead of the meeting.
The warning marks one of the most direct confrontations to date between Washington and a private AI developer over the permissible scope of military AI use. While Anthropic has signaled a willingness to adjust its usage policies for defense applications, it has refused to allow Claude to be deployed for the mass surveillance of Americans or for weapons systems operating without human involvement.
Classified systems and operational dependency on Claude AI
Claude’s integration into classified Pentagon systems has created a strategic dependency that complicates any threat to terminate the relationship. The model is reportedly used in both highly sensitive operational contexts and a wide range of bureaucratic military functions.
One source familiar with the discussions indicated that, at present, Claude appears to lead competing models in several applications relevant to military planning, including offensive cyber capabilities.
The Pentagon is said to be accelerating discussions with OpenAI and Google to transition their models — already used in unclassified settings — into classified environments. Google’s Gemini has emerged as a potential alternative, though such an arrangement would require Google to permit the Pentagon to use its system for “all lawful purposes” — terms that Anthropic has declined to accept.
Elon Musk’s xAI has recently secured a contract to introduce Grok into classified settings, but it remains unclear whether that system could fully replace Claude’s current functions.
The Defense Production Act: A rare adversarial tool
The Defense Production Act grants the president authority to compel private companies to accept and prioritize contracts deemed necessary for national defense. It was notably used during the Covid-19 pandemic to expand production of vaccines and ventilators.
However, deploying the law in a coercive manner against a technology company over AI safeguards would represent an unusual and adversarial application. A senior Defense official suggested the objective would be to compel Anthropic to adapt its model to Pentagon requirements without additional guardrails.
Anthropic could challenge such action in court, arguing that its product constitutes specialized, custom-built software for sensitive government use rather than a commercially available good subject to DPA compulsion, according to one defense consultant cited in the report.
Dispute over Venezuela operation deepens friction
Tensions were further inflamed by a dispute surrounding Claude’s alleged use during a Venezuela operation conducted through Anthropic’s partnership with Palantir.
Hegseth reportedly referenced the Pentagon’s claim that Anthropic had raised concerns to Palantir regarding the model’s deployment during the Maduro raid.
Amodei denied the allegation, saying that Anthropic had neither raised any such concerns nor broached the topic with Palantir beyond standard operating conversations.
He reiterated that the company’s red lines have never prevented the Pentagon from doing its work or posed an issue for anyone operating in the field.
Sources differed in their characterization of Tuesday’s meeting. A senior Defense official described it as “not warm and fuzzy at all.” Another source said it remained “cordial” with no raised voices and that Hegseth praised Claude directly to Amodei.
Hegseth made clear that he would not allow any private company to dictate the terms under which the Pentagon makes operational decisions or objects to individual use cases.
Supply chain risk designation looms
Should the Pentagon sever its contract and label Anthropic a supply chain risk, the repercussions would extend beyond the company itself. Other defense contractors would likely be required to certify that Claude is not embedded within their workflows — a complex undertaking given the model’s current integration across systems.
Anthropic maintained a conciliatory tone after the meeting.
“During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service,” an Anthropic spokesperson said.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”