WASHINGTON: United States Defense Secretary Pete Hegseth has given artificial intelligence firm Anthropic until Friday evening to provide the military with expanded access to its AI model Claude, escalating a dispute over usage safeguards and national security applications. According to reports, Hegseth conveyed during a meeting with Anthropic chief executive Dario Amodei that the Pentagon could either sever ties with the company and designate it a supply chain risk or invoke the Defense Production Act to compel compliance with military requirements.
The standoff reflects growing friction between technology developers and defense authorities over how advanced AI systems should be deployed in sensitive environments. Anthropic has maintained that while it is willing to adjust usage policies to support national security functions, it will not permit its models to be used for mass surveillance of American citizens or for weapons systems that operate without human involvement. Claude is currently the only AI model deployed within the military’s most sensitive classified systems, making the dispute particularly consequential. Officials cited concerns that restricting access could disrupt ongoing operations, even as they weighed punitive options. One defense official acknowledged the model’s capabilities, noting that its performance has made it central to certain high-priority applications.
The meeting reportedly included senior Pentagon officials, underscoring the seriousness of the matter. Hegseth is said to have emphasised that no private company would be allowed to dictate the terms under which the Department of Defense makes operational decisions. One point of contention was Anthropic’s alleged raising of concerns with its partner Palantir Technologies over the use of Claude in a Venezuela-related operation. Amodei denied that the company had objected beyond routine operational discussions and reiterated that its policy limits had not interfered with field activities. Following the meeting, Anthropic adopted a conciliatory tone, stating that discussions were conducted in good faith and that it remained committed to supporting the government’s national security mission within responsible operational boundaries.
At the centre of the debate is the potential use of the Defense Production Act, a law historically employed to prioritise contracts and expand industrial output during national emergencies, including during the Covid-19 pandemic. The Pentagon is reportedly considering whether the statute could be applied to require Anthropic to tailor its software to defense specifications. Legal experts suggest that such a move could face challenges if the company argues that its customised AI systems do not constitute commercially available products under the law. Meanwhile, alternative providers are being assessed. Elon Musk’s xAI has recently secured a contract to introduce its Grok model into classified environments, while discussions are ongoing with OpenAI and Google regarding expanded use of their systems. Google’s Gemini model is viewed by some officials as a possible substitute, though any agreement would require acceptance of broad lawful-use terms similar to those rejected by Anthropic.