Anthropic has said it cannot comply with a demand from the United States Department of Defense to remove safety guardrails from its artificial intelligence model, Claude, setting up a high-stakes confrontation between one of the world’s leading artificial intelligence firms and the Pentagon. The dispute centres on a 200-million-dollar contract under which Anthropic has been supplying artificial intelligence capabilities to the United States military. According to statements released on Thursday, the Department of Defense had threatened to cancel the agreement and classify the company as a supply chain risk if it did not grant unrestricted access to its model by the stipulated deadline.
Chief executive Dario Amodei stated that the company could not “in good conscience” allow the removal of safety guardrails that limit how Claude may be used. He indicated that while Anthropic remained willing to support national security objectives, it would not permit applications involving mass domestic surveillance or autonomous weapons systems capable of lethal action without human oversight. United States Defense Secretary Pete Hegseth had reportedly set a deadline for compliance, warning of punitive measures should the company refuse. The supply chain risk designation, typically reserved for foreign adversaries, could have wide-ranging financial consequences by restricting other military contractors from working with the company’s products.
The disagreement highlights broader tensions between government agencies and artificial intelligence developers over the permissible scope of advanced machine learning systems in military operations. Anthropic has positioned itself as a safety-focused firm within the rapidly expanding artificial intelligence sector, advocating for regulatory oversight and structured deployment frameworks. At the same time, the Department of Defense has increasingly awarded large contracts to technology companies to integrate artificial intelligence into defence infrastructure. In recent years, several firms, including Google and OpenAI, have secured significant defence-related agreements alongside Anthropic.
Anthropic’s Claude model had until recently been the only artificial intelligence system authorised for use within certain classified military environments, though xAI has also reached an agreement to operate in those environments. The episode underscores the intensifying debate over the ethical boundaries of artificial intelligence in warfare, particularly as autonomous systems and advanced surveillance tools become more sophisticated. Whether the Department of Defense proceeds with its warning or seeks a negotiated resolution remains to be seen, but the outcome is likely to influence how artificial intelligence companies engage with military institutions in the years ahead.