Published: 27 February 2026 | The English Chronicle Desk | The English Chronicle Online
The chief executive of Anthropic, one of the world’s leading artificial intelligence firms, has publicly rejected a Pentagon demand to remove key safety safeguards from the company’s AI systems, setting up a high‑profile clash between AI industry ethics and U.S. military requirements. The move comes amid escalating tensions between the Department of Defense and private tech firms over how advanced AI models should be used in defence contexts.
Anthropic’s CEO, Dario Amodei, said in a detailed statement this week that the company “cannot in good conscience” accede to the Pentagon’s request that its flagship AI model — known as Claude — be made available without restrictions for all “lawful purposes,” including scenarios the company believes pose ethical or safety risks. At the forefront of Anthropic’s objections are two red lines: its AI must not be used for mass domestic surveillance of U.S. citizens, nor for fully autonomous weapons operations without human oversight.
The Pentagon has insisted that it needs AI models that are free from usage constraints for classified and battlefield scenarios, and officials have framed their demand as part of broader efforts to build an “AI‑first” military capability. Defense Secretary Pete Hegseth issued an ultimatum requiring Anthropic to agree to the terms by Friday or risk losing its government contract, which is valued at up to $200 million and covers use of Claude across federal agencies. He also warned that the company could be designated a supply chain risk — a classification normally used only for foreign entities tied to adversaries — and that the Defense Production Act might be invoked to compel compliance.
Anthropic’s stance marks a rare instance of an AI company resisting full Pentagon integration. Other major labs, including OpenAI, Google and Musk‑backed xAI, have reportedly accepted broader terms for military use of their technologies. Amodei emphasised that the company still wants to work with the U.S. government and supports the use of AI to strengthen national security, but he said that ethical safeguards are necessary to protect democratic values and ensure that technology is used responsibly.
The dispute reflects growing debate over the dual‑use nature of advanced AI — its potential to benefit defence, cybersecurity and intelligence efforts, and its capacity to cause harm if deployed without safeguards against misuse. Legal experts warn that the Pentagon’s approach, including threats to label a U.S. tech firm a security risk, could prompt legal challenges and policy controversy in Congress and among civil liberties advocates.
With the deadline now at hand, industry observers are closely watching whether Anthropic will be removed from Pentagon systems, reach a negotiated compromise, or face broader repercussions — including possible effects on its relationships with defence partners and its position in the global AI market.