
AI firm Anthropic, known for its emphasis on "helpful and harmless" artificial intelligence, has come under scrutiny following its significant engagement with U.S. defense agencies, including a $200 million prototype agreement with the Department of Defense (DOD). Critics, such as journalist Sam Biddle, have highlighted a perceived contradiction between the company's public commitment to minimizing harm and the use of its Claude models by military entities. "More than most other AI firms, Anthropic spends a lot of time talking about the measures it takes to minimize harm, reduce harm, tuning its models to 'make them more helpful and harmless.' Claude's use by a major weapons company implicated in war crimes seems at odds with this," Biddle stated in a recent tweet.
In July 2025, the DOD's Chief Digital and Artificial Intelligence Office (CDAO) awarded Anthropic a two-year prototype agreement with a $200 million ceiling to advance U.S. national security through frontier AI capabilities. This builds on earlier partnerships, including a November 2024 announcement with data analytics firm Palantir and Amazon Web Services (AWS) to make Claude models available to U.S. intelligence and defense agencies. Those integrations run Claude within Palantir's Impact Level 6 (IL6) accredited environment, hosted on AWS, the accreditation required for handling classified national security data.
A key aspect of Anthropic's defense strategy is the introduction of "Claude Gov" models, built specifically for national security customers. These models are designed to handle classified materials and to "refuse less" when engaging with such sensitive information, a notable departure from the stricter guardrails in consumer-facing versions of Claude. Thiyagu Ramasamy, Anthropic's Head of Public Sector, emphasized the company's commitment to supporting U.S. national security and to deepening collaboration on critical mission challenges.
Despite Anthropic's assurances of rigorous safety testing for Claude Gov models, the move has drawn criticism from AI ethics advocates. Former Google co-head of AI ethics Timnit Gebru remarked on the perceived irony, while others have questioned how "safety" is defined when models are engineered to be less restrictive with classified military data. Anthropic's usage policy includes contractual exceptions for government entities, allowing for "legally authorized foreign intelligence analysis" and "providing warning in advance of potential military activities," while still prohibiting weapon design and malicious cyber operations.
Anthropic is not alone in this pivot; other major AI developers, including OpenAI and Meta, have also forged partnerships with the U.S. defense sector, signaling a broader industry trend. This growing intersection of advanced AI and national security underscores an evolving landscape in which AI companies weigh ethical commitments against the strategic and financial opportunities of government contracts.