Anthropic CEO Dario Amodei said Thursday that he "cannot in good conscience accede to [the Pentagon's] request" to give the military unrestricted access to its AI systems.

"Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei wrote in a statement. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do."

The two cases are mass surveillance of Americans and fully autonomous weapons with no human in the loop. The Pentagon believes it should be able to use Anthropic's model for all lawful purposes, and that its uses shouldn't be dictated by a private company.

"This isn't about Anthropic or the specific conditions at issue. It's about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which… https://t.co/VHbtzWujDA" Senior Official Jeremy Lewin (@UnderSecretaryF) wrote on February 27, 2026.

Amodei's statement comes less than 24 hours ahead of the Friday 5:01 p.m. deadline Defense Secretary Pete Hegseth has given Anthropic to either acquiesce to his demands or face the consequences.

An Anthropic spokesperson told TechCrunch that Amodei's statement does not mean the firm is walking away from negotiations, and that it is continuing to engage with the Department in good faith.

"The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," the spokesperson said. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.
Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months."

The Department of Defense has attempted to force Amodei's hand by threatening either to label Anthropic a supply chain risk, a designation reserved for foreign adversaries, or to invoke the Defense Production Act and effectively force the firm to do its bidding. The DPA gives the president the authority to compel companies to prioritize or expand production for national defense.

Amodei pointed out the contradiction in those two threats: "One labels us a security risk; the other labels Claude as essential to national security."

He added that it's the Department's right to choose the contractors most aligned with its vision, "but given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."

Anthropic is currently the only frontier AI lab with classified-ready systems for the military, though the DOD is reportedly getting xAI ready for the job.

"Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place," Amodei said. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."

TLDR, he's saying: "We can just part ways. There's no need to be nasty about it."

This article has been updated with a statement from an Anthropic spokesperson.

Rebecca Bellan is a senior reporter at TechCrunch, where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.