A top Pentagon official said Anthropic's dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's future Golden Dome missile defense program, which aims to put U.S. weapons in space.
U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues greater autonomy for swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same.
“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that," Michael said in a podcast aired Friday. "I need someone who’s not going to wig out in the middle.”
The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems.
Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors.
Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war.
Anthropic said it sought only to restrict its technology from being used for two purposes: mass surveillance of Americans or fully autonomous weapons.
Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the “All-In" podcast.
A fourth co-host, former PayPal executive David Sacks, is now Trump's AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year.
As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he “has a God-complex” and “wants nothing more than to try to personally control" the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI.
Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed.
“This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome,” Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile.
A human anti-missile operator “may not be able to discriminate with their own eyes what they’re going after,” but an autonomous counterattack would be a low risk “because it’s in space and you’re just trying to hit something that’s trying to get you.”
In another scenario, he said, “who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?”
In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Michael, the defense undersecretary for research and engineering, was sworn in last May and said he took over the military's “AI portfolio” in August. That's when he said he began scrutinizing Anthropic's contracts — some of which dated from President Joe Biden's Democratic administration. Michael said he questioned Anthropic over terms of use that he deemed too restrictive.
“I need to have the terms of service be rational relative to our mission set,” he said. “So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They’re like, ‘OK, we’ll give you an exception for that.’ Well, how about this drone swarm? ‘We’ll give an exception for that.’ And I was like, exceptions doesn’t work. I can’t predict for the next 20 years what (are) all the things we might use AI for.”
That's when the Pentagon began insisting Anthropic and other AI companies allow for “all lawful use” of their technology, Michael said.
Anthropic resisted that change, arguing that today's leading AI systems "are simply not reliable enough to power fully autonomous weapons."
Its competitors — Google, OpenAI and Elon Musk's xAI — agreed to the Pentagon's terms, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point for Anthropic was not allowing any mass surveillance of Americans.
“They didn’t want us to bulk-collect public information on people using their AI system,” Michael said, describing the negotiations as “interminable.”
Anthropic has disputed parts of Michael's version of the talks and emphasized that the protections it sought were narrow and not based on existing uses of Claude. The next stage of the dispute will likely happen in court.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.