Over the last week, tensions between the Pentagon and artificial intelligence giant Anthropic have reached a boiling point.
Anthropic, the creator of the Claude chatbot system and a frontier AI company with a defense contract worth up to $200 million, has built its brand around the promotion of AI safety, touting red lines the company says it won’t cross.
Now, the Pentagon appears to be pushing those boundaries.
Signs of a possible rift between Anthropic and the Defense Department, now rebranded the Department of War, intensified after The Wall Street Journal and Axios reported that Anthropic products were used in the operation to capture Venezuelan President Nicolás Maduro.
It is unclear how Anthropic’s Claude was used.
Anthropic has not raised or found any violations of its policies in the wake of the Maduro operation, according to two people familiar with the matter, who asked to remain anonymous in order to discuss sensitive topics. They said that the company has high visibility into how its AI tool Claude is used, such as in data analysis operations.
Anthropic was the first AI company allowed to offer services on classified networks, via Palantir, which partnered with it in 2024. Palantir said in an announcement of the partnership that Claude could be used “to support government operations such as processing vast amounts of complex data rapidly” and “helping U.S. officials to make more informed decisions in time-sensitive situations.”
Palantir is one of the military’s favored data and software contractors, for example collecting data from space sensors to provide better strike targeting for soldiers. It has also attracted scrutiny for its work for the Trump administration and law enforcement agencies.
Though Anthropic has maintained that it does not and will not allow its AI systems to be used directly in lethal autonomous weapons or for domestic surveillance, the reported use of its technology in the Venezuela raid, through the contract with Palantir, allegedly prompted concerns from an Anthropic employee.
Semafor reported Tuesday that, during a routine meeting between Anthropic and Palantir, a Palantir executive was worried that an Anthropic employee did not seem to agree with how Anthropic’s systems might have been used in the operation, leading to “a rupture in Anthropic’s relationship with the Pentagon.”
A senior Pentagon official told NBC News that “a senior executive from Anthropic communicated with a senior Palantir executive, inquiring as to whether their software was used for the Maduro raid.”
According to the Pentagon official, the Palantir executive “was alarmed that the question was raised in such a way to imply that Anthropic might disapprove of their software being used during that raid.”
Citing the classified nature of military operations, an Anthropic spokesperson would neither confirm nor deny that its Claude chatbot systems had been used in the Maduro operation: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” the spokesperson told NBC News in a statement.
The spokesperson pushed back on the idea that the incident had caused notable fallout, telling NBC News the company had not held out-of-the-ordinary discussions about Claude usage with partners or shared any mission-related disagreements with the military.
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” the spokesperson said. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
Palantir did not reply to a request for comment.
The core tension between Anthropic and the Defense Department appears to be rooted in a broader clash over the military’s future use of Anthropic’s systems. The Defense Department has recently emphasized its desire to be able to use all available AI systems for any purpose allowed by law, while Anthropic says it wants to maintain its own guardrails.
Chief Pentagon spokesman Sean Parnell told NBC News that “The Department of War’s relationship with Anthropic is being reviewed.”
“Our nation requires that our partners be willing to help our warfighters win in any fight,” he said in a statement.
“Ultimately, this is about our troops and the safety of the American people.”

On Tuesday, Undersecretary of Defense Emil Michael said that the department’s negotiations with Anthropic had hit a snag over a disagreement about potential uses of its systems, according to CNBC.
In early January, Defense Secretary Pete Hegseth released a new AI strategy document that called for any contracts with AI companies to eliminate company-specific guardrails or constraints on how the military can use companies’ AI systems, newly allowing “any lawful use” of AI for Defense Department purposes.
The document called for defense officials to incorporate this language into any Defense Department AI contract within 180 days, which would affect Anthropic’s dealings with the military.
While Anthropic has broadly supported the use of its services for national security purposes, it has insisted that its systems not be used for domestic surveillance or in fully autonomous weapons.
The Defense Department has balked at Anthropic’s insistence on these two issues and applied increasing pressure to the company.
“Claude is used for a wide variety of intelligence-related use cases across the government, including the Department of War, in line with our Usage Policy,” the Anthropic spokesperson said. “We are having productive conversations, in good faith, with the Department of War on how to continue that work and get these complex issues right.”
Relative to other AI companies, Anthropic has prioritized enterprise and national security applications of its AI systems. In August 2025, Anthropic formed a national security and public sector advisory council composed of former senior defense and intelligence officials and last week added Chris Liddell, President Donald Trump’s former deputy chief of staff, to its board of directors.
Anthropic has partnered with Palantir since late 2024 to provide U.S. defense and intelligence agencies with access to various Claude systems. At the time, Anthropic’s head of sales and partnerships, Kate Earle Jensen, said the company was “proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations.”
Anthropic, along with other leading American AI companies such as OpenAI and Google, signed individual two-year contracts with the Defense Department in July 2025, each worth up to $200 million to help “prototype frontier AI capabilities that advance U.S. national security.”
“Anthropic is committed to using frontier AI in support of US national security,” the Anthropic spokesperson told NBC News in a statement. “We were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.”
Anthropic CEO Dario Amodei has routinely emphasized Anthropic’s commitment to using its AI services for national security purposes. In an essay published in late January, Amodei wrote that “democracies have a legitimate interest in some AI-powered military and geopolitical tools,” and that “we should arm democracies with AI, but we should do so carefully and within limits.”
Michael Horowitz, who led AI and emerging technology policy in the Pentagon and is now a professor of political science at the University of Pennsylvania, said that any concerns about the use of Anthropic systems for active engagement in lethal autonomous weapons would likely be irrelevant to the current negotiations, given the type of systems Anthropic is developing.
“I would be surprised if Anthropic models were the right ones to use for lethal autonomous weapon systems right now, since the algorithms for that will be more bespoke than Claude’s,” Horowitz told NBC News.
“My sense is that Anthropic wants to increase the depth and scope of their work with the Pentagon. Based on what we know, this sounds like a dispute more over theoretical possibilities than real-world use cases on the table.”