The Pentagon and artificial intelligence startup Anthropic are locked in a standoff over how the military can use Claude AI technology. The disagreement centers on a contract worth up to $200 million awarded last summer.
Anthropic has raised concerns about the Pentagon using its AI tools for domestic surveillance and autonomous weapons systems. The company wants safeguards that require human oversight for weapons targeting. Pentagon officials have pushed back against these restrictions.
The Defense Department argues it should be able to deploy commercial AI technology however it sees fit, provided it complies with U.S. law. This position aligns with a January 9 department memo on AI strategy. Officials say company usage policies should not constrain military operations.
The contract was meant to integrate Anthropic’s Claude models into defense operations. Tensions began almost immediately after the deal was signed. Anthropic’s terms and conditions block Claude from being used for domestic surveillance activities.
This restriction limits how agencies such as Immigration and Customs Enforcement and the FBI could use the technology. Some administration officials are frustrated that Anthropic is dictating usage terms for otherwise legal activities.
Defense Secretary Pete Hegseth addressed the issue at an event announcing the Pentagon’s work with Elon Musk’s xAI. He said the department would not use AI models that limit military capabilities. People familiar with the matter confirmed he was referring to Anthropic.
Anthropic CEO Dario Amodei has publicly outlined concerns about AI use in mass surveillance and fully autonomous weapons. He wrote in a recent essay that AI should support national defense except in ways that make the U.S. more like autocratic adversaries.
The Pentagon likely needs Anthropic’s cooperation to move forward with the contract. Claude models are trained to avoid actions that might cause harm. Anthropic staff would need to modify the AI for Pentagon use.
Anthropic is one of several major AI developers with Pentagon contracts. Google, OpenAI, and xAI also have agreements with the Defense Department. The company has spent resources courting national security business and shaping government AI policy.
The San Francisco startup is preparing for a future public offering. It is currently in talks to raise billions at a $350 billion valuation. The company’s latest models and coding tools have gained popularity in recent weeks.
The dispute puts Anthropic’s Pentagon business at risk. One person familiar with the matter said the contract could be canceled. An Anthropic spokesman said Claude is used extensively for national security missions and that discussions with the Department of War are productive.
Amodei has criticized some Trump administration policies. He spoke out against allowing exports of Nvidia AI chips to China, calling it a national security risk. He also condemned the fatal shootings of citizens protesting immigration enforcement in Minneapolis.
The CEO has clashed with White House AI czar David Sacks over AI regulation. Sacks has accused Anthropic of being “AI doomers” focused on slowing competitors. Anthropic denies these claims and says it has a good relationship with the administration overall.
The company stated it is committed to protecting America’s lead in AI and helping counter foreign threats. It wants to give warfighters access to advanced AI capabilities. The outcome of negotiations will determine whether Anthropic continues its Pentagon work.