March 17, 2026 — The U.S. Department of Defense is actively engineering its own large language models to replace technology from Anthropic, according to a report from Bloomberg. This development follows the collapse of a major contract between the two parties over ethical use restrictions.
Contract Dispute Leads to Separation
The Pentagon’s move comes after the breakdown of Anthropic’s $200 million contract with the Department of Defense. Negotiations failed in recent weeks when the two sides could not agree on terms governing military access to Anthropic’s artificial intelligence systems.
Anthropic sought contractual provisions that would prohibit the Pentagon from using its AI for mass surveillance of American citizens or for deploying autonomous weapons systems that could fire without human intervention. Defense officials declined to accept these limitations.
“The Department is actively pursuing multiple LLMs into the appropriate government-owned environments,” said Cameron Stanley, the Pentagon’s chief digital and AI officer, in the Bloomberg interview. “Engineering work has begun on these LLMs, and we expect to have them available for operational use very soon.”
Strategic Shift Toward Government-Owned AI
The development of in-house alternatives represents a significant strategic shift for military AI procurement. Rather than relying exclusively on commercial providers, the Defense Department is now building proprietary systems it can control directly.
This approach follows agreements the Pentagon has already established with other AI companies. OpenAI reached its own arrangement with the department after the Anthropic negotiations collapsed, and the Department of Defense also signed an agreement with Elon Musk’s xAI to use its Grok system in classified environments.
Industry analysts note that controlling the underlying AI infrastructure provides the military with greater operational security and flexibility. Government-owned models could be more easily customized for specific defense applications and integrated with classified systems.
Anthropic Designated as Supply Chain Risk
The separation has grown more contentious with the Pentagon’s formal designation of Anthropic as a supply chain risk. Defense Secretary Pete Hegseth applied this classification, which is typically reserved for foreign adversaries.
This designation effectively bars companies that work with the Department of Defense from also working with Anthropic. The AI firm is challenging the classification in federal court, arguing that it lacks justification and harms its business.
The supply chain risk designation represents an unusually severe measure against a domestic technology company. It suggests defense officials view complete separation from Anthropic as a strategic necessity rather than merely a contractual disagreement.
What Comes Next for Defense AI
The Pentagon’s accelerated development of proprietary language models indicates it plans to proceed without Anthropic’s participation. While some reports suggested potential reconciliation remained possible, the current engineering efforts point toward a permanent separation.
Military technology experts observe that the defense establishment appears committed to developing AI capabilities that operate without external ethical constraints. The ability to deploy these systems across various applications, including potentially controversial ones, seems to be a priority for defense planners.
Anthropic’s court challenge to its supply chain risk designation continues, with legal filings available through federal court records. Meanwhile, the Department of Defense maintains its position on developing independent AI capabilities, as outlined in its official AI strategy documents.
This article was produced with AI assistance and reviewed by our editorial team for accuracy and quality.