The U.S. administration has taken extraordinary measures against the domestic AI company Anthropic, ordering every federal agency to immediately halt use of its AI model Claude and formally designating the company a supply chain threat to national security, a label typically reserved for foreign entities such as Huawei.
The confrontation escalated on February 28, 2026, when President Donald Trump announced on Truth Social that every federal agency must “IMMEDIATELY DISCONTINUE all usage of Anthropic’s technology.” He allowed a six-month transition period for divisions such as the Department of War (DoW), whose operations had already become deeply integrated with the company’s services.

Shortly thereafter, Defense Secretary Pete Hegseth issued his own statement on X, formally labeling Anthropic a Supply-Chain Risk to National Security and declaring that “no contractor, supplier, or associate that collaborates with the United States military may engage in any commercial dealings with Anthropic.”
The dispute centers on two exemptions Anthropic insisted on carving out of otherwise lawful uses of Claude: mass domestic surveillance of Americans and fully autonomous weapons.
The Pentagon demanded complete, unrestricted access to Claude for “all lawful objectives,” but Anthropic CEO Dario Amodei refused, saying the company “cannot in good faith comply” with those requirements.
Anthropic had been the first frontier AI company to deploy models on the U.S. government’s classified networks, operating under a $200 million DoW contract since June 2024. For several months the two sides negotiated in private, but the talks ultimately collapsed. The Pentagon then issued an ultimatum: comply by 5:01 PM ET on Friday or face blacklisting.
Anthropic contends that a Pentagon contract proposal characterized as a compromise “was accompanied by legal jargon that would permit those protections to be disregarded at will.”
Amodei maintained that current frontier AI models lack the reliability required for fully autonomous weapon systems, a stance he argues protects American soldiers and civilians. He also emphasized that mass surveillance would be a fundamental violation of Americans’ civil liberties.
Anthropic has pledged to challenge any supply chain risk designation in court, arguing that the action is legally unsound under 10 USC 3252, which limits the designation’s reach to Department of War contracting rather than broader commercial activity. On that reading, individual customers, API users, and non-DoW contractors would be unaffected by the designation.
Nonetheless, the wider industry ramifications could be profound. Anthropic relies on cloud computing resources from Amazon, Microsoft, and Google — all of which hold defense contracts.
A strict reading of Hegseth’s remarks, barring any entity that “does business with the military” from working with Anthropic, could in theory jeopardize those cloud partnerships. Legal experts have warned that the designation sets a “dangerous precedent,” noting that it stretches a tool historically reserved for entities tied to foreign governments.
Trump warned Anthropic of “serious civil and criminal repercussions” if it fails to comply during the transition period. Anthropic has affirmed that it will continue to support lawful national security applications and will work to ensure a smooth transition for U.S. forces and ongoing military operations.
The company maintains that no amount of government pressure will change its stance on autonomous weapons or domestic surveillance.