The AI system used by millions for writing and research was quietly involved in a major military operation in the Middle East. During the 2026 war with Iran, the U.S. military used Claude AI to analyze intelligence, process satellite data, and assist in identifying potential targets during the conflict.
WHO WAS INVOLVED AND WHAT HAPPENED ON FEBRUARY 28, 2026
Who Used Claude and How
On the morning of February 28, 2026, U.S. Air Force jets were already en route to targets in Iran when President Donald Trump posted on Truth Social, ordering all federal agencies to immediately stop using Anthropic's AI products, including Claude. By then, it was too late to matter operationally. According to multiple sources familiar with the matter, as reported by the Wall Street Journal, Axios, and CBS News, U.S. Central Command had already deployed Claude to conduct intelligence assessments, identify targets, and simulate battlefield scenarios in real time as part of the joint U.S.-Israeli military campaign against Iran.
The Washington Post reported on March 4, 2026, that in order to strike 1,000 targets within the first 24 hours of the operation, the U.S. military relied on the most advanced artificial intelligence it had ever used in combat. That tool was Claude. According to WION News and reporting confirmed by multiple outlets, Claude processed vast amounts of data, including satellite imagery, communications intercepts, and signals intelligence, to generate threat evaluations and situational summaries for military planners at CENTCOM in the Middle East.
Claude was not functioning as an autonomous weapon. It was functioning as an analytical and decision-support tool, helping human operators process complex battlefield data at a speed and scale that would have been impossible without AI assistance.
Who Made the Decision to Ban Anthropic
President Donald Trump made the decision to designate Anthropic a national security concern on February 28, 2026. In his Truth Social post, Trump described Anthropic as a "Radical Left AI company" and directed every federal agency to immediately cease use of its products. Defense Secretary Pete Hegseth followed with his own statement on X, formally designating Anthropic a "supply chain risk to national security," a classification normally reserved for companies connected to foreign adversaries. Hegseth added that no contractor, supplier, or partner that does business with the U.S. military could conduct any commercial activity with Anthropic, effective immediately.
The decision came after Anthropic's CEO Dario Amodei announced on Thursday, February 26, 2026, that the company would not comply with the Pentagon's demands. Amodei stated publicly that Anthropic could not "in good conscience" accede to the request, citing concerns about democratic values and the technical limitations of current AI systems.
Who Is Anthropic and Who Built Claude
Anthropic is an AI safety company founded in 2021 by former members of OpenAI, including Dario Amodei and his sister Daniela Amodei, who serve as CEO and President respectively. The company is headquartered in San Francisco and is reportedly on track to generate at least $18 billion in revenue in 2026. Anthropic developed Claude as its flagship large language model, emphasizing what the company calls "Constitutional AI," a method of embedding ethical principles and democratic values directly into how the model reasons and responds.
Claude became the first AI model deployed on the Pentagon's classified networks. Anthropic began meaningful engagement with U.S. defense and intelligence agencies in late 2024. In November 2024, it partnered with Palantir and Amazon Web Services to supply Claude to defense and intelligence systems, including classified environments. In June 2025, Anthropic introduced Claude Gov, a version tailored specifically for government and national security workflows.
WHAT CLAUDE ACTUALLY DID IN THE IRAN STRIKES
What Intelligence Assessment Means in Practice
When military officials say Claude was used for "intelligence assessment," the phrase covers a wide range of analytical tasks that previously required teams of human analysts working over extended periods. In the Iran operation, Claude analyzed data feeds to evaluate threats, assess enemy positions, and build situational awareness for commanders at CENTCOM. According to WION's reporting, the model processed intercepts, satellite imagery, and signals intelligence simultaneously, generating summaries and threat evaluations that allowed military planners to prioritize targets at unprecedented speed.
In simple terms, Claude was doing what a very large, very fast, expert human analyst team would do: reading everything, finding patterns, ranking threats, and presenting conclusions. The difference is that Claude could do this across thousands of data points at once, in real time, during an active operation.
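The actual integration is classified and the reporting does not describe it, but the general shape of such a triage pipeline can be sketched. Below is a minimal, self-contained Python illustration of the read-rank-summarize loop described above; every field and function is hypothetical, and `call_model` is a stand-in for whatever model interface such a system would actually use.

```python
from dataclasses import dataclass

@dataclass
class IntelReport:
    source: str    # e.g. "satellite", "sigint", "intercept"
    region: str
    text: str
    urgency: float # 0.0-1.0, scored upstream by earlier processing

def call_model(prompt: str) -> str:
    """Stand-in for a large language model call."""
    return f"[model summary of {len(prompt)} chars of fused reporting]"

def triage(reports: list[IntelReport], top_k: int = 3) -> list[str]:
    # Rank incoming reports so the highest-urgency items surface first,
    # then ask the model to compress each one for a human reader.
    ranked = sorted(reports, key=lambda r: r.urgency, reverse=True)
    summaries = []
    for report in ranked[:top_k]:
        prompt = (f"Summarize this {report.source} report on {report.region} "
                  f"and flag anything time-sensitive:\n{report.text}")
        summaries.append(call_model(prompt))
    return summaries

feed = [
    IntelReport("satellite", "sector A", "new vehicle activity at site", 0.9),
    IntelReport("sigint", "sector B", "routine traffic", 0.2),
    IntelReport("intercept", "sector A", "convoy departure mentioned", 0.7),
]
for summary in triage(feed):
    print(summary)
```

The point of the pattern is that the model never acts on the feed; it only compresses and orders it for the humans who do.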
What Target Identification Means in Practice
Target identification is exactly what it sounds like. Claude was used to help identify which Iranian sites and assets met the criteria for inclusion on the strike list. According to reporting by the Wall Street Journal and confirmed by Futurism on March 2, 2026, Claude's tasks specifically included identifying military targets and helping military leaders plan attacks. This does not mean Claude was firing weapons or making final targeting decisions independently. Human commanders retained final authority. Claude was generating and prioritizing target recommendations for human review.
This distinction matters enormously, and it is precisely where the ethical dispute with the Pentagon focused. Anthropic's position was that Claude could be used to assist human decision-makers, but not to replace them. The Pentagon wanted the freedom to use Claude for any lawful purpose, including scenarios in which the degree of human oversight could be reduced.
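That division of labor, where the model recommends and a person decides, can be made concrete with a short sketch. This illustrates the control pattern at issue in the dispute, not any real targeting system; all names and fields here are invented.

```python
from dataclasses import dataclass

@dataclass
class TargetRecommendation:
    site_id: str
    rationale: str
    model_priority: float  # the model's ranking score, advisory only

def human_review(rec: TargetRecommendation) -> bool:
    # The decision point the dispute turned on: a person, not the model,
    # makes the final call. Here it is a console prompt; in a real command
    # chain it would be a formal approval workflow.
    answer = input(f"Approve {rec.site_id}? ({rec.rationale}) [y/N] ")
    return answer.strip().lower() == "y"

def approved_strike_list(recs: list[TargetRecommendation]) -> list[str]:
    # Model output orders the queue; nothing proceeds without sign-off.
    ordered = sorted(recs, key=lambda r: r.model_priority, reverse=True)
    return [r.site_id for r in ordered if human_review(r)]
```

In these terms, the contract fight was over whether a step like `human_review` stays mandatory or becomes optional at the government's discretion.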
What Battle Simulation Means in Practice
In addition to intelligence assessment and target identification, Claude was used to simulate battle scenarios. This means running predictive models of how a strike might unfold, how an adversary might respond, what second-order consequences might follow, and how different strike sequences might compare in terms of effectiveness and risk. This is a function that has historically required extensive human planning teams and significant lead time. AI-enabled simulation compresses that timeline dramatically and allows commanders to test many more scenarios before committing to a course of action.
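A toy Monte Carlo comparison shows why AI compresses this work. The probabilities below are invented, and real planning tools model far more than two variables; the sketch only illustrates how candidate strike sequences can be scored against each other in seconds rather than weeks.

```python
import random
import statistics

def simulate_once(success_prob: float, detection_prob: float) -> float:
    # One hypothetical run: score effectiveness, then penalize early
    # detection, which stands in for an adversary response.
    score = 1.0 if random.random() < success_prob else 0.0
    if random.random() < detection_prob:
        score -= 0.5
    return score

def compare_plans(plans: dict[str, tuple[float, float]], runs: int = 10_000) -> None:
    # Run each candidate sequence many times and compare average outcomes.
    for name, (success_prob, detection_prob) in plans.items():
        outcomes = [simulate_once(success_prob, detection_prob) for _ in range(runs)]
        print(f"{name}: mean outcome {statistics.mean(outcomes):.3f}")

compare_plans({
    "sequence A (fast, louder)": (0.85, 0.40),
    "sequence B (slower, quieter)": (0.75, 0.15),
})
```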
WHO OPPOSED WHOM AND WHY THE CONTRACT DISPUTE EXPLODED
What the Pentagon Demanded
The dispute between the Pentagon and Anthropic did not erupt overnight. It had been building since January 2026, when Defense Secretary Pete Hegseth issued an AI Strategy Memorandum directing all Department of Defense AI contracts to adopt standard "any lawful use" language within 180 days. This directly contradicted Anthropic's existing contract, which included specific restrictions on how Claude could be used.
The Pentagon set a deadline of 5:00 PM on February 27, 2026, for Anthropic to agree to the new terms. The core demand was that Anthropic allow Claude to be used for "all lawful purposes," without carve-outs or conditions imposed by the company. Pentagon Chief Technology Officer Emil Michael told CBS News that "at some level, you have to trust your military to do the right thing." He argued that the safeguards Anthropic was insisting on were unworkable because they could jeopardize military operations in fast-moving situations where prior approval from a private company is not feasible.
What Anthropic Refused to Remove
Anthropic had two specific restrictions it refused to remove. The first was a prohibition on using Claude for the mass surveillance of American citizens. The second was a prohibition on using Claude in fully autonomous weapons systems, meaning systems in which AI makes the final decision to use lethal force without human involvement.
Amodei stated in his February 26, 2026, statement that these two conditions were not ideological positions but technical realities. He argued that frontier AI systems are not reliable enough to power fully autonomous weapons, and that autonomous weapons "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day." He also said he was concerned that AI systems could pose a surveillance risk by piecing together "scattered, individually innocuous data into a comprehensive picture of any person's life."
Anthropic stated publicly that the contract language it received from the Pentagon "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," and that the Pentagon's proposed compromise was "paired with legalese that would allow those safeguards to be disregarded at will."
Who Backed Anthropic and Who Backed the Pentagon
The dispute prompted clear lines of support on both sides. Senator Mark Warner and other congressional leaders wrote to both the Pentagon and Anthropic, endorsing the Pentagon's stated position that it does not intend to use AI for mass surveillance or autonomous weapons, but also calling for "additional work by all stakeholders" on the question of lawful use. The letter implicitly acknowledged that neither side had fully resolved the core governance question.
Inside Anthropic, multiple employees publicly supported Amodei's stance. Trenton Bricken, a member of Anthropic's technical alignment team, posted on X that the company had consistently stood by its values in ways often invisible from the outside, and described the public dispute as a visible instance of that commitment.
Outside observers were divided. Technology analyst Tara Chklovski, CEO of Technovation, told CNBC that Anthropic was the most deliberate model creator when it came to building systems for the military, and that any alternative supplier the government chose would be less safe. Others noted that the defense tech ecosystem was not waiting for the dispute to be resolved, with ten companies in the J2 Ventures defense portfolio immediately moving to replace Claude following the February 28, 2026, designation.
WHAT HAPPENED AFTER THE BAN: OPENAI AND THE COMPETITIVE FALLOUT
Who Replaced Anthropic
Within hours of Anthropic being designated a supply chain risk on February 28, 2026, OpenAI announced it had finalized a classified deployment deal with the Pentagon. CEO Sam Altman said the agreement allowed OpenAI's tools, including ChatGPT, to operate within the military's classified network infrastructure. The deal included three stated restrictions: no mass domestic surveillance, no autonomous lethal weapons, and no high-stakes automated decisions without human oversight. OpenAI described the agreement as having "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."
The positioning drew immediate criticism. OpenAI employees had signed an open letter on February 27, 2026, supporting Anthropic's stance. Less than 24 hours later, OpenAI had signed a competing deal. Altman later admitted to CNBC that the decision "looked opportunistic and sloppy." Separately, Elon Musk's xAI had also signed a deal enabling its Grok model to operate within secure U.S. military environments, aligning Grok with the Pentagon's standard "any lawful use" terms.
Legal and policy analysts questioned whether OpenAI's contractual protections were as firm as Anthropic's, noting that OpenAI's language ties its restrictions to existing law and Department of Defense policies, which the government can change unilaterally, rather than the absolute restrictions Anthropic had been insisting on.
Who Is Still Using Claude
Despite the ban, Claude was still being used in active military operations in Iran as of March 4, 2026, according to TechCrunch. President Trump directed civilian agencies to discontinue use of Anthropic products immediately, but the Department of Defense was given six months to phase out Claude to prevent operational disruption. The result, as TechCrunch noted on March 4, 2026, was that as the U.S. continued its aerial attack on Iran, Anthropic models were being used for many targeting decisions. Hegseth acknowledged the operational complexity of removing a system deeply embedded across military workflows, and said Anthropic would continue supplying services for up to six months to ensure a seamless transition.
Defense One reported that replacing Claude could take at least three months, and that operators would need to reconfigure data inputs, re-examine how to share data in real time with the intelligence community, and re-validate that replacement models were functioning as expected. Analysts at Piper Sandler described Anthropic as "heavily embedded in the Military and the Intelligence community" and warned that moving off the company's technology could "pose some short-term disruptions" to Palantir's operations, given that Palantir counts on the government for close to 60 percent of its U.S. revenue.
WHAT THIS MEANS FOR THE FUTURE OF AI IN WARFARE
Who Now Controls How AI Is Used in Combat
The central question the Anthropic-Pentagon dispute exposed is one that no institution has yet answered clearly: who decides what limits apply to AI when national security is involved? As of March 5, 2026, that question remains entirely open. Companies set their own limits through contract terms. The Pentagon sets its own limits through policy. Congress has largely been silent. International frameworks are nonexistent. And courts have not yet been asked to adjudicate.
Anthropic filed a statement on February 28, 2026, arguing that Hegseth lacked the legal authority to restrict companies that work with Anthropic from doing business with the government, citing a federal statute enacted by Congress. The company said it would challenge any supply chain risk designation in court. As of March 5, 2026, no formal legal action had been filed because no official designation had yet been made through the required process.
What AI Battlefield Integration Now Looks Like
The scale of AI integration in the February 28, 2026, operation was without precedent in U.S. military history. The Washington Post reported on March 4, 2026, that the U.S. relied on AI to strike 1,000 targets in the first 24 hours of the operation. Striking 1,000 targets in 24 hours using traditional intelligence-analysis methods would have required many months of planning and human analyst time. AI compressed that timeline to hours.
The use of AI on the battlefield is not new, but its scope in 2026 is qualitatively different from anything before it. The U.S. military employed the DART logistics system during the Gulf War in the 1990s. Project Maven, launched in 2017, was designed to identify objects in drone video footage. Israel's "Habsora" and "Lavender" targeting systems, widely reported on during the Gaza conflict, processed large volumes of data to generate target banks. What makes Claude different from all of these is that it is a general-purpose large language model capable of performing across many analytical tasks simultaneously, including synthesis, prioritization, simulation, and recommendation, within a single integrated platform.
What Risks Anthropic Was Warning About
The dispute was not, in the end, about whether AI should be used in military operations. Amodei stated clearly on multiple occasions that Anthropic supports lawful use of AI for national security. The dispute was about two specific categories of use that Anthropic assessed as either technically unreliable or incompatible with democratic governance.
On fully autonomous weapons, Amodei's argument was straightforward: current AI systems are not reliable enough to make lethal decisions without human judgment, and deploying them in that role would endanger American service members rather than protect them. Anthropic offered to work with the Department of War on the research and development required to build the reliability standards that would make AI safe enough for expanded autonomous roles in the future. The Pentagon declined that offer.
On mass surveillance, the concern was about the combinatorial power of large language models. A system like Claude can be prompted to connect individually harmless pieces of data, a phone record here, a location ping there, a financial transaction elsewhere, and construct a comprehensive profile of a person's behavior, associations, and movements. Anthropic assessed that permitting this use without explicit contractual prohibition would leave the door open to a domestic surveillance capability far more powerful than anything previously deployed.
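The mechanics of that concern are simple enough to show with fabricated data. The sketch below joins three individually innocuous record streams on a shared identifier; a language model layered on top of such a merge could then narrate, query, and draw inferences from it at population scale, which is the capability Anthropic wanted contractually foreclosed.

```python
from collections import defaultdict

# Three individually innocuous (and entirely fabricated) record streams.
phone_records  = [("person_17", "2026-02-20 09:14", "called +1-555-0142")]
location_pings = [("person_17", "2026-02-20 09:10", "near 5th & Main")]
transactions   = [("person_17", "2026-02-20 09:25", "$6.40 card purchase")]

# Joining on the person identifier is all it takes to turn scattered
# records into a single timeline.
profile = defaultdict(list)
for stream in (phone_records, location_pings, transactions):
    for person, timestamp, event in stream:
        profile[person].append((timestamp, event))

# A time-sorted merge of unrelated datasets already reads like a diary.
for person, events in profile.items():
    for timestamp, event in sorted(events):
        print(person, timestamp, event)
```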
The Conversation noted on March 4, 2026, that the real risk is a process going from sensor data to AI interpretation, target selection, and weapon activation with minimal to no human control or awareness. It described generative AI in 2026 as qualitatively more powerful than earlier systems because today's models can be used for many tasks simultaneously, meaning the spillover risk from any single permissive deployment is much larger than it would have been with a single-purpose system.


