In late February and early March 2026, the world witnessed a striking convergence of advanced artificial intelligence, high-stakes geopolitics, and domestic U.S. policy clashes. Major news outlets, including The Wall Street Journal, Axios, Reuters, and The Guardian, reported that the United States military used Anthropic’s Claude AI model to support its large-scale military operations against Iran. This occurred amid, and seemingly in direct contradiction to, a dramatic public falling-out between the Trump administration and Anthropic, the San Francisco-based company behind Claude.
The episode underscores the growing entanglement of commercial AI in modern warfare, the challenges of regulating powerful technologies, and the messy realities of transitioning away from deeply integrated tools during active conflicts.
The Military Campaign: Operation Epic Fury
On or around February 28–March 1, 2026, the United States and Israel launched Operation Epic Fury, a major joint air campaign targeting Iranian military infrastructure. The strikes focused on:
- Ballistic missile facilities (including hardened underground sites),
- Air defense systems,
- Command-and-control nodes,
- Naval assets (with reports of 9–11 Iranian warships sunk and the navy’s headquarters largely destroyed),
- Other strategic targets across Iran, including in Tehran.
U.S. Central Command (CENTCOM) deployed an array of advanced systems, such as:
- B-2 Spirit stealth bombers delivering 2,000-pound munitions against fortified missile sites,
- Tomahawk cruise missiles,
- F/A-18 and F-35 fighter jets,
- Low-cost one-way attack (suicide) drones,
- Other strike and support platforms.
The operation achieved significant objectives, including air superiority over Tehran (claimed by the Israel Defense Forces) and the reported killing of Iran’s Supreme Leader Ayatollah Ali Khamenei in early strikes. President Trump described the campaign as potentially lasting four to five weeks or longer, aimed at destroying Iran’s missile capabilities, navy, nuclear ambitions, and proxy support networks (e.g., Hezbollah).
Claude’s Reported Role in the Strikes
Multiple credible sources indicate that Claude—Anthropic’s flagship large language model—was actively used by U.S. military commands, particularly CENTCOM, during the operation. Tasks reportedly included:
- Intelligence analysis and assessments,
- Target identification and selection,
- Battlefield planning,
- Simulating battle scenarios (potentially including collateral-damage modeling, digital-twin simulations, or operational contingencies).
This integration stemmed from prior contracts and deployments: Claude had been embedded in classified U.S. military and intelligence workflows for some time, including mission-critical applications like cyber operations, modeling, and planning. Anthropic had positioned itself as a partner to national security agencies, even customizing models for government use while insisting on certain safeguards.
The Escalating Political and Contractual Clash
Only hours, or at most a day, before the strikes intensified, on or around February 27, 2026, President Donald Trump publicly directed all federal agencies to immediately cease using Anthropic’s technology. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk to national security,” a label typically reserved for foreign adversaries or entities posing direct threats, not domestic American firms.
The root cause was a breakdown in negotiations over a major Pentagon contract (valued at up to $200 million). The military demanded unrestricted “any lawful use” access to Claude, including for sensitive defense tasks. Anthropic refused to remove two key safeguards:
- No use for mass domestic surveillance of U.S. citizens,
- No support for fully autonomous (lethal) weapons without human oversight.
Anthropic CEO Dario Amodei publicly stated that the company could not “in good conscience” accede, citing ethical and safety concerns. The Pentagon issued ultimatums, threatened to invoke the Defense Production Act, and ultimately imposed the ban and risk designation, with a six-month transition period for the military to phase out Claude.
Despite the public rhetoric labeling Anthropic a risk, Claude remained in use during the Iran operation, likely because it was so deeply embedded in classified systems that abrupt removal could have disrupted ongoing national security efforts.
Debates and Speculation: Political Theater or Practical Necessity?
Analysts, tech experts, and commentators have highlighted the apparent contradiction:
- Some view the timing and harsh rhetoric (Trump calling Anthropic “radical left, woke” and “leftwing nut jobs”) as political theater, potentially aimed at pressuring the company, signaling toughness on “woke AI,” or damaging its reputation amid broader administration goals for accelerated, unrestricted military AI adoption.
- Others emphasize practical realities: Claude’s integration into classified workflows made immediate discontinuation infeasible without risking operational effectiveness. The transition period explicitly acknowledged this dependency.
No conclusive public evidence has emerged proving the designation was purely performative or malicious. Anthropic has vowed to challenge it legally, arguing the move is unprecedented against a U.S. company and internally contradictory, treating the same technology as both operationally essential and a security risk. Legal experts note the designation’s statutory limits and question its enforceability beyond DoD contracts.
The episode also spotlighted alternatives: shortly after the ban, rival OpenAI announced a Pentagon deal, reportedly with similar (though possibly looser) safeguards.
Potential Implications for Mauritius and Beyond
Mauritius remains geographically removed from the Middle East conflict, with no direct involvement. However, ripple effects could emerge:
- Energy and economic shocks: Escalation in the region often spikes global oil prices and disrupts shipping through key routes such as the Strait of Hormuz and alternative Indian Ocean passages, raising fuel, import, and living costs for island nations reliant on seaborne trade.
- Maritime trade vulnerabilities: Broader disruptions (e.g., Suez Canal issues or insurance hikes) could affect supply chains for goods, tourism, and exports.
- Diplomatic positioning: As a member of international bodies (UN, African Union, Commonwealth), Mauritius may face pressure to take positions on resolutions, sanctions, or ceasefires, potentially influencing foreign aid, bilateral ties (e.g., with the U.S., EU, or China), and regional stability in the Indian Ocean.
These remain speculative and contingent on the conflict’s trajectory.
Key Takeaways and Lingering Questions
This saga crystallizes several realities in the AI-defense nexus:
- Commercial AI is now indispensable to military operations, even as governments clash with providers over control and ethics.
- The Pentagon’s simultaneous ban and risk label, alongside its continued operational reliance, expose the tension between public policy posturing and battlefield necessities.
- No definitive proof exists that the designation was engineered solely to discredit Anthropic, though the optics fuel skepticism.
- The saga forces reflection on how democracies reconcile AI safety (corporate red lines), corporate independence, and military imperatives in an era of accelerating tech adoption.
This episode, blending AI ethics, wartime urgency, and political drama, will likely be dissected for years in technology policy, national security, and international relations studies. As the situation evolves, transparency, legal challenges, and the war’s outcome will shape its legacy.