An AI-Guided Strike Killed 150 Children. Anthropic Took the Pentagon to Court

A strike near an Iranian military barracks reportedly killed 150+ children, intensifying scrutiny of AI-assisted targeting. Anthropic reportedly sued the Pentagon after the department penalized it for refusing to remove Claude's safety limits.

AI War Ethics Crisis: From the Minab School Attack to Anthropic Suing the Pentagon

The Core Crisis

2026 saw AI military applications cross from experimental to battlefield deployment, triggering an unprecedented ethics crisis. Two events crystallized the danger.

Event 1: The Minab Girls' School Strike (Feb 28, 2026)

A missile destroyed the Shajareh Tayyebeh girls' elementary school in Minab, Hormozgan province, Iran, killing 165-180 people (mostly children aged 7-12) and injuring 95. Initial investigations suggest the US likely struck the school by mistake, acting on outdated intelligence. House Democrats specifically questioned the role of AI targeting systems, including the "Maven Smart System," in the decision chain.

Event 2: Anthropic vs. The Pentagon

Anthropic filed two lawsuits against the Department of Defense challenging its "supply chain risk" designation—unprecedented for a domestic US company. The designation followed Anthropic's refusal to remove Claude's safety limits against mass domestic surveillance and autonomous lethal weapons without human oversight. Microsoft, Google, and Amazon filed amicus briefs in support.

The Paradox: Despite the dispute and a Trump administration ban, reports indicate the US military continued using Claude in the Iran war, revealing regulatory gray zones.

Meanwhile, L3Harris and Shield AI successfully demonstrated autonomous detection of and response to electromagnetic threats without human intervention, further eroding the "meaningful human control" principle.

The Erosion of Human-in-the-Loop

  • Time compression: AI processes targets at millisecond speeds; humans become rubber stamps
  • Accountability vacuum: Software developers, commanders, and political leaders can all deflect blame
  • International governance: UN CCW talks continue but binding treaties remain distant; EU AI Act explicitly exempts military applications
