Fact Check: Did the Pentagon use AI to target the Iranian school strike?
WASHINGTON, DC: Questions are mounting over whether the Pentagon used artificial intelligence (AI) to select targets after a deadly strike on a school in Iran. The US-Israel military operation hit the Shajareh Tayyebeh school in Minab in the early hours of February 28, killing more than 170 people. In response, over 120 Democratic lawmakers have formally demanded answers from the Pentagon. Here’s a closer look at the claims and the facts surrounding the attack.
Claim: Pentagon used AI to target the Iranian school
The claim that the Pentagon used AI to select the school as a target emerged after the New York Times reported that the February 28 strike on Shajareh Tayyebeh Elementary School, which killed at least 175 people, including 168 children, was likely the result of a targeting mistake by the US military.
The lawmakers sent a letter to Defense Secretary Pete Hegseth asking how the US military limits civilian casualties in Iran and what role AI plays in target selection.
The letter asks: “If artificial intelligence is used, is it checked by humans and when? Was AI, including the Maven Smart System, used to identify Shajareh Tayyebeh school as a target? If so, did a human verify it?”
According to people briefed on the investigation, US Central Command officers created the strike coordinates using outdated data from the Defense Intelligence Agency.
Fact Check: False, Pentagon did not use AI to target Iranian school
The claim that the Pentagon used AI to target the Iranian school does not hold up. On the morning of March 12, Adm Brad Cooper said in a video released by CENTCOM that while the US military uses AI tools, humans always make the final targeting decisions. The AI systems are designed to help analysts process information faster, not to autonomously choose targets.
He explained, “Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.”
He concluded, “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.”
According to a New York Times analysis, the school had been part of the Revolutionary Guards’ naval base in 2013, but by September 2016 it had been separated from the base entirely. This suggests the building was not an active military target, weakening the claim that AI deliberately selected it.
A report by Quartz noted that while targeting mistakes can occur, the use of generative AI in targeting is new and not fully reliable: such systems can misread images, hallucinate facts, and make errors even in low-stakes commercial settings. This makes it highly unlikely that AI alone selected the school as a target.
Finally, there is no verified evidence that the Pentagon specifically used AI to choose the Iranian school as a target.