A widely shared satellite image purporting to show a destroyed US radar installation in Qatar was an AI-altered fabrication, researchers and media reports say, highlighting how generative tools are being used to produce highly persuasive wartime disinformation. The image, circulated by Tehran Times on X, presented a "before vs. after" comparison that drew millions of views across social platforms. (NDTV, Economic Times)

Analysts traced the manipulated picture to an earlier Google Earth photograph of a US facility in Bahrain, noting subtle artefacts that revealed the deception. Observers pointed to identical rows of parked cars and other repeating details as giveaways that the scene had been edited or generated rather than captured in real time. (NDTV, Economic Times)
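The "identical rows of parked cars" giveaway can be checked mechanically: exactly repeated pixel blocks are rare in genuine overhead photography but common in cloned or generated scenes. Below is a minimal sketch of that heuristic, assuming non-overlapping fixed-size tiles and exact pixel equality; the tile size, function name, and synthetic test image are illustrative assumptions, not any analyst's actual tooling.

```python
import hashlib
import numpy as np

def find_repeated_tiles(image: np.ndarray, tile: int = 16) -> dict:
    """Hash non-overlapping tiles; return hashes seen at more than one location."""
    h, w = image.shape[:2]
    seen: dict = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = image[y:y + tile, x:x + tile]
            digest = hashlib.sha256(block.tobytes()).hexdigest()
            seen.setdefault(digest, []).append((y, x))
    # Keep only tiles whose exact pixel content appears at 2+ locations.
    return {d: locs for d, locs in seen.items() if len(locs) > 1}

# Illustrative check: random noise with one 16x16 patch deliberately cloned.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[32:48, 32:48] = img[0:16, 0:16]  # plant a duplicated region
dupes = find_repeated_tiles(img, tile=16)
```

Real forensic workflows are far more tolerant (near-duplicate matching, rotation, JPEG noise), but even this exact-match version flags the kind of copy-pasted repetition the analysts described.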

The volume and reach of synthetic war visuals have grown rapidly; one tracker reported that AI-generated videos and doctored satellite clips related to the US–Israel–Iran confrontation have amassed hundreds of millions of views since late February 2026, underscoring how quickly fabricated content can shape public perceptions during a crisis. Dataconomy and CBS highlighted examples ranging from implausible missile strikes to images of iconic landmarks in flames. (Dataconomy, CBS)

Open-source intelligence practitioners warn the phenomenon is not new but has intensified with improved generative models. Brady Africk, an open-source intelligence researcher, said there has been an "increase in manipulated satellite imagery" and that "Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don't align with reality". He added that some fakes are achieved by manual editing, often by "superimposing indicators of damage" onto otherwise authentic satellite shots. (NDTV, Yahoo)

Information warfare analysts caution that eroding trust in publicly available imagery risks undermining a key tool for independent verification. Tal Hagin noted the double effect: "Due to the fog of war, it can be very difficult to determine the success of an adversary's strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship" inside closed media environments, "But it's now being preyed upon by disinformation agents." (NDTV, DW)

Commercial satellite firms and specialised verifiers are increasingly called on to distinguish authentic acquisitions from fabricated material. The Niger airport episode, in which Vantor used its own imagery to show that online photos claiming a terminal was ablaze were almost certainly AI-generated, illustrates how timely, authenticated satellite data can debunk falsehoods and restore factual clarity. (Dataconomy, CBS)

Researchers and academics say defensive steps must combine better technical provenance with public education. "When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events," said Bo Zhao of the University of Washington, who urged viewers to maintain scepticism as synthetic visuals improve. Alongside detection tools such as invisible watermarks and forensics, experts call for more rapid access to high-resolution, time-stamped imagery so decision-makers and journalists can verify claims before they harden into accepted reality. (Economic Times, DW)
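The provenance idea the experts describe reduces to a simple contract: the imagery provider signs the image bytes together with a capture timestamp, and a verifier checks that signature before trusting the "time-stamped" claim. The sketch below illustrates this with a shared-secret HMAC for brevity; production systems such as those built on the C2PA provenance standard use public-key signatures and richer manifests, and all names and fields here are illustrative assumptions.

```python
import hashlib
import hmac
import json

SECRET = b"provider-signing-key"  # stand-in for a real private key

def sign_capture(image_bytes: bytes, captured_at: str) -> dict:
    """Provider side: bind the image digest and capture time under a signature."""
    payload = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": captured_at,
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return payload

def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """Verifier side: reject if pixels or signature no longer match the claim."""
    claim = {k: v for k, v in manifest.items() if k != "sig"}
    if claim.get("sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # pixels differ from the signed digest
    msg = json.dumps(claim, sort_keys=True).encode()
    return hmac.compare_digest(
        hmac.new(SECRET, msg, hashlib.sha256).hexdigest(), manifest["sig"]
    )

original = b"\x89PNG...raw image bytes..."
manifest = sign_capture(original, "2026-02-27T10:15:00Z")
ok = verify_capture(original, manifest)                # authentic copy passes
tampered = verify_capture(original + b"edit", manifest)  # edited copy fails
```

The point is not the cryptography itself but the workflow: a journalist checking a viral "before vs. after" frame needs an authenticated digest and timestamp to compare against, which is exactly the rapid-access capability the experts call for.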


Source: Noah Wire Services