Artificial intelligence is reshaping modern conflict, but not in the clean, decisive way its champions once promised. Instead of eliminating the uncertainty that has always shadowed warfare, it is creating a new kind of confusion: one driven not by a shortage of information, but by an excess of machine-generated certainty. That shift is now visible in conflicts from Ukraine and Gaza to the recent fighting involving Iran, where AI has been used to speed targeting, coordinate defences and compress decision-making into ever shorter windows. According to recent commentary from Carnegie, the central question for Europe is no longer whether AI will enter war, but whether democracies can impose limits before automated judgement becomes routine.

The military appeal is obvious. AI can sift huge volumes of sensor data, support air defence against incoming missiles and help commanders move faster than an adversary. Kiplinger has reported that the U.S. military has embraced AI systems for mission planning, threat detection, logistics and cyber defence, reflecting a broader doctrine that speed itself is a battlefield advantage. That logic has also shaped the current wave of defence innovation, from the Pentagon’s experimentation with AI-assisted systems to the use of commercial tools in intelligence analysis. Yet the same sources warn that rapid adoption is running ahead of ethical safeguards, raising questions about civilian protection and accountability.

Recent wars have made those concerns concrete. Chatham House said the U.S.-Israeli campaign against Iran showed how AI-supported targeting systems are increasingly woven into live operations, while Brookings described the deployment of generative AI in Operation Epic Fury as part of a broader shift toward machine-assisted strike planning. Al Jazeera reported that U.S. Central Command acknowledged using advanced AI tools to process vast quantities of data in the war against Iran. In parallel, Time has documented how Israel’s use of systems such as Lavender, The Gospel and Where’s Daddy? in Gaza enabled extremely rapid targeting, while also fuelling worries about civilian harm and automation bias.

The deeper problem is that AI does not merely accelerate the old fog of war; it can manufacture an illusion of clarity. Probabilistic targeting lists, algorithmic scores and automated recommendations may look authoritative, but they can also outpace the ability of human operators to challenge them meaningfully. Carnegie argued that this creates a new accountability gap, with responsibility diffused across developers, data specialists, procurement officials and commanders until no single actor fully owns the decision to strike. The result is a human presence in the loop that may be legally visible but operationally hollow.

For Europe, the strategic dilemma is urgent. On the one hand, it wants to build a defence industrial base that can compete with the United States and China, and Brussels has begun laying groundwork through initiatives aimed at drones, counter-drone systems and wider military innovation. On the other, it risks copying a model in which speed eclipses judgement. The more ambitious path, Carnegie suggested, would be to make deliberation itself a feature of military design: slowing some targeting cycles, hard-wiring human review into the AI lifecycle and setting enforceable red lines on autonomous weapons and mass surveillance. Whether Europe does so may determine not only how it fights, but what kind of warfare it is willing to normalise.
