There was much talk about Artificial Intelligence (AI) in the war in Gaza a few months ago, following the publication of an investigation by the investigative journalism outlets +972 Magazine and Local Call1 into the IDF's use of AI systems. The report, based on direct testimonies from within Israeli intelligence and the military, describes three AI systems dedicated to targeting, i.e. to compiling kill lists by processing an enormous mass of data in order to geolocate military leaders (Lavender and Where's Daddy?), or even mere affiliates of Hamas and Islamic Jihad (JI), as well as civilian infrastructure (Gospel) used as hideouts and/or weapons depots. The data had been collected for years through Israeli mass surveillance systems2 (facial recognition and biometric surveillance), then extracted and analyzed by military technicians, private start-ups and global technology conglomerates (Google Image and Meta)3, in a close public-private collaboration serving a global logic of Israeli security which, however, failed to prevent the massacres of October 7th, and not for lack of internal or military intelligence.
The advantages of the three systems highlighted by the journalistic investigation, i.e. Lavender, Gospel and Where's Daddy?, were clear from the start: the speed in processing an enormous mass of personal data (phones and social media), hence the assignment of a danger score from 1 to 100 to every single Palestinian male; the exact physical identification of the subject through wiretaps, cell-phone triangulation, drones and sensors; followed by the speed in authorizing the attack, within no more than 20 seconds from the identification of the target and its location, with a minimal error rate (officially acknowledged as 10%), understood as the risk of targeting a subject "insignificant" on the operational scale or even "without direct connection" to Hamas or Islamic Jihad. Pure algorithmic calculation, albeit with high collateral damage (the killing of civilians, especially wives and children, or of people unrelated to terrorist organizations), because in the use of Lavender & co. the parameters for designating a "human target" were deliberately lowered, to the point that the AI, once put to use, automatically flagged around 37,000 Palestinian males as suspected "Hamas militants". And once automation is triggered, through cell-phone triangulation, friendships and acquaintances, plus the information gathered by surveillance drones, the AI's compilation of targets literally "goes crazy", generating and multiplying more targets per day than human personnel could produce in an entire year. In practice, from the roughly 50 targets per year previously identified by Israeli military intelligence, the three new AI targeting systems brought the figure to 100 per day. All of this, however, still hinges on human decisions, because only humans set the bar for who counts as a terrorist operative of Hamas or JI.
Furthermore, the decision of whom to strike, and with weapons that are not always "smart" (i.e. non-precision missiles, often for simple reasons of cost), was also delegated to IDF officers of lower rank than those who acted in the past, when only leading members of Hamas and JI were targeted, selected through a long and complex "incrimination" process managed by the high command. The problem identified in recent months lies precisely in what AI has allowed lower-ranking soldiers to do: it has given them a rationale for being less attentive and for pursuing an operational agenda they already had, or were looking for a pretext to justify. In practice, the human-in-the-loop, i.e. the guarantee that every wartime decision involving the possible killing of the enemy is taken by a human being, already in crisis in Ukraine due to the widespread use of autonomous weapons4, has in the Israel-Hamas war been largely bypassed, or at least pushed into a corner.
The introduction of AI by the IDF is, however, not new to the operation currently underway in Gaza: already after the 11-day conflict of May 2021, Israeli military officials claimed to have fought the "first AI war", with the use of machine learning and advanced computing; a claim later confirmed at public events on the use of AI and the IDF's new information strategies5. The digital transformation of war has, moreover, for some years been the fulcrum of Israel's multi-year plan (2020-2024), the so-called Tnufa ("Momentum")6, introduced precisely for the fight against terrorist groups, including Hezbollah. A reform plan that starts from the conviction that data (personal data and physical data from the battlefield) and AI play an important role in winning future conflicts, thanks to the processing of a mass of data obtained from various sensors, transformed into intelligible information and delivered where it needs to go, namely to military intelligence.
Tnufa has thus emphasized the rapid transmission of information to front-line units, the creation and development of new operational units, such as the Multidimensional Unit 888 (or "Ghost"), and the incorporation of weapon systems (rifles7 and explosive devices8) and platforms (the Carmel Programme)9 developed by Israeli defense companies, all employing AI and more closely integrating naval, land, air, IT and above all intelligence resources. In this context, AI plays a strategic role in coordination between the Israeli units now on the ground in Gaza, in addition to the undeniable fact that, as Gen. Y. Grossman put it, "friction creates data"10. In short, a continuation of war that yields greater knowledge of the adversary and therefore, with the use of AI, allows the design of ever newer and more advanced operational and intelligence systems.
But even before Israel and Ukraine, it was Turkey in Libya, in 2020, with the fully autonomous Kargu-2 drone (photo), that carried out the first operation conducted by an AI system during a modern conflict: the innovation lay in the fact that this drone is capable of identifying and suppressing human targets on its own. It was one of the very first examples of the hyperwar (Gen. J. Allen) or algorithmic warfare (R. Work, former US Deputy Secretary of Defense) now taking shape on battlefields controlled increasingly by AI and decidedly less by human decision-making, thanks to a (presumed) high precision, a proven cost-effectiveness in the use of men and means, but above all an undisputed speed in acquiring and transmitting data.
The entire operational process is thus being engineered to the point of becoming what N. Mulchandani, Chief Technology Officer of the CIA, has called "software-defined warfare": a scenario in which software will be a dominant part of next-generation combat systems.
In short, a decisive contribution of AI to the intelligence, surveillance and reconnaissance components of the C4ISR (command, control, communications, computers, intelligence, surveillance and reconnaissance) architecture of fifth-generation wars, where advanced C4ISR capabilities provide an unquestionable strategic advantage: richer situational awareness and knowledge of the adversary and the environment, with a corresponding reduction in the time between target detection and armed response.
In practice, what is underway - and the experiences in Ukraine (photo below) and Gaza bear witness to this - is not only a return to more traditional kinetic wars but also a new revolution in military affairs, one of man-machine teamwork and algorithmic competition, where the risk, however, lies in an information process run exclusively by AI. Furthermore, what lies ahead in the very near future is Manned-Unmanned Teaming (MUM-T), which captures the idea of human-guided and autonomous systems cooperating in a coordinated action, where artificial intelligence will count as much as human intelligence.
It is also an established fact of military history that once an innovative operational system enters the battlefield it can only be perfected, never again set aside: after gunpowder and nuclear power, AI represents this century's great revolution in military affairs, where the speed of data processing and transmission, and above all the potential for hyperconnection and the energy autonomy of the devices issued to soldiers, determine the decisive asymmetry between opposing forces.
Hyperconnection, then, and robust energy self-sufficiency: these are the real challenges of the future war environment, with everything this entails for the States involved in terms of satellite control, the positioning and security of submarine cables (detection and data transmission), and energy sources (fundamental for batteries), with all that follows for future geopolitics, in this case digital geopolitics.
Ultimately, after the industrialization and computerization of the battlefields of past centuries and decades, what lies ahead for the 21st century, already foreshadowed by the wars in Libya (Kargu-2), Ukraine (swarms of drones) and now Gaza, with Lavender and the other targeting systems as its first witnesses, is a form of intelligentization of the operational war environment, at the center of which we find Artificial Intelligence, with its processes of cognitive automation, the extension of human cognitive faculties and their integration with cyber-physical systems. The next step, in fact, not so far off because it is already underway, is that of cognitive wars, with the contribution of neuroscience and neurotechnologies designed and directed at the adversary's human brain, understood as the most modern battlefield11 and, therefore, the driving force of a new era of warfare, although that is, for the moment, an entirely different story.
2 Not least, Hamas itself has spied on the Palestinian population, classifying people by political activity, social media posts and even romantic relationships.
https://www.nytimes.com/2024/05/13/world/europe/secret-hamas-files-pales...
4 A. Deruda, Digital geopolitics. The global competition for control of the Internet, Carocci ed., 2024, p. 116.
11 https://mwi.westpoint.edu/mwi-video-brain-battlefield-future-dr-james-gi...
Photo: IDF / STM Defense / Ukraine MoD