In US military doctrine, the term "Offset"1 (literally, a counterbalancing) denotes an operational capability that allows an existing gap with a competitor to be balanced.
Nuclear weapons were considered the "First Offset". During the Cold War, the wide availability of delivery vehicles and the ability to react immediately, ensuring Mutually Assured Destruction (MAD), allowed Washington to counterbalance Moscow's advantage in deployed forces and available armaments.
The "Second Offset" took shape in the 1980s and 1990s with the adoption of Information Technology (IT), which gave the US, in the multipolar phase that opened with the collapse of the Soviet Union, the technological superiority needed to balance multiform threats, most of them asymmetric in nature, coming from state and non-state actors.
The disorientation of Saddam's armored divisions in the 1991 Gulf War, faced with Allied maneuver units able to engage multiple targets without being seen, still offers scholars and observers a vivid illustration of the American tactical superiority achieved through IT, which decided the outcome of the conflict within days.
In recent years, however, a further characterization of the term Offset has taken hold in US doctrine, this time linked to the progressive emergence of artificial intelligence (AI) in many fields of military application.
The "Third Offset", as is well known, today denotes the complex of autonomous2 command and control (C2) systems equipped with AI, together with the network of sensors and combat assets connected to them - such as aerial and ground drones, drone swarms, and killer robots - able to carry out kinetic actions against the enemy independently.
Over the last decade, AI has become the most promising field of research in the military domain, with China and the USA competing for leadership, followed by Russia3 and Israel.
It is a "game changer": a technology that will inexorably change the nature of war, no longer the archetype of a violent confrontation between opposing (human) wills but, as in a video game, a succession of actions, reactions and counter-reactions conceived and conducted by autonomous systems, with unthinkable speed and brutality, without human intervention.
This is not the first time such a shift has occurred in military history: it had already happened, for example, with the adoption of the rifled musket, or with the combined use of tanks and aviation in the Second World War.
AI, considered in this perspective, represents a point of no return for military affairs, irreversibly changing the way of fighting.
Not surprisingly, its advent has been hailed as the seventh military revolution, after: taxation to support standing armies in the Westphalian era; universal conscription during the French Revolution; mass production ensured by the industrial revolution in the nineteenth century; the combined, lightning use of tanks and aviation during the two world wars; nuclear weapons in the Cold War; and, finally, the advent of Information Technology (IT) at the end of the last century.
In each of those situations, the novelty introduced a competitive advantage comparable to what AI assures today through C2 systems able to analyze huge amounts of data on the enemy and the environment (collected by numerous sensors deployed in the field); processed by algorithms, these data result in orders for actions, including kinetic ones, conducted by assets without a pilot or weapon operator.
These systems will form the basis of future warfare, aided by the ability to learn from their mistakes through "machine learning", i.e. the use of algorithms designed to exploit data from the external environment and adapt their actions to best perform the given task.
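The learning-from-feedback idea described above can be sketched in a few lines. The following is a purely didactic illustration (an epsilon-greedy bandit, a standard toy reinforcement-learning setup) of an agent that adjusts its behavior from environmental reward signals; it bears no relation to any real military system.

```python
import random

def epsilon_greedy_bandit(rewards, steps=1000, eps=0.1, seed=0):
    """Toy agent: learns which action yields the best reward from feedback."""
    rng = random.Random(seed)
    n = len(rewards)
    values = [0.0] * n            # estimated value of each action
    counts = [0] * n              # times each action was tried
    for _ in range(steps):
        if rng.random() < eps:    # explore: try a random action
            a = rng.randrange(n)
        else:                     # exploit: pick the best-known action
            a = max(range(n), key=lambda i: values[i])
        r = rewards[a] + rng.gauss(0, 0.1)  # noisy feedback from "environment"
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return values

# After enough interactions, the agent's estimates rank the truly
# best action (index 1, mean reward 0.8) highest.
estimates = epsilon_greedy_bandit([0.2, 0.8, 0.5])
```

The point of the sketch is the update rule: no one programs the "right" action in advance; it emerges from repeated interaction with the environment, which is what distinguishes learned behavior from pre-scripted automation.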
General John Allen4, former commander of ISAF and current president of the Brookings Institution5, has baptized it "Hyperwar", above all with reference to the speed at which it will be fought: in it, an instantaneous flow of situational data will prevail, involving real-time decisions of an "algorithmic nature" far beyond human capabilities.
In this regard, the current debate still focuses on the role reserved for humans: will they remain central to every decision to be made, including those at the lowest tactical levels ("in the loop"), or will they merely supervise the processes, delegating a large degree of autonomy to the AI systems involved ("on the loop")?
At the moment, the first option appears to prevail in the US, which is very attentive to the ethical consequences arising from the possibility that a C2 or weapon system, in complete autonomy of judgment, might decide to launch an attack against humans.
This position differs from Beijing's, whose doctrinal developments on the subject, probably by virtue of a natural tendency to favor the centralization of decision-making power, tend to grant drones and robots greater autonomy starting from the lowest command levels.
At any rate, it is now established that, in the not-so-distant future, drones and robots will at least partially replace traditional boots on the ground.
It is estimated that by 2030 about a quarter of the US combat force will consist of autonomous systems capable of carrying out a multiplicity of missions in the most disparate operating environments (in 2016 the USA already deployed 12,000 unmanned ground vehicles - UGV6 - in operations).
"Unmanned" systems, moreover, enjoy no small advantage: they are easily expendable, and ever cheaper thanks to mass production; and they can be trained and updated very quickly, in the seconds needed to upload the programs that make them operate.
The cloud, in fact, makes it possible to instantly train an unlimited number of robotic fighters on the basis of experience from combat in progress, whose exploitation will no longer have to pass through the long timelines of lessons learned and the retraining they require.
A loss in action, finally, will be equivalent to that of a vehicle breaking down in the street, far from the political and social cost caused by the death of a single soldier.
2 A system is called "automated" when it acts primarily in a deterministic manner, always reacting the same way to the same inputs. An "autonomous" system, on the other hand, reasons on a probabilistic basis: having received a series of inputs, it elaborates the best response. Unlike automated systems, an autonomous system can produce different responses to the same input.
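The distinction drawn in this note can be illustrated in a few lines: a deterministic controller maps the same input to the same output every time, while a probabilistic one samples among weighted candidate responses, so identical inputs can yield different outputs. This is a toy contrast only, with made-up inputs and actions, unrelated to any real system.

```python
import random

def automated(inp):
    # "Automated": fixed deterministic rule, same input -> same output.
    return {"obstacle": "stop", "clear": "advance"}.get(inp, "hold")

def autonomous(inp, rng):
    # "Autonomous" (toy): weighs candidate responses probabilistically,
    # so the same input can produce different outputs across calls.
    options = {"obstacle": [("stop", 0.7), ("detour", 0.3)],
               "clear": [("advance", 0.9), ("scan", 0.1)]}
    actions, weights = zip(*options.get(inp, [("hold", 1.0)]))
    return rng.choices(actions, weights=weights)[0]

rng = random.Random()
# automated("obstacle") is always "stop"; autonomous("obstacle", rng)
# is usually "stop" but occasionally "detour".
```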
3 The Chief of the General Staff of the Armed Forces of the Russian Federation, Gen. Gerasimov, stated in 2013 that "in the near future it can be assumed that there will be deployed units composed entirely of robots, capable of carrying out combat activities independently".
6 Unmanned Ground Vehicles
Photo: US DoD / US Navy / US Air Force