The morality of autonomous drones

(by Andrea Mucedola)
20/05/24

As technology evolves, drones are becoming more and more autonomous, and the prospect of a robotic fighter is becoming an increasingly real possibility. Aerial and ground robotic systems have been used in the recent wars in Afghanistan and Iraq, and more recently in Ukraine and in the Israeli-Palestinian conflict.

Although official policy states that these drones remain under human control, at least for the most lethal decisions, I fear that the operational concept for employing them could change as soon as their reliability in fully autonomous decision-making is demonstrated. Before we reach that point, we should ask ourselves whether the use of these drones should be limited by rules linked to the moral and ethical considerations that should be characteristic of a human being.

It should be noted that many of the international conventions [1] governing the use of armaments were signed before the construction of ENIAC, the first computer, and do not consider the use of autonomous means. In fact, the military scientific community, always careful to stay ahead of the times in order to maintain technological superiority, has over the past 60 years evolved beyond expectations without any politician ever raising a moral bar against the potential risk of employing robotic systems in military operations. This despite the fact that their potential danger had been highlighted by successful books and films.

What is the real risk?

The issue reminds me of a heated exchange at a NATO conference in which a national representative, after listening to the development programme for future fully autonomous combat aerial drones, raised at the table the need to apply the First Law of Robotics [2]. The discussion remained purely academic and reached no conclusions. Asimov's Laws, conceived in the early 1940s, were examined precisely in anticipation of the (now imminent) development of autonomous robots for military use, and the fear that these systems could cause injury or death to human beings may well become a reality.

For example, the development of the South Korean Samsung SGR-1 active border surveillance system has attracted considerable controversy, and questions arise as to whether it is ethical to leave to a machine the autonomous choice of killing individuals discovered in its control area solely on the basis of its own assessment.

Science fiction? We need not go so far as to imagine Terminator or Battlestar Galactica-style scenarios, fortunately still far away, but nowadays there is the real possibility that autonomous drones could decide to kill humans on the basis of previous orders, without considering factors such as ethics and morality.

Reducing uncertainty in the decision-making process

To understand the mechanism that leads to decisions at the human level, we need to analyze what is technically defined as decision making.

The most important problem in any field, including the human one, is to decide well, with the minimum risk of error and the maximum effectiveness. As we will see shortly, the quality of the collection and analysis of information on an operational situation improves understanding of the environment and favors correct decision-making. In simple words, that quality derives partly from the performance of external surveillance sensors (optical viewers, radar, sonar, etc.), and partly from the experience, intuition and judgment of the human operator. This process is very complex, and various factors intervene in the mechanism, effectively "filtering" the information and making it very subjective. We could imagine the process as a strange double-cone telescope (Miller and Shattuck, US Naval Postgraduate School, Monterey) in which the first cone is aimed at the real world and observes, depending on its width, only a part of the total information. The lenses placed inside it are fundamental because they represent the levels of information filtering: the better the performance of the lens (sensor), the better the quality and quantity of the information collected.

The cone, which we could imagine as a data collector, undergoes a quantitative reduction in information as it approaches the observer, mainly linked to the filtering of potentially erroneous data (technically, we can picture these as high-pass or low-pass filters), which reduces the possibility of misjudgment but also the ability to identify suspicious anomalous behavior. At this point the operator receives a mass of data on his display system and, unknowingly, carries out a second filtering of the information linked to personal factors such as:

  • attention (the operator's physical state and motivation at the moment of the analysis);
  • level of experience (gained through training, both individual and as part of a team);
  • intuition (a non-measurable personal gift, the ability to see beyond the visible);
  • human prejudices (related to gender, religion, social or political beliefs, motivation, etc.).

In simple words, the sensor shows us only a percentage of the information, linked to its technological accuracy; the amount of data is filtered both by the machine (due to the limits of the sensors and of the data management algorithms) and by the operator, who may discard or emphasize some of the data received depending on the moment. Ultimately, the decision maker receives an already filtered picture of the situation, often compared with that coming from other operators, so his level of decisional uncertainty increases further. This human limit might therefore lead us to lean towards robotic management, unaffected by such limits and prejudices, but... other ethical and moral factors come into play.
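To make the two filtering stages concrete, here is a deliberately toy sketch in Python. The thresholds, field names and the idea of modelling operator bias as a simple discard rule are invented for illustration; they are not part of Miller and Shattuck's model.

```python
import random

def sensor_filter(contacts, min_confidence=0.4):
    """First cone: the sensor/algorithm discards low-confidence detections."""
    return [c for c in contacts if c["confidence"] >= min_confidence]

def operator_filter(contacts, attention=0.8, bias_against=None):
    """Second stage: the operator's attention and prejudices further
    reduce or distort what actually reaches the decision maker."""
    retained = []
    for c in contacts:
        # A tired or distracted operator randomly misses some contacts.
        if random.random() > attention:
            continue
        # A prejudice makes the operator discount a whole class of contacts.
        if bias_against and c["type"] == bias_against:
            continue
        retained.append(c)
    return retained

raw = [
    {"id": 1, "type": "fishing_boat", "confidence": 0.3},
    {"id": 2, "type": "fast_craft",   "confidence": 0.9},
    {"id": 3, "type": "fishing_boat", "confidence": 0.7},
]

picture = operator_filter(sensor_filter(raw), attention=0.8, bias_against="fishing_boat")
print(picture)  # the decision maker only ever sees this filtered subset
```

Even in this toy version, the picture that reaches the decision maker depends as much on the operator's parameters as on the raw data.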

What can be done to reduce decision uncertainty?

The easiest answer could be to use increasingly powerful systems, capable of analyzing large quantities of data with ever better qualitative and quantitative performance, and making them available to operators through increasingly intelligent decision-support algorithms. All this in the shortest possible time, since the information collected is in any case perishable and the adversary could take new actions, changing the initial situation. Although this process is not unlike that of many other work environments, in the military it is particularly sensitive because reaction time can save lives.

Technically, we speak of Command, Control and Communications (C3) to define the continuous and cyclical process through which a commander makes decisions and exercises, through his means of communication, his authority over subordinate commanders.

One of the best-known decision-making methods is the OODA loop, an acronym describing a four-phase decision-making process: Observe, Orient, Decide, Act.

As we have said, the first step towards understanding an event is the collection and processing of the data gathered by the sensors, carried out by system architectures (computers) which, through software, organize the data, filter them (eliminating inconsistent information) and generate graphs, drawing or representing the information (Observe). At this point the operator begins to orient himself through the analysis and correlation of the data, evaluating them in terms of reliability, relevance and importance (Orient), and transforms knowledge into understanding. Simple to say, but always very subjective, as every operator has his own ability based on previous experience and training, limited by the physical state of the moment (tiredness, tension, resentment...), as well as by personal prejudices (I am better than him/her, he/she doesn't understand, why am I even doing this...) and by rules set by higher authorities (a practical example are the standing orders for operators, which are drawn up on the basis of higher directives such as the rules of engagement - ROE - which are in turn written and approved at a political level, often far from the reality of the moment). This means that the understanding of an event always differs from operator to operator, because it is based on different technical and human factors (the so-called human factor).
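As a purely illustrative sketch (function names and toy values are invented, not drawn from any real C3 system), the OODA cycle can be written as four functions called in a continuous loop, with the lethal choice deliberately left to a human:

```python
import random

def observe():
    """Observe: collect raw data from a (simulated) sensor."""
    return {"contact": random.choice(["friend", "unknown", "hostile"]),
            "range_km": round(random.uniform(1, 50), 1)}

def orient(observation, roe_min_range_km=10):
    """Orient: correlate data with experience and rules of engagement (ROE)."""
    threatening = (observation["contact"] == "hostile"
                   and observation["range_km"] < roe_min_range_km)
    return {"observation": observation, "threatening": threatening}

def decide(assessment):
    """Decide: in this sketch the lethal choice is deliberately left to a human."""
    if assessment["threatening"]:
        return "alert_human_operator"   # never 'engage' autonomously
    return "continue_surveillance"

def act(action):
    """Act: carry out the chosen action, which changes the situation again."""
    print(f"Action: {action}")

# The loop is continuous: every action alters the environment, so the cycle restarts.
for _ in range(3):
    act(decide(orient(observe())))
```

The point of the sketch is structural: each pass through Act changes the environment, so Observe must begin again, and the subjectivity discussed above lives entirely inside orient().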

But it does not end there... operators must then share their opinions or report to higher decision-making levels. This stage, according to Metcalfe's law [3], is even more complex because of the increase in the number of connections that are generated. Furthermore, the differences in the operators' analyses must be considered, since they are affected by different backgrounds; the classic case is when operators come from different working environments (military, civilian) or even from different countries and cultures. A typical example is an operator trained in a Western environment who has to collaborate with an Asian colleague: in the first case the analysis process is Euclidean, that is, problem, possible solutions, analysis and decision; in the second, the problem must be seen in its entirety (a holistic vision) [4].

In summary, we could say that errors grow in proportion to relational complexity. The second cone then opens, opposite to the first, widening as evaluations from different sources are added.

This explains why, in dynamic environments, pre-planned responses are often applied to save time and maximize effort (theoretically reducing errors).

Humans vs Computers

In fact, in an ever faster technological society, generational divides are emerging: decision-making power is often assigned to personnel whose experience was gained in an analogue era, who must nevertheless deal with and guide staff born in a digital era, where the flow of information is multidimensional and requires flexibility and "multicultural" understanding. The temptation to entrust the management of our future to robotic processes could therefore be an attractive prospect: intelligent machines that can operate on big data following assigned rules, without human weaknesses.

But are we sure that technology can be the panacea for all human limitations, or is it instead necessary to continue to maintain human control over machines? And who would be responsible for the side effects?

The increase in decision-making autonomy is a natural progression of technology and, in fact, there would be nothing intrinsically amoral in a computer making decisions even for Man... if it weren't for the fact that some of those decisions could be followed by lethal actions.

The laws that regulate conflicts (for example, the San Remo Manual) impose very specific rules limiting valid targets to those that can potentially inflict damage. To date, the decision to take lethal action generally remains with Man and involves a complex analysis that takes "consequence management" into account. Some supporters of building autonomous fighting machines believe there would be strategic advantages in the reduced risk to humans, but in fact the effectiveness of a weapon is not always a justification for its use.

At this point we return to the main question, namely whether it is legitimate to leave the decision to kill a human being to a machine.

According to Sidney Axinn, professor in the Philosophy Department at the University of South Florida and co-author of the paper The Morality of Autonomous Robots, the decision to take a lethal action must remain human, and it is unethical to allow a machine to make such a critical choice. According to Arkin (2010), robots could do a better job than humans in making targeting decisions because they have no motive for revenge... but it is also true that, despite their ability, they cannot grasp the gravity of killing the "wrong" person.

In summary, although the decision-making process is supported by increasingly high-performance machines, the final decision should always be left to Man who, with all his weaknesses, has the ability to discern between good and evil, at least as long as he maintains his humanity (which is not always a given). The use of autonomous military drones should therefore be strictly regulated, like that of other weapons such as mines and biological and chemical weapons, to prevent the "servant" from turning against the "master", not out of malice (a machine has no feelings) but because of our imperfection. In an era in which technical evolution offers amazing tools, we should learn to use them to guarantee a future for our species... and prevent increasingly intelligent machines from understanding the weaknesses that make us human, and destroying us. On the other hand, one of the many little truths of the famous Murphy's laws states that "Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited."

Footnotes

1. Among the many conventions, I recall:

– The Hague Convention relating to the Laws and Customs of War on Land and its Annex, Netherlands, 18 October 1907

– The Geneva Conventions, Switzerland, 12 August 1949.

– The Convention on Certain Conventional Weapons (CCW), 10 April 1981

– The San Remo Manual on International Law Applicable to Armed Conflicts at Sea, 31 December 1995, San Remo, Italy

– The Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, Ottawa, Canada, 3 December 1997

2. Isaac Asimov's First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

3. Metcalfe's law states that the complexity of a telecommunications network is proportional to the square of the number of connected users minus the number itself. In practice, as the number of nodes increases, complexity increases. The law is named after Robert Metcalfe and was first proposed in 1980, although not in terms of users but rather of "compatible communications devices". In the figure (from Wikipedia) we see that two phones can make only one connection between them, five phones a total of twenty different ones, twelve phones 132, and 1,000 connected devices reach 999,000. Therefore, as the number of nodes (telephones) increases, complexity grows according to the law n² - n, where n is the number of nodes (devices).
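For completeness, a few illustrative lines of Python reproduce the figures quoted above: the values 20, 132 and 999,000 follow n² - n, which counts each link once per direction, while the number of distinct links between n devices would be n(n-1)/2.

```python
# Quick check of the figures quoted in footnote 3.
def ordered_connections(n: int) -> int:
    return n * n - n          # 20 for n=5, 132 for n=12, 999,000 for n=1000

def distinct_links(n: int) -> int:
    return n * (n - 1) // 2   # 1 for n=2, 10 for n=5, 66 for n=12

for n in (2, 5, 12, 1000):
    print(n, ordered_connections(n), distinct_links(n))
```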

4. In decision making, the different outlooks of the Western and Eastern worlds are anything but a matter of cultural interpretation, being based on two different decision-making concepts: the Western world thinks analytically (problem, analysis, possible solutions, final choice, following a linear decision-making logic), while the Eastern world observes every aspect of life (feng shui) in a circular manner, that is, a problem is analyzed from different angles to reach a possible solution. This means that two individuals with different backgrounds, called upon to decide on the same problem, may arrive at different timings and choices. The issue becomes sensitive when these people work on the same staff. An interesting reference on the topic is "The Geography of Thought" by Richard E. Nisbett.

References
Johnson, Aaron M., and Sidney Axinn, "The Morality of Autonomous Robots", Journal of Military Ethics, Vol. 12, Issue 2, pp. 129-141, 2013. https://www.academia.edu/10088893/The_Morality_of_Autonomous_Robots
Arkin, R. C., "The Case for Ethical Autonomy in Unmanned Systems", Journal of Military Ethics, Vol. 9, Issue 4, pp. 332-341, 2010.
Shattuck, Lawrence G., and Nita Lewis Miller, "Extending Naturalistic Decision Making to Complex Organizations: A Dynamic Model of Situated Cognition", 2006. https://faculty.nps.edu/nlmiller/docs/05_Shattuck_27_7_paged.pdf
Nisbett, Richard E., "The Geography of Thought: How Asians and Westerners Think Differently... and Why", Nicholas Brealey Publishing, London, 2003.
Bandioli, Marco, "Artificial intelligence for battlefields and security", Online Defense, 2019.

Images: Lallo Mari / web

(article originally published on https://www.ocean4future.org)