Is it true that we will soon have a robot judge?

(To Enrico Priolo)
21/02/22

Man and machine. A pairing we have by now grown used to, yet one that unsettles us every time it is uttered. The underlying dilemma is how much room for maneuver man should concede to the machine.

One of the fields of knowledge in which the debate is liveliest is undoubtedly law, in particular that of so-called automated judicial decisions.

Is it true that we will have a robot judge very soon? To answer, we must first clarify what is meant, in general, by forecast and prediction.

There are at least four situations in which the law and its operators (jurists and legislators) grapple with "forecasting", that is, with the need or ability to anticipate and evaluate what will happen in the future. Let us examine them.

1) The normative provision. In the lexicon of jurists, the expression "normative provision" often indicates the abstract situation that the legislator imagines and to whose occurrence the law attaches certain consequences. In certain contexts it coincides with the so-called "abstract case".

The concept of prediction is therefore inherent in that of the norm: the norm's task is to prefigure a possible future situation. When we interpret a normative sentence we are led, on the one hand, to imagine the factual circumstances in which it can be applied and, on the other, to ask ourselves the reason for that provision, trying to identify the reasons that prompted the legislator to make, or not to make, certain choices.

2) The predictability of the legal system's response: legal certainty.

Predicting the outcome of a dispute belongs to a perspective connected with what has just been said.

The judgment marks the passage from the abstract "normative provision" to the justice of the individual case to which that provision is applied. It is the moment in which the concrete case is fitted to the abstract case according to a syllogistic model of reasoning. The idea of a "calculable law" rests on the belief that the outcome of any dispute must be "predictable". Precisely this assumption underpins one of the pillars of our legal civilization: "legal certainty". Faced with a given problem, the legal system must always provide the same answer, because only what is predictable is certain.

3) The prediction of the effects of regulation.

Taking the viewpoint of regulators and legislators (and of the jurists who collaborate with them), it should be remembered that, for some years now, ever greater emphasis has been placed on the need to "foresee" the effects of rules and regulations: rules should be issued only if, at the end of an adequate inquiry, it is reasonably certain that they will produce the desired and expected effects.

It is therefore necessary to be reasonably able to "predict":

a) how the members of society will react to the new rules (whether or not they will adopt the desired and/or imposed behaviors);

b) whether the effects produced by the new rules will really lead to the achievement of the desired objectives.

4) The prediction/predictivity of artificial intelligence.

The new frontier is represented by the predictive capabilities of artificial intelligence, though it would be more accurate to speak of "data science" and "data mining" applied to the world of law ("legal analytics"). Leaving aside the well-known US Loomis case (in which the COMPAS software appeared to have been delegated the task of predicting Mr. Loomis' propensity for recidivism), what is meant here is the ability to elaborate predictions by means of a probabilistic calculation carried out by algorithms operating on a purely statistical basis or on a logical basis.

Legal analytics can be used to predict the outcome of a judgment. 

In 2016, for example, a study was carried out which, thanks to advances in natural language processing and machine learning, aimed to build predictive models capable of unraveling the patterns that guide judicial decisions. The work predicted the outcome of cases decided by the European Court of Human Rights on the basis of their textual content: the forecast was successful in 79% of cases. More generally, legal analytics can be used to predict the behavior of all the actors in the legal system. Lex Machina, a LexisNexis company, combines data and software to create datasets on judges, lawyers, parties and litigation subjects, analyzing millions of pages of litigation information. With this data, lawyers can predict the behaviors and outcomes that the various possible legal strategies will produce.
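The idea behind such text-based prediction can be illustrated, in deliberately toy form, with a bag-of-words classifier trained on short case summaries. Everything below (the summaries, the labels, the smoothing constant) is invented for illustration; the actual 2016 study used far richer textual features and a support vector machine, not this naive scoring.

```python
import math
from collections import Counter


def tokenize(text):
    return text.lower().split()


def train(cases):
    """Build per-label word counts from (text, label) pairs."""
    counts = {}
    for text, label in cases:
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts


def predict(counts, text, alpha=1.0):
    """Naive-Bayes-style scoring: sum of log-smoothed word
    frequencies under each label; return the best-scoring label."""
    tokens = tokenize(text)
    vocab = set().union(*counts.values())
    best_label, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values()) + alpha * len(vocab)
        score = sum(math.log((c[t] + alpha) / total) for t in tokens)
        if score > best_score:
            best_label, best_score = label, score
    return best_label


# Invented toy "case summaries" standing in for full judgment texts
cases = [
    ("applicant held in detention without judicial review", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint about tax assessment dismissed as manifestly ill-founded", "no-violation"),
    ("property dispute resolved fairly by domestic courts", "no-violation"),
]
model = train(cases)
outcome = predict(model, "detention without review of lawfulness")  # "violation"
```

The point of the sketch is only that the model learns surface correlations between vocabulary and outcome; it encodes no legal reasoning whatsoever, which is exactly the limitation discussed below.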

Legal analytics thus aims to predict the outcome of proceedings: not on the basis of rigorous, mechanical legal reasoning, but in the light of sophisticated algorithmic and statistical analysis of enormous amounts of data (big data).

It is one thing to hypothesize the possible orientations of a court, of judges, of practitioners. It is quite another to predict with certainty the outcome of a single judgment. To achieve that, we would need algorithms capable of governing uncertainty and unpredictability. And, in any case, the ethical problem would remain of whether it is legitimate to entrust a legal decision to this type of algorithm.

Regarding this last aspect, it is worth recalling the work done by the European Commission for the Efficiency of Justice (CEPEJ), which adopted the European Ethical Charter on the use of artificial intelligence in judicial systems and their environment. The Charter, issued in 2018, established five key principles on the use of artificial intelligence in the justice system.

First, let us see what Europe means by artificial intelligence:

A set of scientific methods, theories and techniques aimed at reproducing the cognitive abilities of human beings by means of machines. Current developments seek to have machines perform complex tasks previously performed by humans. The expression "artificial intelligence" is nonetheless criticized by experts, who distinguish between "strong" artificial intelligence (capable of contextualizing specialized problems of various kinds in a completely autonomous way) and "weak" or "moderate" artificial intelligence (high performance within its training area). Some experts argue that "strong" artificial intelligence, in order to model the world in its entirety, would require significant advances in basic research, and not just simple improvements in the performance of existing systems. The tools mentioned in this document are developed using machine learning methods, i.e., "weak" artificial intelligence.

And what it means by predictive justice:

Predictive justice means the analysis of large numbers of judicial decisions using artificial intelligence technologies in order to formulate predictions on the outcome of certain types of specialized disputes (for example, those relating to severance payments or maintenance payments). The term "predictive", as used by legal tech companies, is borrowed from the branches of science (mainly statistics) that make it possible to predict future results through inductive analysis. Judicial decisions are processed in order to discover correlations between the input data (criteria established by law, facts of the case, reasoning) and the output data (the formal decision, relating for example to the amount of compensation). Correlations deemed relevant allow the creation of models which, when fed with new input data (new facts or clarifications introduced in the form of parameters, such as the duration of the contractual relationship), produce, according to their developers, a forecast of the decision.
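That input-to-output correlation can be sketched in its most minimal form: an ordinary least-squares line relating one parameter mentioned above, the duration of the contractual relationship, to the compensation awarded. Real systems use many parameters and thousands of decisions; the figures here are purely illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (a single explanatory parameter)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b


# Invented training data: contract duration (years) -> compensation awarded (EUR)
durations = [1, 2, 3, 4]
awards = [2000, 4000, 6000, 8000]

a, b = fit_line(durations, awards)
forecast = a + b * 5  # "new input data": a 5-year relationship -> 10000.0
```

Note what the model does not contain: no statute, no precedent, no reasoning; only a correlation between two numbers, which is precisely the criticism raised by the authors discussed next.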

Some authors have criticized this approach on both formal and substantive grounds, arguing that, in general, the mathematical modeling of certain social phenomena is not comparable to other, more easily quantifiable activities (isolating the factors that truly cause a judicial decision is an infinitely more complex task than playing, say, a game of Go or recognizing an image): the risk of spurious correlations is much higher. Furthermore, in legal doctrine two contradictory decisions can both prove valid, provided the legal reasoning is well founded. Consequently, the formulation of forecasts would be an exercise of a purely indicative nature, without any prescriptive claim.

Having defined the terms, let us see what basic principles the CEPEJ established.

1) PRINCIPLE OF RESPECT FOR FUNDAMENTAL RIGHTS:

ensure that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights. When artificial intelligence tools are used to resolve a dispute, to provide support in judicial decision-making, or to guide the public, it is essential to ensure that they do not undermine the guarantees of the right of access to a judge and the right to a fair trial (equality of arms and respect for the adversarial principle).

This means that, from the design and learning stages onward, there should be explicit rules prohibiting direct or indirect violation of the fundamental values protected by supranational conventions.

Human rights by design.

2) PRINCIPLE OF NON-DISCRIMINATION:

specifically prevent the development or intensification of any discrimination between individuals or groups of individuals. Given the ability of these processing methods to reveal existing discrimination through the grouping or classification of data relating to individuals or groups, public and private actors must ensure that the methodologies do not reproduce or aggravate such discrimination and that they do not lead to deterministic analyses or uses.

The method must be NON-discriminatory.

3) PRINCIPLE OF QUALITY AND SAFETY:

with regard to the processing of judicial decisions and data, use certified sources and intangible data, with models developed in a multidisciplinary manner, in a secure technological environment. The makers of machine learning models should be able to draw widely on the expertise of the relevant justice professionals and of researchers in law and the social sciences. Setting up mixed project teams working in short processing cycles to produce functional models is one organizational method for getting the best out of this multidisciplinary approach.

The more we design, the better.

4) PRINCIPLE OF TRANSPARENCY, IMPARTIALITY AND FAIRNESS:

make data processing methods accessible and understandable, and authorize external audits. A balance must be struck between the intellectual property of certain processing methodologies and the need for transparency (access to the design process), impartiality (absence of bias), fairness and intellectual integrity (prioritizing the interests of justice) when tools are used that may have legal consequences or significantly affect people's lives. These measures apply to the whole design process as well as to the operational chain, since the selection methodology and the quality and organization of the data directly influence the learning phase.

Artificial Intelligence must be able to be verified by third parties.

5) PRINCIPLE OF "CONTROL BY THE USER":

preclude a prescriptive approach and ensure that users are informed actors in control of their choices. The use of artificial intelligence tools and services must strengthen, not limit, the autonomy of the user. The user must be informed, in clear and understandable language, of whether the solutions proposed by the artificial intelligence tools are binding, of the different options available, and of his or her right to legal assistance and to access a court. The user must also be clearly informed of any prior processing of a case by artificial intelligence, before or during judicial proceedings, and must have the right to object, so that his or her case can be heard directly by a court within the meaning of Article 6 of the ECHR.

Be properly informed, so as to remain in control of your choices.

Conclusions:

On closer inspection, the principles laid down by the CEPEJ point to a way forward, which can be summarized (adapting it to the judicial context) by a notion developed during the international debate within the UN on autonomous weapons. Since it is impossible to determine the computational state of an artificial intelligence tool, and hence to exercise complete control over the execution of the predictive algorithm, the remedy for any alteration of the "correctness and equality of the dispute between the parties, and between them and the judge" should be a reinforced requirement that the predictive decision not rest solely on the purely probabilistic results obtained, not least because compliance with such a requirement is not always adequately verifiable.

We refer here to the doctrinal suggestion that the use of the machine in court should be made subject to meaningful human control, represented by the following essential conditions:

1) that its operation is made public and evaluated in accordance with the criteria of peer review;

2) that the potential error rate is known;

3) that adequate explanations translate the "technical formula" constituting the algorithm into a legal rule, so as to make it legible and understandable by the judge, the parties and their counsel;

4) that adversarial debate is safeguarded on the choice of the archived elements, on their groupings and on the correlations among the data processed by the artificial intelligence system, particularly in relation to the subject matter of the dispute;

5) that their acceptance by the judge is justified in light of what emerged in court and of the factual circumstances, assessed according to the principle of free conviction.
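Condition 2, a known error rate, is the most directly quantifiable of the five: it amounts to comparing a tool's past predictions against the actual outcomes of the same decisions. A minimal sketch, with invented labels and figures purely for illustration:

```python
from collections import Counter


def error_report(predicted, actual):
    """Return the error rate and a (predicted, actual) -> count
    confusion table for a batch of past decisions."""
    pairs = list(zip(predicted, actual))
    errors = sum(1 for p, a in pairs if p != a)
    return errors / len(pairs), Counter(pairs)


# Invented evaluation data for a hypothetical outcome-prediction tool
predicted = ["violation", "violation", "no-violation", "violation"]
actual    = ["violation", "no-violation", "no-violation", "violation"]

rate, confusion = error_report(predicted, actual)  # rate is 0.25 here
```

The confusion table, broken down by outcome, matters as much as the aggregate rate: a tool whose errors concentrate on one class of litigant is exactly what the non-discrimination principle above is meant to catch.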


Sitography:

https://rm.coe.int/carta-etica-europea-sull-utilizzo-dell-intelligenza-artificiale-nei-si/1680993348

https://teseo.unitn.it/biolaw/article/view/1353

https://www.agendadigitale.eu/cultura-digitale/predire-il-futuro-fra-machine-learning-e-magia/

https://archiviodpc.dirittopenaleuomo.org/d/6735-sistema-penale-e-intelligenza-artificiale-molte-speranze-e-qualche-equivoco

S. QUATTROCOLO, Equity of the criminal trial and automated evidence in the light of the European convention of human rights, in Rev. italo-española der. proc., 2019, available on http://www.revistasmarcialpons.es/rivitsproc/

https://www.europarl.europa.eu/RegData/etudes/STUD/2020/656295/IPOL_STU(2020)656295_EN.pdf

https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_it.pdf

https://archiviodpc.dirittopenaleuomo.org/upload/3089-basile2019.pdf

REFERENCES:

S. RUSSELL - P. NORVIG, Artificial Intelligence. A Modern Approach, Pearson.

Y.N. HARARI, Homo Deus: A Brief History of Tomorrow, Vintage Publishing.

G.W.F. HEGEL, Outlines of the Philosophy of Right, 1821, It. trans., Bari, 1996.

G. ROMANO, Law, robotics and game theory: reflections on a synergy, in G. Alpa (edited by), Law and artificial intelligence, Pisa, 2020

G. TAMBURRINI, Ethics of machines. Moral dilemmas for robotics and artificial intelligence, Rome, 2020

U. RUFFOLO - A. AMIDEI, Artificial Intelligence, human enhancement and human rights, in Artificial Intelligence. Law, rights, ethics, already anticipated in U. RUFFOLO - A. AMIDEI, Artificial Intelligence and human rights: the frontiers of "transhumanism", in Giur. It., 2019

J. LASSÈGUE, Digital Justice. Révolution graphique et rupture anthropologique, Paris, 2018

J. NIEVA-FENOLL, Artificial intelligence and process, 2018, trans. it., Turin, 2019

S. SIGNORATO, Criminal justice and artificial intelligence. Considerations on the subject of predictive algorithm, in Riv. dir. proc., 2020

Photos: www.golegal.co.za