Artificial Intelligence and Law Enforcement: technology and ethics of the AIDA project

(To Maris Matteucci)

More and more frequently, Artificial Intelligence (AI) is used by law enforcement agencies to combat crimes as diverse as cybercrime and terrorism. But how do citizens perceive this promising yet imperfect technology when it is used in the security and defense sector?

The objectives of "AIDA"

The issue of crime prevention is particularly dear to AIDA, which, however, aims at something even more subtle with its European research project, Artificial Intelligence and advanced Data Analytics for Law Enforcement Agencies (AIDA): the prediction of crimes. Something that may sound familiar to fans of the movie "Minority Report", with a very young Tom Cruise as Captain John Anderton and a system, "Precrime", so (apparently) perfect that the police could stop crimes before they actually happened.

In short, an ambitious goal: preventing crimes by predicting them through sophisticated Machine Learning and Artificial Intelligence algorithms capable of probing and analyzing information thanks to a descriptive and predictive data analysis platform. But to achieve this result, which depends on far more than the mere development of technologies, however innovative, AIDA has decided to work on a multitude of aspects. One of these focuses on the social component: what would people think of such an extensive use of Artificial Intelligence in the service of the police forces?

The importance of what is at stake prompted the European Union to finance the AIDA research project with almost eight million euros within the Horizon 2020 program (grant agreement no. 883596). The project officially started in September 2020 and will end in September 2022, exactly 24 months later.

In Italy the project is led by Engineering Ingegneria Informatica, with Pluribus One, based in Cagliari, at the helm of the Work Packages focused on building AI systems for the management and acquisition of information and for the analysis of criminal groups.

The AIDA project aims to develop a data analysis platform and a set of ad hoc tools to effectively combat criminal activities, without neglecting the social component: the impact that such a massive use of artificial intelligence by the guardians of public order could have on citizens.

For this reason, the project planned a first quantitative survey followed by a qualitative one, which probed more deeply into people's thinking, trying to capture the ideas, fears, doubts and everything else that comes to mind when Artificial Intelligence is brought into play in the security field.

A first partial analysis of the quantitative surveys (which we covered in a previous article), completed by 2,850 people in 11 different European countries, highlighted, for example, that citizens in Estonia, Spain and England in particular place full trust in the police forces and their work. The results were presented by the partner CENTRIC, a center of excellence in crime and counter-terrorism research based at Sheffield Hallam University (UK), at the IKE'21 conference (20th Int'l Conf on Information & Knowledge Engineering)1.

AIDA's goal is to provide law enforcement agencies with a system and tools that help increase global security, but not without considering the impact that a massive use of AI could have on ordinary people.

That is why the development of the platform and related tools will be carried out with respect for privacy and will take into account the ethical component, which should never be ignored.

The qualitative survey

The survey on the AI used by the police forces involved ten European nations, for a total of about 140 interviews with ordinary citizens.

Germany, Greece, the Netherlands, England, Spain, Portugal, Italy, the Czech Republic, Estonia and Romania actively participated in this valuable part of the research, which will carry significant weight in understanding how people perceive an increasingly disruptive technology.

Video cameras, drones and online monitoring are now part of everyday life. But do these technologies make citizens uncomfortable? Or, conversely, do they feel safer when these technologies are used? And does the fact that the police can use them represent an advantage or a disadvantage when it comes to privacy?

These questions were put to the participants, who were thus able to explain their feelings, also expressing doubts and possible reservations about a use of Artificial Intelligence that may not be free of critical issues.

The data that emerged from the block of interviews conducted in Italian among the over-65s highlighted some interesting aspects: there are different nuances in how people perceive the Artificial Intelligence used by the police forces. They range from those who trust, and would trust blindly, even in the case of a more "stringent" use of this technology, to those who instead harbor more than one reservation and bring up decidedly strong terms such as a police-state scenario; they would trust the police, yes, but with caution.

And again, there are those who already feel annoyed by a certain kind of interference in everyday life (video surveillance, online monitoring), and those who, conversely, feel somehow safer precisely because they are being "watched".

The data, collected nation by nation, will be analyzed and processed by CENTRIC and disseminated by EUROPOL, the European Police Office, a project partner responsible for disseminating the research results obtained within AIDA.

The issue of crime prediction is therefore more topical than ever, but there is still much work to be done in this direction. With all due respect to fans of "Minority Report", who will probably have to wait a little longer before seeing a "Precrime"-style platform materialize.

1 The scientific article (P. S. Bayerl, B. Akhgar, E. La Mattina, B. Pirillo, I. Cotoi, D. Ariu, M. Mauri, J. Garcìa, D. Kavallieros, A. Kardara, K. Karagiorgou, "Strategies to Counter Artificial Intelligence in Law Enforcement: Cross-Country Comparison of Citizens in Greece, Italy and Spain") will be published by Springer as a conference proceeding.