How the AI Act will regulate Artificial Intelligence in the EU and what impact it will have

25/01/24

"When the results are announced, Lee Sedol's eyes well up with tears. AlphaGo, an artificial intelligence (AI) developed by Google's DeepMind, has just achieved a 4-1 victory in the game of Go. It is March 2016. Two decades earlier, chess grandmaster Garry Kasparov had lost to the machine Deep Blue, and now a computer program had won against the eighteenth world champion Lee Sedol in a complex game that was believed to be played only by humans, using their intuition and strategic thinking. The computer won not by following the rules given to it by programmers, but by machine learning based on millions of past Go games and playing against itself. In this case, programmers prepare the data sets and create the algorithms, but they cannot know which moves the program will propose. Artificial intelligence learns by itself. After a series of unusual and surprising moves, Lee had to resign himself."

This extract from the book "AI Ethics" by Mark Coeckelbergh, a Belgian philosopher of technology, evokes in an explicit and striking way the encounter between man and his limits. After a series of victories over Sedol, AlphaGo wins the match. It is not the first time this has happened (Deep Blue, cited above, is one example), and yet, after a series of unusual and surprising moves, AlphaGo does what it was designed to do: it is the first time it has happened in this way.

And it is here that we grasp the essence of the Artificial Intelligence (AI) phenomenon: it is the first time it has happened this way.

All the media have covered the race towards Artificial Intelligence, especially after the "explosion" of ChatGPT. We have discussed it many times on Difesa Online as well, highlighting the unprecedented evolutionary push these solutions are experiencing.

Yet there is a parallel phenomenon, connected to and following from the first, which risks going unnoticed: the regulation of these systems.

Regulating Artificial Intelligence is by no means a simple challenge. Without going into the technicalities (or into the numerous legislative initiatives underway on the subject in Asian countries and the US), we can say that the European Union, after years of preparatory work, arrived in December 2023 at a semi-definitive version of what will become, once the technical discussions and the EU approval procedure are completed, the first European Regulation on Artificial Intelligence: the AI Act.

What is it?

The "AI Act" represents one of the first and most structured legislative documents at a global level aimed at regulating the use of Artificial Intelligence and mitigating the potential associated risks. The final text of the law will still have to be drafted by the competent professionals and subjected to a final revision. It is expected that, if everything proceeds without obstacles, the entry into force will take place in the next two years.

The Regulation focuses mainly on safeguarding individual rights and freedoms, establishing the obligation for companies developing AI solutions to demonstrate that their products and their development process do not put people at risk or compromise their integrity. The act covers several spheres of application of Artificial Intelligence, but its main points include restrictions on biometric identification systems and transparency obligations regarding the technological systems behind chatbots like ChatGPT.

A fundamental element of the Regulation is its risk classification system, an approach already adopted in other similar legislation (most notably the GDPR), which designates AI systems with particular characteristics as "high risk" and subjects them to rigorous compliance obligations. This provision will be a major challenge for companies and institutions involved in creating, commissioning or using such systems.

Examples of high-risk systems cited in the Regulation include those used in crucial sectors such as healthcare, transport, justice and public safety. For such systems, high standards of security, transparency and reliability will be required. In particular, for high-risk systems the AI Act establishes a number of stringent requirements, including the following (a schematic example follows the list):

  • Risk Assessment and Compliance: AI systems must be designed and developed taking into account a thorough risk assessment. This includes implementing measures to manage and minimize these risks.
  • Transparency and User Information: AI systems should be transparent. Users should be informed when they are interacting with an AI system, and given sufficient information about how the system works and makes decisions.
  • Human Supervision: The AI Act emphasizes the importance of human oversight in AI systems, especially for high-risk ones. This could have significant implications for military use of AI, where automated decision-making in critical situations may be limited or require explicit human supervision.
  • Data Quality: The AI Act requires that the data used by AI systems be managed so as to ensure the highest quality, reducing the risk of bias and ensuring that the decisions made are accurate and reliable.
  • Safety and Robustness: AI systems must be safe and robust against attempts at manipulation or misuse (a particularly critical aspect, to put it mildly, in the military context).
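To make the shape of these obligations concrete, the following is a minimal, purely illustrative Python sketch of how a compliance team might model the Act's risk tiers and attach the requirements above to a given use case. The tier names, the use-case mapping and the `obligations_for` helper are all assumptions made for illustration; the Regulation defines its categories in articles and annexes, not in lookup tables.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk: strict obligations apply"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"

# Hypothetical mapping of use cases to tiers; the real classification
# is set out in the Regulation's annexes, not in a table like this.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_triage": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Simplified checklist mirroring the five requirements listed above.
HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation measures",
    "transparency and user information",
    "human oversight mechanisms",
    "data quality and bias controls",
    "safety and robustness against misuse",
]

def obligations_for(use_case: str) -> list[str]:
    """Return the illustrative obligations attached to a declared use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case}: prohibited practice under the AI Act")
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["disclose to users that they are interacting with an AI system"]
    return []

print(obligations_for("medical_triage"))
```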

A further requirement of particular interest is the one introduced from an ethical perspective: the AI Act requires a genuine "ethical assessment" (the Fundamental Rights Impact Assessment, FRIA) to be conducted for high-risk systems. There are also numerous points of contact with the security of personal data processing, to which the text of the Regulation refers several times when addressing the analysis of risks and impacts on the natural persons involved in the operation of these systems and of the risks to their data.

Furthermore, some practices are prohibited outright. For example, the use of AI to analyse sensitive biometric data, such as recognizing people on the basis of characteristics like sexual orientation, ethnicity, religion or political views, will be prohibited. This provision aims to prevent discrimination and abuse. The indiscriminate use of images scraped from the Internet or from CCTV footage to train facial recognition systems will also be prohibited, in order to protect people's privacy and prevent mass surveillance.

Law enforcement agencies will be authorized to use biometric recognition systems (over which an important battle was fought within the EU, after the Clearview case) only in exceptional circumstances, such as the threat of an imminent terrorist attack or the search for individuals suspected of serious crimes or victims of serious crimes.

The AI Act prohibits the use of AI to recognize people's emotions in workplaces and schools, in order to preserve individuals' emotional freedom. It also prohibits the practice of social scoring, that is, the use of AI to assign scores based on people's behavior or characteristics that lead to restrictions on, or the granting of, civil rights. This measure aims to prevent the explicit or implicit manipulation of people's behavior.
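Purely as an illustration of how such prohibitions translate into practice, a pre-deployment screen might look like the sketch below; the category labels are paraphrases of the practices described above, not the Regulation's own wording.

```python
# Paraphrased categories of the prohibited practices described above;
# the AI Act's actual definitions are longer and far more precise.
PROHIBITED_PRACTICES = {
    "biometric categorisation by sensitive traits",
    "untargeted scraping of facial images for recognition databases",
    "emotion recognition in workplaces or schools",
    "social scoring",
}

def screen_intended_purpose(purpose: str) -> bool:
    """Return True if the declared purpose may proceed to risk classification."""
    if purpose in PROHIBITED_PRACTICES:
        print(f"BLOCKED: '{purpose}' is a prohibited practice")
        return False
    print(f"OK: '{purpose}' proceeds to risk-tier classification")
    return True

screen_intended_purpose("social scoring")
screen_intended_purpose("spam filtering")
```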

The Regulation establishes that high-impact AI systems with large computing capabilities, such as OpenAI's GPT-4, must guarantee transparency about the training process and share the technical documentation of the materials used before being placed on the market. These models will also be required to make the content they generate recognisable as such, in order to prevent fraud and disinformation and to protect the copyright of the works created.
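The Act does not prescribe how generated content must be made recognisable; watermarking and machine-readable labels are among the techniques discussed. The sketch below shows the labelling idea in its simplest hypothetical form, with every field name invented for illustration.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap model output in a machine-readable provenance envelope.

    One hypothetical way of making AI-generated content recognisable;
    watermarking the text itself would be an alternative approach.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(label_generated_content("Example output.", "hypothetical-model"), indent=2))
```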

In terms of sanctions, companies that fail to comply with these rules can face fines of up to 7% of their global turnover.
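To give a sense of scale, here is the cap worked through for an invented turnover figure:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of a sanction under the Act: 7% of global annual turnover."""
    return global_turnover_eur * 0.07

# A hypothetical company with EUR 2 billion in global turnover
# risks a fine of up to EUR 140 million.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")
```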

What is undoubtedly very interesting is what the current text provides on the use of Artificial Intelligence systems in the military field.

The text expressly provides (Art. 2, par. 3) that the AI Act does not apply directly to artificial intelligence systems developed or used for military purposes. However, the standards and rules set by the AI Act can indirectly influence how EU member states and companies operating in Europe develop and deploy AI systems for military applications, including aspects such as transparency, human oversight and system security. Furthermore, the AI Act could serve as a model for future military-specific regulations (or standards), both at the EU level and globally.

The increasing integration of artificial intelligence into defense requires a regulatory framework that balances innovation with ethical responsibility and security. Collaboration between countries, institutions and industries will be key to ensuring that the military use of AI develops responsibly and in accordance with international principles of humanitarian law.

The AI Act will be applied in several phases, the most substantial of which is realistically expected in the second half of 2026. Significant preparatory work is therefore needed on the design of systems currently under development, so that the resulting AI complies with the requirements set out by the European Union.

Andrea Puligheddu (Lawyer and expert in law and new technologies)

Note: the article necessarily has an informative slant, and does not delve into the numerous (and complex) issues on the table in relation to the development and application of the AI Act. For a more in-depth discussion of the topic, please write to me at: Puligheddu@studiolegaleprivacy.com

(www.studiolegaleprivacy.com)