Artificial Intelligence

Written on 01/07/2023
Philippe BRNB


Artificial intelligence (or AI) is increasingly present in our daily lives, particularly through new products and services. However, it relies on data-intensive algorithms, often involving personal data, and its use requires certain precautions to be taken.



What is artificial intelligence?

For the European Parliament, artificial intelligence is any tool used by a machine to "reproduce human-related behaviors, such as reasoning, planning and creativity". This definition could be broadened to include behaviors that exceed human capabilities, since today's computers surpass humans at certain tasks (although a computer's competence generally stops at the execution of that one task). For example, the AlphaGo AI system, capable of beating Go champion Lee Sedol, is very good at Go strategy, but its abilities stop there: it cannot play chess or perform any other task until it has been trained to do so. Any system that implements mechanisms close to those of human reasoning can thus be described as artificial intelligence.

Why is it important to recognize the presence of AI?
As with any new technology, systems based on artificial intelligence are still subject to failures and attacks, and can have unsuspected impacts on individuals and society. Without calling into question the advantages that these systems can offer, it is nonetheless vital to be aware of the risks to which they expose users.

Firstly, just like humans, they are prone to error, whether due to a fault or to discrimination built into the tool: this is known as bias. On this point, the General Data Protection Regulation states that "the data subject shall have the right not to be subject to a decision based solely on automated processing [...] producing legal effects concerning him or her or significantly affecting him or her in a similar way" (Article 22 of the GDPR).
In other words, humans must keep the upper hand, a principle developed by the CNIL in a report on the ethical challenges of artificial intelligence, which addresses issues of autonomy and automated decision-making. All individuals have the right to object to certain automated processes that do not involve human intervention in the decision-making process.



Why does AI make mistakes?
Given the complexity of AI systems, there can be many sources of error.

System design errors
First and foremost, there are errors in system design, which can have several causes.
Lack of representativeness
If certain real-life cases have not been taken into account in the training data, we speak of a lack of representativeness.
For example, some facial recognition algorithms have been trained on datasets with insufficient numbers of people of certain ethnic origins.
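As a minimal illustration, the sketch below compares the observed share of each group in a hypothetical training set against assumed reference shares for the target population; the group labels, counts and the 50% alert threshold are all invented for the example.

    from collections import Counter

    # Hypothetical group label for each training example; a real audit
    # would draw these from the dataset's documentation or metadata.
    training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50

    # Assumed shares of each group in the population the system will serve.
    expected_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

    counts = Counter(training_groups)
    total = sum(counts.values())

    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        status = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
        print(f"group {group}: observed {observed:.0%}, expected {expected:.0%} -> {status}")

Here group C accounts for 5% of the training data against an expected 15%, so the check flags it: a system trained on this data is likely to perform worse for that group.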
An approximate hypothesis
As mathematical abstractions, algorithms are based on assumptions, some of which may prove too approximate.
For example, the algorithms used to evaluate teacher performance in the USA have been the cause of many complaints, as the assumption that student grades were direct evidence of a teacher's performance was too simplistic.
The wrong criteria
When an algorithm is trained, it is evaluated on the completion of a task according to certain criteria, or metrics. The criteria and decision threshold chosen have important consequences for the quality of the final system.
For example, a low threshold actually corresponds to a higher error rate deliberately accepted by the system designer. In the case of a medical diagnostic algorithm, the main aim is to avoid false negatives, since in the event of a false positive it is always possible to carry out further tests. We may therefore choose to lower the decision threshold, accepting a higher number of false positives, if this allows us to reduce the number of false negatives.
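To make the trade-off concrete, the sketch below counts false positives and false negatives at two decision thresholds, using invented scores and labels for ten hypothetical patients; nothing here comes from a real diagnostic system.

    # Hypothetical classifier scores for ten patients, with the true
    # condition of each one (1 = sick, 0 = healthy).
    scores = [0.95, 0.80, 0.65, 0.55, 0.45, 0.40, 0.30, 0.20, 0.15, 0.05]
    labels = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

    def confusion_at(threshold):
        """Count false positives and false negatives at a given threshold."""
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        return fp, fn

    for threshold in (0.6, 0.4):
        fp, fn = confusion_at(threshold)
        print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

Lowering the threshold from 0.6 to 0.4 here raises the number of false positives from 0 to 2 but eliminates the single false negative, which is exactly the trade-off a designer of a medical system may accept deliberately.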
Errors linked to conditions of use
Errors can also occur as a result of the conditions under which the AI system is used.
Poor data quality
The quality of the data supplied to the system during use affects its performance.
For example, this can be observed when using a voice assistant in a noisy environment: the assistant's comprehension is diminished.
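A toy sketch of this effect, using an entirely synthetic one-dimensional task and a fixed decision rule, is given below; the noise levels and the 0.5 cut-off are arbitrary choices made for the illustration.

    import random

    random.seed(0)

    # Synthetic task: class 0 values center on 0.0, class 1 values on 1.0,
    # and the noise standard deviation stands in for input quality.
    def make_samples(n, noise_std):
        samples = []
        for _ in range(n):
            label = random.randint(0, 1)
            value = label + random.gauss(0, noise_std)
            samples.append((value, label))
        return samples

    def accuracy(samples):
        # Fixed rule: predict class 1 whenever the value exceeds 0.5.
        correct = sum(1 for value, label in samples if (value > 0.5) == bool(label))
        return correct / len(samples)

    for noise_std in (0.1, 0.5, 1.0):
        print(f"input noise {noise_std}: accuracy {accuracy(make_samples(1000, noise_std)):.0%}")

As the noise grows, the same decision rule makes more and more mistakes, mirroring the voice assistant whose comprehension degrades in a noisy room.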
Hardware-related faults or constraints
When the system is dependent on physical components such as sensors, the quality of the system's output will depend on the state of these components.
For example, a video surveillance system for detecting incivilities may be more prone to errors if deployed on a fleet of cameras with insufficient resolution.
Other risks of failure
Finally, like all complex systems, artificial intelligence systems are not exempt from the classic failures of IT systems, which can occur in the physical infrastructures where calculations are performed, during information communication, or even as a result of human error.
Where artificial intelligence systems differ from more conventional computer systems is in the difficulty of identifying the source of a problem: this is the issue of explainability. In particular, in so-called "deep" systems such as neural networks, the number of parameters involved often makes it impossible to understand where an error comes from. To limit this risk, it is advisable to retain certain data useful to the system for an appropriate length of time: this is called traceability.
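As a rough illustration of traceability, the sketch below wraps a stand-in prediction function so that each decision is logged with a timestamp, a model version and a hash of the input; the function names, record fields and file path are all hypothetical, and what to retain, and for how long, would need to be decided case by case.

    import hashlib
    import json
    from datetime import datetime, timezone

    # Stand-in for a real model; a deployed system would call its own model here.
    def model_predict(features):
        return {"label": "positive", "score": 0.87}

    def predict_with_trace(features, model_version="1.2.0", log_path="predictions.log"):
        """Run a prediction and record the elements needed to audit it later."""
        prediction = model_predict(features)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            # Hash the input rather than storing raw personal data,
            # in keeping with data-minimization principles.
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            "output": prediction,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return prediction

    print(predict_with_trace({"age": 42, "blood_pressure": 130}))

Keeping such records does not explain the model's internal reasoning, but it makes it possible to reconstruct which version of a system produced which decision when an error has to be investigated.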



Where does the CNIL come in?

CNIL support
The CNIL is keeping a close eye on the development of these new tools: first, as part of its support role, by providing useful advice to public authorities, researchers and companies; second, through the inspections it carries out on systems that have actually been deployed; and lastly, through monitoring activities aimed, for example, at identifying new modes of attack or biases leading to unlawful data processing.


New European frameworks to come
Several regulatory frameworks aimed at specifying the conditions of use of artificial intelligence are currently being drawn up at European level.
Firstly, the ePrivacy Regulation (an evolution of the current ePrivacy Directive) will specify which rules of the GDPR will be applicable in protecting citizens' online privacy. This text could have major consequences for artificial intelligence players offering electronic communication services.
The Digital Markets Act (DMA), the Digital Services Act (DSA) and the Data Governance Act (DGA) will frame the market for large digital platforms. The DSA, in particular, aims to make platforms more transparent and accountable to users, and could also have consequences for platforms using recommendation algorithms. More recently introduced, the Data Act aims to facilitate data exchanges within Europe.

Finally, the Regulation on Artificial Intelligence (RIA), proposed by the European Commission in April 2021, takes a risk-based approach to framing the uses of artificial intelligence systems while facilitating the emergence of innovative solutions that respect people's rights and freedoms. Together with its European counterparts, the CNIL has expressed its views on this text and has positioned itself to assume the role of supervisory authority in charge of applying the regulation in France.