In recent decades, algorithms have become increasingly complex, particularly with the introduction of deep learning architectures. This has gone hand in hand with a growing difficulty in explaining their internal functioning, which has become an important issue, both legally and socially. Winston Maxwell, a legal researcher, and Florence d’Alché-Buc, a researcher in machine learning, both at Télécom Paris, describe the current challenges involved in the explainability of algorithms.
What skills are required to tackle the problem of algorithm explainability?
Winston Maxwell: In order to know how to explain algorithms, we must draw on different disciplines. Our multi-disciplinary team, AI Operational Ethics, focuses not only on mathematical, statistical and computational aspects, but also on sociological, economic and legal aspects. For example, we are working on an explainability system for image recognition algorithms used, among other things, for facial recognition in airports. Our work therefore encompasses these different disciplines.
Why are algorithms often difficult to understand?
Florence d’Alché-Buc: Initially, artificial intelligence relied mainly on symbolic approaches, i.e., it simulated the logic of human reasoning. Systems built on logical rules, known as expert systems, allowed artificial intelligence to make a decision by applying those rules to observed facts. This symbolic framework made AI more easily explainable. Since the early 1990s, AI has increasingly relied on statistical learning, such as decision trees or neural networks, because these structures allow for better performance, learning flexibility and robustness.
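To make the contrast concrete, here is a minimal, hypothetical sketch (not from the interview) of what an expert-system-style decision looks like: the rules are written by humans, so every output can be traced back to the rule that produced it. The loan_decision function and its thresholds are invented purely for illustration.

```python
# Hypothetical rule-based decision, in the style of a symbolic expert system.
# Every outcome points back to an explicit, human-written rule.

def loan_decision(income: float, has_defaulted: bool) -> str:
    if has_defaulted:
        return "reject (rule 1: past default)"        # Rule 1: any past default
    if income > 30_000:
        return "approve (rule 2: income threshold)"   # Rule 2: income above threshold
    return "refer to human reviewer (default rule)"   # Fallback rule

print(loan_decision(income=45_000, has_defaulted=False))
# -> approve (rule 2: income threshold)
```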
This type of learning is based on statistical regularities, and it is the machine that establishes the rules for exploiting them. The human provides the inputs and an expected output, and the rest is determined by the machine. A neural network is a composition of functions. Even if we can understand each function that composes it, their accumulation quickly becomes complex. The result is a black box, in which it is difficult to know what the machine is calculating.
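As a rough illustration of this "composition of functions" point, the sketch below assembles a tiny two-layer network by hand (the weights, sizes and layer names are arbitrary illustrative choices, not taken from the interview): each layer is a simple, readable function, yet their composition already resists an intuitive reading.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights of the first layer
W2 = rng.normal(size=(1, 4))   # weights of the second layer

def layer1(x):
    # Affine map followed by a ReLU nonlinearity: easy to describe on its own.
    return np.maximum(W1 @ x, 0.0)

def layer2(h):
    # Final affine map producing a single score: also easy to describe on its own.
    return W2 @ h

def network(x):
    # The full model is just layer2(layer1(x)): a composition of functions whose
    # combined behaviour is much harder to interpret than either piece.
    return layer2(layer1(x))

print(network(np.array([0.2, -1.0, 0.5])))
```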
How can artificial intelligence be made more explainable?
FAB: Current research focuses on two main approaches. The first is explainability by design: whenever a new algorithm is built, explanatory outputs are incorporated which make it possible to describe, step by step, what the neural network is doing. However, this is costly and affects the performance of the algorithm, which is why it is not yet very widespread. In general, and this is the second approach, when an existing algorithm needs to be explained, an a posteriori approach is taken: after an AI has established its calculation functions, we try to dissect the different stages of its reasoning. Several methods exist for this; they generally seek to break the complex model down into a set of local models that are less complicated to deal with individually.
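As a hedged sketch of the a posteriori family mentioned above, the example below fits a simple linear surrogate to a black-box model around a single input, in the spirit of local-explanation methods such as LIME. The black_box function, the perturbation scale and the resulting weights are all illustrative assumptions, not the team’s actual method.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model whose decisions we want to explain.
    return np.tanh(2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] ** 2)

def explain_locally(model, x, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb the input of interest with small random noise.
    X_local = x + scale * rng.normal(size=(n_samples, x.size))
    y_local = model(X_local)
    # 2. Fit a simple linear model (least squares) on this local neighbourhood.
    A = np.hstack([X_local, np.ones((n_samples, 1))])
    coeffs, *_ = np.linalg.lstsq(A, y_local, rcond=None)
    # 3. The linear weights serve as a local, human-readable explanation.
    return coeffs[:-1]

x0 = np.array([0.3, -0.2, 0.8])
print("local feature weights:", explain_locally(black_box, x0))
```

The recovered weights indicate which features push the prediction up or down in the neighbourhood of x0, which is exactly the kind of simpler local model the answer refers to.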
Why do algorithms need to be explained?
WM: There are two main reasons why the law requires algorithms to be explainable. Firstly, individuals have the right to understand and to challenge an algorithmic decision. Secondly, it must be guaranteed that a supervisory institution such as the French Data Protection Authority (CNIL), or a court, can understand how the algorithm operates, both as a whole and in a particular case, for example to make sure there is no racial discrimination. There is therefore an individual aspect and an institutional aspect.
Does the format of the explanations need to be adapted to each case?
WM: The format depends on the entity to which the explanation is addressed: some formats will be suited to regulators such as the CNIL, others to experts, and yet others to citizens. In 2015, an experimental scheme was introduced to deploy algorithms that detect possible terrorist activity in the event of serious threats. For this to be properly regulated, external control of the results must be easy to carry out, and the algorithm must therefore be sufficiently transparent and explainable.
Are there any particular difficulties in providing appropriate explanations?
WM: There are several things to bear in mind. One is information fatigue: when the same explanation is provided systematically, humans tend to ignore it. It is therefore important to vary the format in which information is presented. Studies have also shown that humans tend to follow a decision given by an algorithm without questioning it, in particular because they assume from the outset that the algorithm is statistically wrong less often than they are. This is what we call automation bias. This is why we want to provide explanations that allow the human agent to understand the context and the limits of algorithms and take them into consideration. The real challenge is to use algorithms to make humans better informed in their decisions, and not the other way around. Algorithms should be a decision aid, not a substitute for human beings.
What are the obstacles associated with the explainability of AI?
FAB: One aspect to consider when we want to explain an algorithm is cybersecurity. We must be wary of explanations being exploited by hackers. There is therefore a triple balance to be struck when developing algorithms: performance, explainability and security.
Is this also an issue of industrial property protection?
WM: Yes, there is also the aspect of protecting trade secrets: some developers may be reluctant to disclose how their algorithms work for fear of being copied. Another risk is the manipulation of scores: if individuals understand how a ranking algorithm, such as Google’s, works, then they could manipulate their position in the ranking. Manipulation is an important issue not only for search engines, but also for fraud or cyber-attack detection algorithms.
How do you think AI should evolve?
FAB: There are many issues associated with AI. In the coming decades, we will have to move away from the single objective of algorithm performance towards multiple additional objectives such as explainability, but also fairness and reliability. All of these objectives will redefine machine learning. Algorithms have spread rapidly and have enormous effects on the evolution of society, yet they are very rarely accompanied by instructions for their use. A set of suitable explanations must go hand in hand with their deployment if we are to keep control of their place in society.
By Antonin Counillon
Also read on I’MTech: Restricting algorithms to limit their powers of discrimination