Digital innovations are paving the way for more accurate predictive medicine and a more resilient healthcare system. But to establish themselves on the market and limit their potential negative effects, these technologies must be responsible. Christine Balagué, a researcher in digital ethics at Institut Mines-Télécom Business School, presents the risks associated with innovations in the health sector and ways of avoiding them.
“Until now, society has approached technology development without examining the environmental and social impacts of the digital innovations it produces. The time has come to do something about this, especially when human lives are at stake in the health sector,” says Christine Balagué, a researcher at Institut Mines-Télécom Business School and co-holder of the Good in Tech Chair [1]. From databases and artificial intelligence for detecting and treating rare diseases to connected objects for monitoring patients, the rapid emergence of tools for prediction, diagnosis and business organization is bringing major changes to the healthcare sector. Likewise, the vision of a smarter hospital of the future promises to radically change the healthcare systems we know today. The focus is on building on medical knowledge, advancing medical research and improving care.
However, for Christine Balagué, a distinction must be made between the notion of “tech for good” – developing systems for the benefit of society – and “good in tech”. As she puts it, “an innovation, however benevolent it may be, is not necessarily free of bias and negative effects. It’s important not to stop at the positive impacts but also to measure the potential negative effects in order to eliminate them.” The time has come for responsible innovation. In this sense, the Good in Tech Chair, dedicated to responsibility and ethics in digital innovation and artificial intelligence, aims to measure the still-underestimated environmental and societal impacts of technologies across various sectors, including health.
Digital innovations: what are the risks for healthcare systems?
In healthcare, one thing is clear: an algorithm that cannot be explained is unlikely to be commercialized, even if it performs well. The potential risks are simply too great when human lives are at stake. Yet a study published in 2019 in the journal Science by Obermeyer et al., on the use of commercial algorithms in the U.S. healthcare system, demonstrated racial bias in the results of these tools. Such discrimination between patients, or between geographical areas, gives rise to a first risk: unequal access to care. “The more automated data processing becomes, the more inequalities are created,” says Christine Balagué. Machine learning is nevertheless increasingly used in the solutions offered to healthcare professionals.
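To make this kind of bias concrete, here is a minimal sketch of an error-rate audit. The data and model are simulated for illustration only (they are not from the Science study): it simply shows how comparing false negative rates across demographic groups can reveal that a model systematically misses patients in one group.

```python
import numpy as np

# Purely illustrative simulation: audit a hypothetical care-allocation
# model's error rates by demographic group.
rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)      # demographic group label
y_true = rng.integers(0, 2, size=n)         # 1 = patient needs extra care
# Simulate a model that misses needy patients in group B more often.
miss_rate = np.where(group == "B", 0.3, 0.1)
y_pred = np.where(rng.random(n) < miss_rate, 0, y_true)

for g in ("A", "B"):
    needy = (group == g) & (y_true == 1)
    fnr = np.mean(y_pred[needy] == 0)       # false negative rate
    print(f"group {g}: false negative rate = {fnr:.2f}")
```

A gap between the two rates means patients in one group are more likely to be denied care they need, even if the model’s overall accuracy looks acceptable.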
French start-ups such as Aiintense, incubated at IMT Starter, and BrainTale, for example, use it for diagnostic purposes. Aiintense is developing decision-support tools for all pathologies encountered in intensive care units, while BrainTale works on quantifying brain lesions. Both examples raise the question of possible discrimination by algorithms. “These cases are interesting because they are based on work carried out by researchers and internationally recognized by the scientific community, but they use deep learning models whose results are not fully explainable. This hinders their adoption by intensive care units, which need to understand how these algorithms work before making major decisions about patients,” says the researcher.
Genome sequencing algorithms also raise questions about the relationship between doctors and their patients. The limitations of an algorithm, such as the possibility of false positives or false negatives, are rarely explained to patients. In some cases, this can lead to unsuitable treatments or operations. An algorithm may also be biased by the assumptions of its designers. Finally, unconscious biases in the way humans process data can likewise produce inequalities. Artificial intelligence thus raises many ethical questions about its use in healthcare.
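Why do false positives matter so much to patients? A short worked example, with hypothetical numbers chosen only for illustration, shows how a test that sounds accurate can still mislead when a condition is rare:

```python
# Hypothetical screening test: 1% prevalence, 99% sensitivity, 95% specificity.
prevalence = 0.01
sensitivity = 0.99        # P(test positive | condition present)
specificity = 0.95        # P(test negative | condition absent)

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)   # P(condition | positive test)
print(f"Probability a positive result is a true positive: {ppv:.0%}")  # ~17%
```

With these numbers, roughly five out of six positive results are false alarms. This is exactly the kind of limitation that, if left unexplained, can lead to unsuitable treatments.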
What do we mean by “responsible innovation”? It is not just a question of complying with data processing laws and improving healthcare professionals’ ways of working. “We must go further. This is why we want to measure two criteria in new technologies: their environmental impact and their societal impact, distinguishing between the potential positive and negative effects of each. Innovations should then be developed according to predefined criteria aimed at limiting their negative effects,” says Christine Balagué.
Changing the way innovations are designed
Responsibility is not simply a layer that can be added to an existing technology. On the contrary, thinking about responsible innovation means changing the very way innovations are designed. So how do we ensure they are responsible? Scientists are looking for precise indicators that could form a “to-do list” of criteria to be verified. This starts with analyzing the data used for training, but also extends to the architecture of the neural network, which can itself generate bias, and the interface developed for users. In addition, existing environmental criteria must be refined to take into account the design chain of a connected object and the energy consumption of algorithms. “The criteria identified could be integrated into corporate social responsibility reporting in order to measure changes over time,” says Christine Balagué.
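As an illustration of the first item on such a checklist, here is a minimal sketch of a training-data audit. The dataset and column names are hypothetical, not taken from any project mentioned here; the point is simply that checking group representation and label base rates comes before any modeling:

```python
import pandas as pd

# Hypothetical training data for a diagnostic model; columns are illustrative.
df = pd.DataFrame({
    "region":    ["urban", "urban", "rural", "rural", "urban", "rural"],
    "sex":       ["F", "M", "F", "M", "F", "M"],
    "diagnosis": [1, 0, 0, 0, 1, 1],
})

# 1. Is every group represented in the training data?
print(df["region"].value_counts(normalize=True))

# 2. Does the label's base rate differ sharply across groups?
#    Large gaps can signal sampling bias that a model will reproduce.
print(df.groupby(["region", "sex"])["diagnosis"].mean())
```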
Within the framework of the Good in Tech Chair, several research projects, including a thesis, are examining our capacity to explain algorithms. Among them, Christine Balagué and Nesma Houmani (a researcher at Télécom SudParis) are studying algorithms for electroencephalography (EEG) analysis. Their objective is to ensure that these tools offer interfaces whose workings can be explained to healthcare professionals, the systems’ future users. “Our interviews show that explaining how an algorithm works to users is often something designers aren’t interested in, yet making it explicit would change the decision-making process,” says the researcher. Explainability and interpretability are therefore two watchwords guiding responsible innovation.
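One common model-agnostic explainability technique, offered here only as an example and not as the method used in the Chair’s EEG work, is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for preprocessed signal features (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # labels depend on features 0 and 2

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
```

Outputs like these give clinicians a concrete, checkable account of what drives a prediction, which is one step toward the explainable interfaces the researchers call for.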
Ultimately, the researchers have identified four principles that an innovation in healthcare must follow. The first is anticipation: measuring the potential benefits and risks upstream of the development phase. The second is reflexivity, which allows designers to limit negative effects and to build into the system itself an interface explaining to physicians how the innovation works. The third is inclusiveness: reaching all patients across the country. Finally, responsiveness enables rapid adaptation to the changing context of healthcare systems. Christine Balagué concludes: “Our work shows that taking ethical criteria into account does not reduce the performance of algorithms. On the contrary, addressing issues of responsibility helps an innovation gain acceptance on the market.”
[1] The Chair is supported by the Institut Mines-Télécom Business School, the School of Management and Innovation at Sciences Po, and the Fondation du Risque, in partnership with Télécom Paris and Télécom SudParis.
Anaïs Culot