Ethics, an overlooked aspect of algorithms?

We now encounter algorithms at every moment of the day. But this exposure can be dangerous. It has been shown to influence our political opinions, moods and choices. Far from being neutral, algorithms carry their developers’ value judgments, which are imposed on us without our noticing most of the time. It is now necessary to raise questions about the ethical aspects of algorithms and find solutions for the biases they impose on their users.

 

What exactly does Facebook do? Or Twitter? More generally, what do social media sites do? The oversimplified but accurate answer is: they select the information displayed on your wall so that you spend as much time as possible on the site. Behind this time-consuming “news feed” lies a selection of content, advertising or otherwise, optimized for each user and driven largely by algorithms. Social networks use these algorithms to determine what will interest you the most. Without questioning the usefulness of these sites — it is most likely how you arrived at this article — the way they function raises some serious ethical questions. To start with, are all users aware of the influence algorithms have on their perception of current events and on their opinions? And going a step further, what impact do algorithms have on our lives and decisions?

For Christine Balagué, a researcher at Télécom École de Management and member of CERNA (see text box at the end of the article), “personal data capturing is a well-known topic, but there is less awareness about the processing of this data by algorithms.” Although users are now more careful about what they share on social media, they have not necessarily considered how the services they use actually work. And this lack of awareness is not limited to Facebook or Twitter. Algorithms now permeate our lives, underpinning the mobile applications and web services we rely on. All day long, from morning to night, we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, Airbnb, etc.

Are your trips determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for IMTech

 

“They control our lives,” says Christine Balagué. “A growing number of articles published by researchers in various fields have underscored the power algorithms have over individuals.” In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research and Technology, demonstrated how a search engine could influence election results. His study, carried out with over 4,000 participants, showed that the order in which candidates appeared in search results could shift the preferences of undecided voters by 20% or more. In another striking example, a study carried out by Facebook in 2012 on some 700,000 of its users showed that people who had been exposed to predominantly negative posts went on to post predominantly negative content, while those who had been exposed to positive posts went on to post essentially positive content. This shows that algorithms can manipulate individuals’ emotions without their realizing it or being informed. What role do our personal preferences play in a system of algorithms of which we are not even aware?

 

The opaque side of algorithms

One of the main ethical problems with algorithms stems from this lack of transparency. Two users who carry out the same query on a search engine such as Google will not have the same results. The explanation provided by the service is that responses are personalized to best meet each individual’s needs. But the mechanisms for selecting results are opaque. Among the parameters taken into account to determine which sites will be displayed on the page, over a hundred relate to the user performing the query. In the name of trade secrecy, the exact nature of these personal parameters and how Google’s algorithms take them into account remain unknown. It is therefore difficult to know how the company categorizes us, determines our areas of interest and predicts our behavior. And once this categorization has been carried out, is it even possible to escape it? How can we maintain control over the perception that the algorithm has created of us?

This lack of transparency prevents us from understanding possible biases that can result from data processing. Nevertheless, these biases do exist and protecting ourselves from them is a major issue for society. A study by Grazia Cecere, an economist at Télécom École de Management, provides an example of how individuals are not treated equally by algorithms. Her work has highlighted discrimination between men and women in a major social network’s algorithms for associating interests. “In creating an ad for STEM (science, technology, engineering and mathematics), we noticed that the software demonstrated a preference for distributing it to men, even though women show more interest in this subject,” explains Grazia Cecere. Far from the myth of malicious artificial intelligence, this sort of bias is rooted in human actions. We must not forget that behind each line of code, there is a developer.

Algorithms are used first and foremost to propose services, which are most often commercial in nature. They are thus part of a company’s strategy and reflect this strategy in order to respond to its economic demands. “Data scientists working on a project seek to optimize their algorithms without necessarily thinking about the ethical issues involved in the choices made by these programs,” points out Christine Balagué. In addition, humans have perceptions of the society to which they belong and integrate these perceptions, consciously or unconsciously, into the software they develop. Indeed, the value judgments present in algorithms quite often reflect those of their creators. In the case of Grazia Cecere’s work, this provides a simple explanation for the bias discovered: “An algorithm learns what it is asked to learn and replicates stereotypes if they are not removed.”
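To make the mechanism concrete, here is a minimal sketch in Python. The dataset, exposure rates, interest scores and feature names are invented for illustration and are not taken from Grazia Cecere’s study: a standard model trained on click data that reflects past biased targeting simply learns to keep targeting the same group.

```python
# Hypothetical sketch: a targeting model replicates a stereotype present in
# its training data. All numbers and features below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" ad data: gender (1 = man, 0 = woman) and a score
# measuring expressed interest in STEM content (women slightly higher here).
gender = rng.integers(0, 2, size=n)
stem_interest = rng.normal(loc=0.55 - 0.05 * gender, scale=0.2, size=n)

# Historical click labels shaped by past targeting: the ad was mostly shown
# to men, so men had far more opportunities to click, regardless of interest.
shown = rng.random(n) < np.where(gender == 1, 0.8, 0.2)
clicked = shown & (rng.random(n) < np.clip(stem_interest, 0, 1))

X = np.column_stack([gender, stem_interest])
model = LogisticRegression(max_iter=1_000).fit(X, clicked.astype(int))

# The model now predicts a higher click probability for men at equal interest:
# the historical exposure bias has been learned and will be reproduced.
print("P(click | man, interest=0.5)   =", model.predict_proba([[1, 0.5]])[0, 1])
print("P(click | woman, interest=0.5) =", model.predict_proba([[0, 0.5]])[0, 1])
```

The point of the sketch is not the particular model but the data: the stereotype enters through the historical exposure, and unless it is explicitly corrected, the algorithm faithfully replicates it.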

What biases are hiding in the digital tools we use every day? What value judgments passed down from algorithm developers do we encounter on a daily basis? Illustration: Diane Rottner for IMTech.

 

A perfect example of this phenomenon involves medical imaging. An algorithm used to classify a cell as sick or healthy must be tuned to strike a balance between false positives and false negatives. Developers must therefore decide how many healthy individuals may tolerably receive a positive test in order to prevent sick individuals from receiving a negative one. Doctors generally prefer false positives to false negatives, whereas the scientists who develop the algorithms tend to prefer false negatives to false positives, since scientific knowledge is built cumulatively. Depending on their own values, developers will favor one profession’s preference over the other.
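This trade-off can be illustrated with a minimal sketch using synthetic data rather than a real medical imaging pipeline; the dataset, class proportions and threshold values below are assumptions for illustration only. The same trained classifier produces very different balances of false positives and false negatives depending on where the decision threshold is set, and choosing that threshold is precisely the value judgment described above.

```python
# Hypothetical sketch: one classifier, two decision thresholds, two very
# different false positive / false negative balances. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic cells: label 1 = "sick" (about 10% of cases), 0 = "healthy".
X, y = make_classification(n_samples=5_000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
proba_sick = model.predict_proba(X_test)[:, 1]

for threshold in (0.5, 0.1):
    predicted_sick = (proba_sick >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, predicted_sick).ravel()
    # Lowering the threshold flags more cells as sick: false negatives drop
    # (fewer sick cells missed) at the cost of more false positives
    # (more healthy cells flagged). Where to set it is a human decision.
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```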

 

Transparency? Of course, but that’s not all!

One proposal for combating these biases is to make algorithms more transparent. Since October 2016, the Law for a Digital Republic, proposed by Axelle Lemaire, the former Secretary of State for Digital Affairs, has required transparency for all public algorithms. This law is what made the code of the higher education admission website (APB) available to the public. Companies are also stepping up their transparency efforts. Since May 17, 2017, Twitter has allowed its users to see the areas of interest the site associates with them. But despite these good intentions, the level of transparency is far from sufficient to guarantee that algorithms behave ethically. First of all, code understandability is often overlooked: algorithms are sometimes released in formats that make them difficult to read and understand, even for professionals. Furthermore, transparency can be artificial. In the case of Twitter, “no information is provided about how user interests are attributed,” observes Christine Balagué.

[Screenshot of Twitter’s “Interests from Twitter” panel: “These are some of the interests matched to you based on your profile and activity. You can adjust them if something doesn’t look right.”]
Which of this user’s posts led to their being classified under “Action and Adventure,” a very broad category? How are “Scientific news” and “Business and finance” weighted when deciding what to display in the user’s Twitter feed?

 

To take a step further, the degree to which algorithms are transparent must be assessed. This is the aim of the TransAlgo project, another initiative launched by Axelle Lemaire and run by Inria. “It’s a platform for measuring transparency by looking at what data is used, what data is produced and how open the code is,” explains Christine Balagué, a member of TransAlgo’s scientific council. The platform is the first of its kind in Europe, making France a leading nation in transparency issues. Similarly, DataIA, a convergence institute for data science established on the Plateau de Saclay for a period of ten years, is a one-of-a-kind interdisciplinary project devoted to research on artificial intelligence algorithms, their transparency and the ethical issues they raise.

The project brings together multidisciplinary scientific teams in order to study the mechanisms used to develop algorithms. The humanities can contribute significantly to the analysis of the values and decisions hiding behind the development of code. “It is now increasingly necessary to deconstruct the methods used to create algorithms, carry out reverse engineering, measure potential biases and discrimination, and make them more transparent,” explains Christine Balagué. “On a broader level, ethnographic research must be carried out on developers themselves, delving deeper into their intentions and studying the socio-technological aspects of developing algorithms.” As our lives increasingly revolve around digital services, it is crucial to identify the risks they pose for users.

Further reading: Artificial Intelligence: the complex question of ethics

A public commission dedicated to digital ethics

Since 2009, the Allistene association (Alliance of digital sciences and technologies) has brought together France’s leading players in digital technology research and innovation. In 2012, this alliance decided to create a commission to study ethics in digital sciences and technologies: CERNA. On the basis of multidisciplinary studies combining expertise and contributions from all digital players, both nationally and worldwide, CERNA raises questions about the ethical aspects of digital technology. In studying such wide-ranging topics as the environment, healthcare, robotics and nanotechnologies, it strives to increase technology developers’ awareness and understanding of ethical issues.
