
“The machine has algorithmic bugs whereas we have cognitive bugs.”

A doctor of science and an entrepreneur, Aurélie Jean is a rising figure in the digital world in France. Trained at the prestigious MIT in the United States, this coder and artificial intelligence expert specialises in algorithms and takes a particular interest in algorithmic biases. Two ideal skills to shed light on a fascinating issue: the transmission of human bugs to the machine.

Aurélie, do you think a cognitive bias is a bug?
If the answer is yes, are algorithms inevitably doomed to bug?


It is possible to imagine that a cognitive bias is a bug of our behaviours, our perceptions, our distorted visions of things and beings. When you say that algorithms are necessarily destined to bug, I would answer that an algorithm can in fact never be objectively perfect. It is humans who develop them, so the risk of introducing a bias always exists, even if it is not systematic.

In 2016 the word2vec algorithm was used on Google News articles to categorise job titles according to gender, in order to associate each male-coded job with its female counterpart.
The result was striking: the doctor became a nurse, and the female web developer turned into a housewife. There is clearly a bug, but is it embedded in the algorithm itself or in the cognitive biases of the human behind it?


Word2vec is an algorithm used to classify words and create correlations based on their use in texts, and it underpins Google Translate. Indeed, there was a bug, but it is not specific to the algorithm; it is the reflection of our perceptions and of the way we use these words.
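
For readers curious about what such an analogy query looks like in practice, here is a minimal sketch using the gensim library and the publicly released word2vec vectors trained on Google News. The file name and the stereotyped neighbours mentioned in the comments are assumptions about that setup; the exact results depend on the embedding file used.

```python
from gensim.models import KeyedVectors

# Assumes the pretrained Google News word2vec file has been downloaded locally.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Analogy query: "man is to doctor as woman is to ?"
# Embeddings trained on news text have been reported to rank stereotyped
# words such as "nurse" among the nearest neighbours of this query.
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))
```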

 

A quick answer would be to say: eliminate our biases! But I think they are part of what gives us our richness. On the other hand, we must be aware of them; that awareness allows us to understand each other better and to help our society evolve.

Do you feel that the general public has trouble understanding this transmission of cognitive biases to the machine?


Yes, surely, but it’s because people do not know how an algorithm is made. An algorithm is a sequence of mathematical equations and logical operations that is translated into computer code. It is a product like any other, built on subjective criteria, so human errors are possible. For example, because the designers of the airbag were men, the product was not initially sized for women or children. Another concrete example, in the case of machine learning: the first facial recognition algorithms did not recognise black skin, for two possible reasons. First, the explicit contrast criteria used in image analysis to define a face had no specific reference for dark skin; that is an explicit criterion introduced by humans. Secondly, the images on which the algorithm learned were perhaps mainly composed of white faces; those are implicit criteria.
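
To make the distinction concrete, here is a toy numerical sketch, not a real face detector: the group labels, contrast values and thresholds are invented for illustration. The explicit criterion is a threshold chosen by hand; the implicit criterion is a threshold tuned on a training set dominated by one group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "contrast" scores for two groups of images.
group_a = rng.normal(loc=0.7, scale=0.1, size=1000)  # over-represented group
group_b = rng.normal(loc=0.4, scale=0.1, size=1000)  # under-represented group

# Explicit criterion: a contrast threshold picked by the developer.
explicit_threshold = 0.55
print("group A detected:", np.mean(group_a > explicit_threshold))  # ~0.93
print("group B detected:", np.mean(group_b > explicit_threshold))  # ~0.07

# Implicit criterion: the threshold is tuned so that 95% of the training
# images pass, but the training set itself is 95% group A, so the dataset's
# composition is silently encoded in the learned value.
train = np.concatenate([group_a[:950], group_b[:50]])
learned_threshold = np.percentile(train, 5)
print("group B detected (learned):", np.mean(group_b > learned_threshold))  # still low
```

With a training set balanced between the two groups, the learned threshold would drop and group B would be detected far more often, which is exactly the point about implicit criteria.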

"Sometimes the bug is already in the algorithm. We only discover it when we add a new use case."

So when man "bugs", machines "bug". Is that serious, or does it simply have to be accepted?


Very good remark. Yes, man introduces his biases into the machine, and we must accept it. The important thing is that it does not lead to serious, illegal or toxic consequences; not all our biases lead to discrimination. On the other hand, sometimes the bug is already in the algorithm, and we only discover it when we add a new use case. So there are surely “bugged” algorithms out there whose flaws will only be revealed later, by a new use case.

For you, is binarity one of the most insidious biases?


No, the binary language of the machine does not reinforce the binarity of the world. If, for example, we have on one side the people who know how to use the machine and on the other those who do not, it is not because of the binary language of the algorithm but because of a lack of education. Developers and scientists have a duty to talk about what they do, to democratise complicated concepts. Yann LeCun, head of Facebook’s artificial intelligence lab, puts it clearly: there is “no adoption without understanding”. You cannot adopt a technological tool if you do not understand it.

You mentioned it, but wouldn’t cognitive biases be the essential difference between robots and human beings?


Yes, because of the word “cognitive”. The machine has algorithmic biases while we have cognitive biases, which are tied to our consciousness, our mind. Robots do not have consciousness.

"The main challenge in artificial intelligence is to limit the spread of our biases in the machine."

And will a machine one day be able to bug without man having caused it first?


It’s a question I had never asked myself, but I think not. Behind an algorithm there is always data that has been chosen. Even for a machine that does self-learning, there is a human who made decisions. We must not exempt humans from liability in that equation.

Our thoughts, beliefs, and decisions are marked by cognitive biases that affect our perception of reality.
Through the prism of the cognitive sciences we can now understand how our brain perceives the world. Is it vital to stop the transmission of the human bug to the machine?


Vital, I do not know; it’s not my field, I’m not an expert in neuroscience. On the other hand, it is important not to suppress our biases but to understand them, to become aware of them, in order to have the right reflexes and avoid transmitting them to our tools.

Is this transmission of the bug from man to machine the main challenge of man-made artificial intelligence?


I think so. Besides, when we talk about artificial intelligence, we talk about killer robots, job losses… But there are threats that already exist, and the main one is precisely algorithmic bias. Yes, today the main challenge in artificial intelligence is to limit the spread of our biases in the machine, but also to detect their existence through testing. We can also better educate people about these tools so that they can report a bug when they encounter one.

"Algorithms are the uses that people make of them that determine their impact."

Isn’t distraction, which is also a cognitive bias, the very condition of human creativity, the thing that will always differentiate us from the machine?


I think that man will always be different from the machine because he has emotions, guts, a heart and a brain. I was talking the other day with a woman of letters who told me that one day we would manage to create an algorithm that wins the Goncourt prize. In that case, the reader will need to know that the novel was written by an algorithm, because it will change their perception. When you read a book that was automatically generated by a machine, you will not put yourself in the character’s skin the way you would if it had been written by a human.

 

For me, creativity naturally passes through emotions, through the gut. Take the emotional charge of Jackson Pollock painting a canvas in a moment of trance: the machine does not have it, so its absence would strip the work of any substance. Just as it is reported today when a photo has been retouched, we could have an annotation “work generated by an algorithm”.

Is the perniciousness of the machine bug frightening, and could it lead to a huge AI bug?
There is always this fantasy that one day Man will create something that no longer belongs to him...


It reminds me of the transition to the year 2000, which we had prepared for. As a result, there was no bug. It is always we who drive the machine, and we who can stop it, so I do not believe in this fantasy. We can have bugs with potentially catastrophic consequences, but there is always the “reset” button. The important thing is to work collectively to correct one another.

Would this cognitive bias then be the ultimate bulwark that allows us to say, today, that we do not need to be afraid?


Exactly. I would say that this bias is a way of reaching what I call nobility: it stops people from thinking all kinds of nonsense about AI. Where a possible threat lies is in what people will do with these tools. Noam Chomsky was already speaking in the 1970s about the tools of the future, alternative media through which people would exchange, share and communicate; he imagined social networks from a positive angle. But there are also people who use them to distil hatred and to manipulate opinion. It is not the fault of the algorithms; it is the uses that people make of them that determine their impact.

You lament a glaring lack of gender and ethnic diversity in the world of computer programming.
Knowing that behind each algorithm there is a human being, can we say that this gap is the proof of another bug?


You are absolutely right! One of the ways to avoid introducing bugs is to have plurality in the developers’ vision. It is this diversity that draws attention to differences; it creates an open mind. This lack of diversity is a big problem because it can create social and even economic discrimination.

Americans often say that diversity is good for business. The people I worked with in my profession really believe in it. In France I hear the same thing, but I always wonder whether the decision-makers are sincerely convinced or whether they talk about it because it looks good. I think that change in France will take time.

 

It will take a proactive policy to bring women in and achieve goals in terms of numbers. Without speaking strictly of quotas, which are considered unconstitutional on both sides of the Atlantic, we seek to deploy proactive policies to increase diversity in the selection pool. In English we speak of positive action rather than positive discrimination, a term that has a bad image in French. Without a target percentage of women to aim for, we will never succeed! It is a temporary solution, but once again the focus is not on the number to reach but on the means implemented to reach it.

Aurélie, let's end with the two familiar questions of Bug Me Tender. First, what is your own definition of a bug?


It’s when you have computer code running on a computer, code that has no syntax errors, so it runs fine. The computer understands it and runs an application, but the application bugs: it produces an error in its result.
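
A minimal hypothetical example in Python of that definition: the code below is syntactically valid and runs without crashing, yet the application "bugs" because its result is wrong.

```python
def average(values):
    total = 0
    for v in values:
        total += v
    # Bug: divides by one less than the number of values.
    return total / (len(values) - 1)

print(average([2, 4, 6]))  # runs without any error, but prints 6.0 instead of 4.0
```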

Secondly, in your professional field what is the biggest bug you have encountered?


I don’t remember one big bug that left a lasting impression, but we spend our lives fixing bugs, and fortunately the vast majority of them are innocuous and without serious consequences. From a generic point of view, a big bug would be, for example, a meteorological prediction model where someone mistakenly misplaces a decimal comma in some data input. This would disrupt all the projections and give a temperature of 45 °C in December when in fact it is 3 °C.
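
As a sketch of the kind of input error described here (the parser and the values are invented for illustration), a pipeline that silently drops a decimal comma instead of interpreting it turns 4.5 into 45:

```python
def parse_temperature(raw: str) -> float:
    # Hypothetical buggy parser: drops the comma instead of treating it
    # as a European decimal separator.
    return float(raw.replace(",", ""))

raw_reading = "4,5"                    # intended as 4.5 °C
print(parse_temperature(raw_reading))  # prints 45.0 — a tenfold error feeding the forecast
```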

"One of the ways to avoid the introduction of bugs is to have a plurality in the vision of the developers."
"I think that man will always be different from the machine because he has emotions, guts, a heart and a brain."