Biases are creeping into the Internet’s AI systems

Social media platforms such as Twitter should realise this problem and rectify it. Otherwise, they’ll pay a heavy price

Analysis | Updated: Feb 13, 2019 08:53 IST

Twitter is under the cosh these days from a section of concerned citizens who feel that the social media platform is biased towards the communist ideology.

Over the last few years, people cutting across the ideological spectrum have raised concerns about bias on other social media platforms as well.

Whenever such accusations of bias are made, the standard reply of social media platform administrators to their detractors is that there is no manual intervention: algorithm-based Artificial Intelligence (AI) runs these platforms, so there is no question of bias.

But is it really true? No, it is not. The fact of the matter is that there is ample evidence of various kinds of bias in AI, and the algorithms are not as neutral as they are projected to be. In fact, the major challenge for all major players in the field of AI, the world over, is how to make it bias-free, and they have not succeeded in this so far.

The absence of bias rests on the concept of fairness, which has to be defined in a particular social context. These so-called unbiased algorithms do not take that social context into account.

According to a recent research paper, Fairness and Abstraction in Sociotechnical Systems (Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, January 29-31, 2019, pages 59-68), there are many ways in which the absence of social context can lead to severe bias in how AI operates and makes decisions. It further says that abstraction is one of the bedrock concepts of computer science, and identifies five failure modes of this abstraction error: the Framing Trap, the Portability Trap, the Formalism Trap, the Ripple Effect Trap, and the Solutionism Trap. Each of these traps arises from failing to consider how social context is interlaced with technology in different forms, and thus the remedies also require a deeper understanding of "the social to resolve problems," says this research paper.

A recent essay in MIT Technology Review by Karen Hao puts it more clearly: it is well documented by now that the vast majority of AI applications today are based on the category of algorithms known as deep learning, and that deep-learning algorithms find patterns in data.

Hao says, “We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.”

The bias, according to researchers, can creep in at any stage: during data collection for the algorithm, or even during the testing of the model.

“The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it. In Amazon’s case, when the engineers initially discovered that its tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like ‘women’s’. They soon discovered that the revised system was still picking up on implicitly gendered words — verbs that were highly correlated with men over women, such as ‘executed’ and ‘captured’—and using that to make its decisions,” says the MIT essay.
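The mechanism the MIT essay describes can be seen in a toy sketch. This is not Amazon's actual system; the data, feature names and labels below are entirely hypothetical, chosen only to show how a "proxy" feature that is correlated with a dropped attribute can carry the same signal into a model's decisions.

```python
# Toy illustration (hypothetical data, not Amazon's real tool): even after
# an explicitly gendered feature is removed, a correlated proxy feature
# can still encode the same biased signal.

# Each hypothetical resume: (explicit_gendered_word, uses_verb_executed, hired_label)
resumes = [
    (1, 0, 0), (1, 0, 0), (1, 1, 0),
    (0, 1, 1), (0, 1, 1), (0, 1, 1),
]

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

explicit = [r[0] for r in resumes]
proxy    = [r[1] for r in resumes]
label    = [r[2] for r in resumes]

# Dropping the explicit column does not remove the signal: the proxy
# verb is itself strongly correlated with the biased hiring labels.
print(round(correlation(explicit, label), 2))  # -1.0
print(round(correlation(proxy, label), 2))     # 0.71
```

Any model trained on such labels can learn to use the proxy feature in place of the removed one, which is why, as the essay notes, reprogramming the tool to ignore explicitly gendered words was not enough.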

One of the challenges is the complete lack of transparency about how these algorithms were developed and deployed by all the major digital players, including Google, Facebook and Twitter.

After all, these algorithms are created by individuals, and the creators' biases are bound to find a place in whatever they build. Their conscious preferences may not be reflected, but their unconscious preferences are bound to creep into the whole system through these algorithms.

IBM Research clearly mentions on its website: "Within 5 years, the number of biased AI systems and algorithms will increase."

It is very clear, then, that the social media platforms need to tackle the biases within their own systems rather than hiding behind lame excuses and stock replies. It is now widely accepted that the absence of human intervention in running a platform does not guarantee the absence of bias or the presence of fairness.

It is also time for the citizens of this country to raise the issue of biases creeping into AI systems across the internet. If platforms like Twitter continue to ignore the in-built biases in their systems, they should remember what IBM Research has said: "AI bias will explode. But only the unbiased AI will survive."

Arun Anand is CEO of Indraprastha Vishwa Samvad Kendra.

The views expressed are personal

First Published: Feb 12, 2019 22:37 IST