
Artificial Intelligence is not the silver bullet for human development

If its potential to do good is to be fully realised, proponents must focus more on the obstacles that are preventing its uptake.

opinion Updated: Jan 07, 2019 07:40 IST
M Chui and M Harryson
Project Syndicate
If the potential of artificial intelligence to do good globally is to be fully realised, proponents must focus less on the hype and more on the obstacles that are preventing its uptake. (Getty Images/iStockphoto)

The excitement surrounding artificial intelligence nowadays reflects not only how AI applications could transform businesses and economies, but also the hope that they can address challenges like cancer and climate change. The idea that artificial intelligence could revolutionise human well-being is obviously appealing, but just how realistic is it?

To answer that question, the McKinsey Global Institute has examined more than 150 scenarios in which artificial intelligence is being applied or could be applied for social good. What we found is that artificial intelligence could make a powerful contribution to resolving many types of societal challenges, but it is not a silver bullet – at least not yet. While artificial intelligence’s reach is broad, development bottlenecks and application risks must be overcome before the benefits can be realised on a global scale.

To be sure, artificial intelligence is already changing how we tackle human-development challenges. In 2017, for example, object-detection software and satellite imagery aided rescuers in Houston as they navigated the aftermath of Hurricane Harvey. In Africa, algorithms have helped reduce poaching in wildlife parks. In Denmark, voice-recognition programmes are used in emergency calls to detect whether callers are experiencing cardiac arrest. And at the MIT Media Lab near Boston, researchers have used “reinforcement learning” in simulated clinical trials involving patients with glioblastoma, the most aggressive form of brain cancer, to reduce chemotherapy doses.

Moreover, this is only a fraction of what is possible. Artificial intelligence can already detect early signs of diabetes from heart rate sensor data, help children with autism manage their emotions, and guide the visually impaired. If these innovations were widely available and used, the health and social benefits would be immense. In fact, our assessment concludes that artificial intelligence technologies could accelerate progress on each of the 17 United Nations Sustainable Development Goals.

But if any of these artificial intelligence solutions are to make a difference globally, their use must be scaled up dramatically. To do that, we must first address developmental obstacles and, at the same time, mitigate risks that could render artificial intelligence technologies more harmful than helpful.

On the development side, data accessibility is among the most significant hurdles. In many cases, sensitive or commercially valuable data that have societal applications are privately owned and not accessible to non-governmental organisations. In other cases, bureaucratic inertia keeps otherwise useful data locked up.

So-called last-mile implementation challenges are another common problem. Even in cases where data are available and the technology is mature, the dearth of data scientists can make it difficult to apply artificial intelligence solutions locally. One way to address the shortage of workers with the skills needed to strengthen and implement artificial intelligence capabilities is for companies that employ such workers to devote more time and resources to beneficial causes. They should encourage artificial intelligence experts to take on pro bono projects and reward them for doing so.

There are, of course, risks. Artificial intelligence tools and techniques can be misused, intentionally or inadvertently. For example, biases can be embedded in artificial intelligence algorithms or data sets, and this can amplify existing inequalities when the applications are used. According to one academic study, error rates for facial-analysis software are less than 1% for light-skinned men but as high as 35% for dark-skinned women, which raises important questions about how to account for human prejudice in artificial intelligence programming. Another obvious risk is the misuse of artificial intelligence by those intent on threatening individuals’ physical, digital, financial, and emotional security.

Stakeholders from the private and public sectors must work together to address these issues. To increase the availability of data, for example, public officials and private actors should grant broader access to those seeking to use data for initiatives that serve the public good. Already, satellite companies participate in an international agreement that commits them to providing open access during emergencies. Data-dependent partnerships like this one must be expanded and become a feature of firms’ operational routines.

Artificial intelligence is fast becoming an invaluable part of the human-development toolkit. But if its potential to do good globally is to be fully realised, proponents must focus less on the hype and more on the obstacles that are preventing its uptake.

Michael Chui is a partner at the McKinsey Global Institute. Martin Harrysson is a partner in McKinsey & Company’s Silicon Valley office.

The views expressed are personal

First Published: Jan 07, 2019 07:31 IST