
Why artificial intelligence needs to become less and less artificial

By Biju Dominic
UPDATED ON JUN 26, 2019 12:51 PM IST

AI (Artificial Intelligence) is everywhere and it’s here to stay. It now powers so many real-world applications, ranging from facial recognition to language translators and assistants like Siri and Alexa. Along with these consumer applications, companies across sectors are increasingly harnessing AI’s power for productivity growth and innovation.

There are many who believe that AI has the potential to become more significant than even the internet. The availability of enormous amounts of data, combined with a massive leap in computational power and major improvements in engineering skills, should help AI, backed by deep learning, make a deep impact across various facets of human life.

Amid all the hype, genuine and inflated, around the world of AI, it is pertinent to ask an important question. Do humans really love AI? Are humans really happy that many of their daily tasks will now be taken care of by a machine? The adoption, and therefore the future, of AI depends on the answers to these questions.

In 2016, AlphaGo, an AI-based algorithm, trounced top South Korean Go professional Lee Sedol four games to one in Seoul, South Korea. This was the first time a computer program had beaten a top player in a full contest, and it was hailed as a landmark for AI. Today there are even more powerful AI algorithms that can beat AlphaGo squarely. But the moot question is whether human Go players would want to keep playing against these machines. The superior computing power of AI has taken away even the remotest chance of a human winning a game against AlphaGo. A human is unlikely to want to play a game he is sure of losing every time. This holds a valuable lesson about the future of AI: a product that makes humans look like losers will not have high adoption rates.

The last time there was a serious discussion about machines making humans redundant was at the beginning of the industrial revolution. Newly invented machines and the industrial engineering principles put forward by F.W. Taylor treated humans as replaceable parts of an assembly line. No one cared for the men who lost their jobs to machines, nor for the men who worked on those machines. Workers in the world's early factories faced long hours under extremely unhygienic conditions, and mostly lived in slums. This soon resulted in significant resistance to the introduction of machines, and in several labour riots.

Governments soon intervened to provide basic rights and protection for workers. Statutory regulations forced factory owners to set up formal mechanisms to look into workers' wages and welfare. New studies such as Elton Mayo's Hawthorne Studies debunked Taylor's Scientific Management approach to raising productivity and established that the major drivers of productivity and motivation were non-monetary factors. A host of new theories and management practices emerged that started treating workers as a resource, an asset. This human-centric approach played a significant role in making the industrial revolution a success.

Every displacement of humans by machines will generate pain. This will surely result in protests, small and big. The best way to manage these protests is to increase the human-centricity of the new scientific advancement. Personnel management and human resource development (HRD) were management functions that emerged from attempts to alleviate the concerns raised by the industrial revolution. In a similar vein, new paradigms in human behaviour management will have to be initiated to enhance the adoption of AI.

If anyone doubts the AI industry's need to develop a human-centric approach, one only needs to keep in mind the fate of another industry: nuclear energy. In the initial years of its commercial use, nuclear energy generated as much optimism as AI is generating today. Former US president Dwight D. Eisenhower told the United Nations General Assembly, "Experts would be mobilized to apply atomic energy to the needs of agriculture, medicine and other peaceful activities. A special purpose would be to provide abundant electrical energy in the power-starved areas of the world." The nuclear energy industry that held so much promise has almost come to a standstill after the Fukushima nuclear plant accident. Safety concerns are the reason cited for this turn of events. But what is the truth?

There have been only three significant accidents in the nuclear energy industry. Nobody died of radiation in the Three Mile Island and Fukushima accidents, and fewer than 50 people have died from the fallout of Chernobyl in the 30 years since that disaster. How, then, did everyone come to see those nuclear accidents as so catastrophic? Why is it that an industry that killed so few people raises huge safety concerns, while the automobile industry, which continues to cause the deaths of millions more humans, is not seen as a threat?

The real issue is not safety. The real reason is the irrational fear that the common man has of nuclear energy. The fact that nuclear energy has remained a black box has only fuelled these subconscious biases. The automotive industry, on the other hand, has always reached out to the common man and built an emotional bond with him. So humans are able to condone its huge failures.

The future of AI depends on superior data analytics and engineering capability. But to ensure that these new technological marvels are loved by all, AI products have to reckon with the conscious and subconscious fears they might generate among their target audience. Interventions to allay those fears have to be incorporated as an integral part of the product design. Yes, to achieve its true potential, AI has to become less and less artificial.

(The author Biju Dominic is chief executive officer of Final Mile Consulting, a behaviour architecture firm)
