India has not lost the AI race; it can still build a trillion-dollar industry
Software’s real value lies in its implementation: what you do with it. Anyone can use openly available Artificial Intelligence code to build advanced applications
There is a deep-seated fear that India has been left behind in the Artificial Intelligence (AI) race. Questionable claims abound from people such as the Chinese venture capitalist Kai-Fu Lee, who says that China and the US are the two AI superpowers and that China has the edge.

There is no doubt that AI has incredible potential and will provide militaries with lethal advantages. But the technology is still in its infancy; there are no AI superpowers. The race to implement AI has hardly begun, particularly in business. What’s more, the most advanced AI tools are available as open source, which means that everyone gets access to them at the same time.
Tech companies are generating hype with cool demonstrations of AI such as Google’s AlphaGo Zero, which taught itself the world’s most difficult board game in three days and went on to beat champion-level players. Several companies are claiming breakthroughs with self-driving vehicles. But don’t be fooled: the games are just special cases, and the self-driving cars are still on their training wheels.
AlphaGo Zero developed its intelligence through self-play reinforcement learning, a technique that pits copies of an AI system against one another so that they learn from each other’s games. The trick was that its problem and outcome were perfectly defined: it knew the rules of Go from the start, and every game ended in an unambiguous win or loss.
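A toy sketch in Python makes the point concrete. The game here (a simplified Nim) and every name in it are illustrative stand-ins, not DeepMind’s method; what matters is that self-play works at all only because the rules and the win/loss signal are fixed and unambiguous:

```python
import random
from collections import defaultdict

# Two copies of one agent learn Nim by playing each other: take 1-3
# stones from a heap of 15; whoever takes the last stone wins. The
# rules and the outcome are perfectly defined, which is exactly the
# precondition that lets pure self-play work.

Q = defaultdict(float)            # Q[(stones_left, move)] -> value estimate
ALPHA, EPSILON = 0.1, 0.2         # learning rate, exploration rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for episode in range(50_000):
    stones, history = 15, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                  # the player who moved last wins
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward          # flip perspective for the other player

# The agent should rediscover the textbook strategy: always leave the
# opponent a multiple of 4 stones (from 15, that means taking 3).
print(max((1, 2, 3), key=lambda m: Q[(15, m)]))
```

Strip away the well-defined rules and the clean win/loss signal, and this loop has nothing to learn from; that is precisely the situation most businesses are in.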
Unlike board games and arcade games, business systems don’t have defined outcomes and rules. They work with very limited data sets, which are often disjointed and messy. Nor do the computers do critical business analysis; it is the job of humans to comprehend the information that the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt; AI cannot. Google’s Waymo self-driving cars have collectively driven close to 10 million miles, yet they are nowhere near ready for release. Tesla’s Autopilot, after gathering a billion miles’ worth of data, won’t even stop at traffic lights.
Today’s AI systems do their best to reproduce the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called Deep Learning, which adjusts the strengths of the connections between simple computing units designed to behave like neurons. To put it simply, after you tell an AI exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
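Here is a minimal, illustrative sketch of that loop in Python; a toy two-layer network and the XOR pattern stand in for the “clearly labelled examples”, and real systems differ in scale, not in kind:

```python
import numpy as np

# The loop the paragraph describes, at toy scale: clearly labelled
# examples go in, and the network repeatedly adjusts its connection
# strengths (weights) until its outputs match the labels.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output "neuron"
sigmoid = lambda z: 1 / (1 + np.exp(-z))
LR = 0.5                                        # learning rate

for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # forward pass: current guesses
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)       # error signal at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= LR * h.T @ grad_out;  b2 -= LR * grad_out.sum(axis=0)
    W1 -= LR * X.T @ grad_h;    b1 -= LR * grad_h.sum(axis=0)

print(np.round(p.ravel(), 2))  # should be close to the labels: 0, 1, 1, 0
```

Notice what the network ends up with: a pile of adjusted numbers. It has stored the pattern, but nothing in those numbers says what the pattern means.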
Herein lies a problem, though. An AI is only as good as the data it receives, and is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analysed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation.
The larger issue with this form of AI is that what it has learnt remains a mystery: a set of indecipherable responses to data. Once a neural network has been trained, not even its designer knows exactly how it does what it does. This is called the black box of AI.
Businesses can’t afford to have their systems making unexplained decisions, as they have regulatory requirements and reputational concerns and must be able to understand, explain, and prove the logic behind every decision that they make.
Then there is the issue of reliability. Airlines are installing AI-based facial-recognition systems, and China is basing its draconian national surveillance programme on the same technology. AI is being used for marketing and credit analysis and to control cars, drones, and robots. It is being trained to perform medical-data analysis and to assist or replace human doctors. The problem is that, in all of these applications, the AI can be fooled.
Google published a paper last December showing that it could trick AI systems into recognising a banana as a toaster. Researchers Konda Reddy Mopuri, Aditya Ganeshan, and R Venkatesh Babu at the Indian Institute of Science have just demonstrated that they could confuse almost any AI system without even needing, as Google did, knowledge of the data the system had been trained on. With AI, security and privacy are afterthoughts, just as they were early in the development of computers and the Internet.
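To see how small such a trick can be, here is an illustrative Python sketch of a fast-gradient-style attack on a toy linear classifier. The numbers and the classifier are stand-ins; the published attacks target real vision networks, but the principle is the same:

```python
import numpy as np

# Nudge every input feature by the same tiny amount, each in the
# direction that moves the classifier's score toward the wrong answer.
# All values here are random stand-ins; real attacks do this to pixels.

rng = np.random.default_rng(1)
w = rng.normal(size=100)        # the "trained" classifier's weights
x = rng.normal(size=100)        # an input it currently classifies

score = float(w @ x)            # positive score = class A, negative = B
# Smallest uniform per-feature step guaranteed to cross the boundary:
epsilon = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print(score > 0, float(w @ x_adv) > 0)   # the decision flips...
print(round(epsilon, 4))                 # ...from a barely visible change
```

The change to each individual feature is tiny, and to a human observer the two inputs look the same, yet the system’s answer reverses.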
Leading AI companies have handed over the keys to their kingdoms by making their tools available as open source. Software used to be considered a trade secret, but developers realised that having others look at and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public for free to explore, adapt, and improve. China’s Baidu has also made its self-driving software, Apollo, available as open source.
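In practice, “handing over the keys” means that a production-grade model is a few lines of code away. A sketch, assuming the openly released PyTorch/torchvision stack (torchvision 0.13 or later); any comparable open-source toolkit works the same way, and the image variable at the end is a hypothetical input you would supply:

```python
from torchvision import models

# A few lines pull down an openly released, production-grade image
# classifier with its freely published trained weights.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()  # ready-to-use network
preprocess = weights.transforms()                # matching input pipeline

# From here on it is implementation, not research (my_image is
# hypothetical, supplied by you):
# logits = model(preprocess(my_image).unsqueeze(0))
# prediction = weights.meta["categories"][logits.argmax()]
```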
Software’s real value lies in its implementation: what you do with it. Just as China built its tech companies and India created a $160 billion IT-services industry on top of tools created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications. There is nothing stopping India from leaping ahead and creating a trillion-dollar AI industry.
Vivek Wadhwa is a Distinguished Fellow at Harvard Law School and Carnegie Mellon University
The views expressed are personal
