HT Explains | The pandemic’s missing AI response and why India should care
Back in early 2020, a host of 21st-century tools gave humanity hope that a pandemic in this era would be easier to overcome than the one 100 years ago. In many ways, these tools did help: technology let the world shelter at home while still being able to work, study, play and stay in touch with loved ones. At least for those with the means.
Among these tools was Artificial Intelligence (AI), the bleeding-edge technology touted to revolutionise the future. Two recent papers, one by researchers at the Alan Turing Institute and another from the University of Cambridge, highlight how the gains from applying AI to these problems have been limited at best, and misleading in many instances.
The problems, to be sure, stem not from the principle of AI-driven predictive analysis (the part deemed most likely to help in understanding a virus that continues to befuddle scientists), but from issues in data science and design. These made the applications not just redundant but, at times, a potential threat that could exacerbate the pandemic with incorrect insights.
These issues are relevant to India, where such problems are acute and where AI-based technologies are being integrated into state services, law enforcement and judicial functions.
To understand the pitfalls, here is a look back at what AI could and could not do over the past 18 months.
What is and isn’t AI in Covid?
There is a difference between the gains from technology as a whole and from AI in particular in fighting the pandemic.
The basic contact-logging function of the NHS Covid app or Aarogya Setu is not AI – these are simple algorithms that log close-proximity contacts between two people carrying mobile phones. Developers of Aarogya Setu have claimed the platform includes data analytics that could potentially be classified under one of AI's branches, but there is little technical information in the public domain to judge this.
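To see why such contact logging is not AI, here is a minimal, hypothetical sketch of the rule-based logic involved. The names and thresholds are illustrative assumptions, not taken from either app:

```python
from dataclasses import dataclass

# Assumed thresholds, for illustration only.
PROXIMITY_METRES = 2.0   # how close counts as a "close contact"
MIN_SECONDS = 15 * 60    # minimum exposure duration

@dataclass
class Encounter:
    other_device_id: str   # anonymous ID broadcast over Bluetooth
    distance_m: float      # estimated from Bluetooth signal strength
    duration_s: int        # how long the devices stayed in range

def log_close_contacts(encounters):
    """Return the encounters that count as close contacts.

    This is a fixed rule, not machine learning: no model is trained
    and nothing is predicted - the app simply records who was near
    whom, and for how long."""
    return [e for e in encounters
            if e.distance_m <= PROXIMITY_METRES and e.duration_s >= MIN_SECONDS]

contacts = log_close_contacts([
    Encounter("id-a", 1.2, 1200),   # close and long enough: logged
    Encounter("id-b", 5.0, 3600),   # too far away: ignored
    Encounter("id-c", 0.8, 60),     # too brief: ignored
])
print([c.other_device_id for c in contacts])  # -> ['id-a']
```

Everything the program does is spelled out in advance by its authors, which is what separates it from the data-driven, pattern-learning systems the rest of this piece discusses.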
An example of AI in the Covid-19 response is BlueDot, a Canadian company that claims to have identified outbreak epicentres early.
At the same time, some oft-cited tools for estimating outbreaks and resource requirements are not examples of AI – they are based on biostatistical analysis.
What was AI expected to do in Covid?
According to the OECD, it was poised to help understand the virus, detect budding outbreaks, improve diagnosis (perhaps even warn when a person with Covid-19 was at risk of deteriorating), and aid scientists in developing therapies. In some instances, AI researchers even attempted to build detection tools that could identify an infection from the sound of a person's cough or the way they breathed.
How did these efforts turn out?
The Turing Institute report identifies some limited ways it helped. Project Odysseus used “data from traffic cameras and sensors to provide anonymised, near-real time estimates of pedestrian densities and distances”. This was used for “numerous interventions to keep people socially distanced, such as moving bus stops, widening pavements and closing parking bays”. Another effort is still underway to answer some key clinical questions, with results expected later in 2021.
The Cambridge researchers’ report, published in Nature, identifies several ways in which other AI efforts created problems in the clinical context. An MIT Technology Review article, which first reported on the paper, summarised some of the examples. In one, the authors found an algorithm that learnt to treat lying down as a sign of becoming sicker, because it was trained on a mixed set of X-rays in which healthy people were scanned standing up while the sickest were scanned lying down. In another case, a model latched on to a text font that some hospitals commonly used, treating it as a marker of severe cases.
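The X-ray example is a classic case of what researchers call "shortcut learning". The toy simulation below (purely illustrative, not from the Nature paper) shows how a model that reads only a confound, such as patient posture, can score perfectly on biased data yet collapse to chance on balanced data:

```python
import random

random.seed(0)

def make_cases(n, confounded):
    """Each case is (lying_down, is_sick). In the confounded set,
    posture and illness coincide perfectly - as when sick patients
    are scanned lying down and healthy ones standing up."""
    cases = []
    for _ in range(n):
        is_sick = random.random() < 0.5
        lying = is_sick if confounded else random.random() < 0.5
        cases.append((lying, is_sick))
    return cases

def shortcut_model(lying_down):
    # "Learns" only the confound: predicts sick iff lying down,
    # without looking at the lungs at all.
    return lying_down

def accuracy(cases):
    return sum(shortcut_model(l) == s for l, s in cases) / len(cases)

biased = make_cases(1000, confounded=True)
balanced = make_cases(1000, confounded=False)
print(f"accuracy on confounded data: {accuracy(biased):.2f}")   # 1.00
print(f"accuracy on balanced data:   {accuracy(balanced):.2f}")  # roughly 0.50
```

A validation set drawn from the same confounded data would report near-perfect accuracy, which is why such flaws can go unnoticed until the model meets real-world patients.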
“This pandemic was a big test for AI and medicine,” the article quotes researcher Derek Driggs as saying. “It would have gone a long way to getting the public on our side,” he says in the piece. “But I don’t think we passed that test.”
Why should India pay attention?
India has planned a series of efforts to nudge innovation and interest in AI – the Prime Minister announced the “Safal” and “AI for All” programmes in July, which build awareness modules for schools and the general public. The urgency is not misplaced – India lags countries like China in the AI industry.
But India, too, has struggled to apply AI fairly and accurately. For instance, the Samagra Vedika project of the Telangana government excluded roughly 14,000 people from the state benefits registry when it attempted to weed out duplicate or redundant registrations. The culprit was a lack of standardised data.
The issues also extend to ethics and test the philosophical limits of technology’s interventions. Can algorithms carry out executive functions? How much of an impact will they and should they have in the judicial domain, where they are being deployed for case research? How do we ensure auditing and oversight on code, some of which may originate from private enterprise?
Some of these questions have sparked strong debates in other countries, where bad and biased AI has led to innocent people being wrongly incarcerated and to Black patients receiving less health care than equally ill white counterparts.
What should India do?
The potential for AI technologies to push the frontiers of what is possible is clear. These technologies now rival human intelligence in several domains, as when DeepMind’s AlphaGo beat the world’s number 1 Go player. Its successor AlphaZero is now regarded as the most formidable Go, and possibly chess, opponent. Similarly, another DeepMind algorithm, AlphaFold, late last year demonstrated remarkable accuracy in determining how proteins fold, making headway on one of biology’s deepest mysteries.
DeepMind’s breakthroughs have progressed from merely mimicking what humans do to challenging human intuition, building on years of research in neuroscience and machine learning, coupled with the deep funding and talent pool that come with Alphabet’s backing.
While India draws inspiration from success stories like DeepMind, it must also pay attention to the challenges. It can begin with the examples of AI that did not work during the pandemic and the ethics debates now raging in the West.