Address the deepfake problem | Analysis
War and sex have been pivotal drivers of consumer tech in the past. Satellite navigation, penicillin, microwave ovens and superglue all trace their origins to battlefield imperatives. It is also no coincidence that for a couple of decades, the flagship consumer electronics show in Las Vegas was held alongside the adult entertainment expo, usually in the same building. The Internet owes a great deal to the United States military for its inception, and to the pornography industry for its rapid diffusion.
More recently, politics has joined these two as a third driver of consumer tech. Political actors the world over have begun adopting new technological tools to substantially shape public opinion. Both porn and politics are now at the vanguard of the consumer demand that drives digital technologies.
The Barack Obama campaign of 2008 marked the beginning of this trend, with the Republican opposition coming a cropper against the former’s social media onslaught. Subsequent electoral battles have seen favoured technologies of the season emerge, including targeted digital marketing through social media posts and tweets, and constructed echo chambers of viral political opinion using personal messaging apps.
The recent video of a Bharatiya Janata Party (BJP) politician speaking in doctored English, with an accent that may appeal to a certain voter base, has sparked allegations of the first resort to a "deepfake" in Indian politics. This episode forces the question: will deepfakes become the new and shiny tech tools at the disposal of the propaganda industry?
Deepfake videos are a substantial advance over the clumsy image morphs of the nineties. They are produced using deep learning systems known as generative adversarial networks (GANs), which can believably mimic the real world, be it images, music, speech or prose. As the Nancy Pelosi episode recently showed, the greater the public availability of video footage of an individual, the stronger the possibility of algorithmically generating fake videos of her.
There are three major problems with deepfakes that render them particularly worrisome. The first relates to the compelling narrative that the moving image creates in our minds. From fake news to phishing emails, the world wide web is, to be sure, a crucible of fraud and deception. Yet deepfake videos trouble us because we place different levels of trust in what we read and what we view. The former is an expression of something inside a person's mind; the latter records physical movement in the world. Because we know we have many more data points with which to visually assess and repudiate a fake in the latter scenario, we also place more confidence in our judgment. A fake well done will, therefore, attract far less self-doubt.
The second problem is that refuting deepfake videos becomes far more difficult because of the manner in which GANs operate to create them. Even videos and audio clips doctored using much less advanced technologies are not easy to refute, given the technical processes of alteration. The problem becomes worse with GANs. These networks deploy an architecture of two neural networks pitted against each other. The generator network analyses datasets from the real world and generates new data that appears to belong to those datasets, while the discriminator network evaluates the generated data for authenticity. Through multiple cat-and-mouse rounds between the two, the generated output attains high levels of authenticity, spawning synthetic data that nearly matches real data. By its very design, then, verifying such synthetic data demands significant data and algorithms capable of parsing it. The discerning member of a WhatsApp family group may find her voice of reason lost in such situations.
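The generator-versus-discriminator contest described above can be sketched in a deliberately tiny, illustrative example. Everything here is an assumption made for the sketch: the "real" data is just a one-dimensional Gaussian standing in for genuine footage, and both "networks" are single linear units rather than the deep networks real deepfake systems use. The point is only to show the cat-and-mouse structure: the detector learns to separate real from fake, and the forger learns to fool the detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Numerically stable logistic function; clipping avoids overflow in exp.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# "Real" data: samples from N(4, 0.5), a stand-in for genuine footage features.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator (the "forger"): a linear map from noise, parameters (w, b).
# Discriminator (the "detector"): logistic regression, parameters (a, c).
w, b = 1.0, 0.0
a, c = 0.0, 0.0
lr_d, lr_g, batch = 0.05, 0.03, 64

for step in range(3000):
    # --- Discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b                     # fakes produced by the generator
    sr = sigmoid(a * xr + c)           # detector's scores on real samples
    sf = sigmoid(a * xf + c)           # detector's scores on fakes
    a -= lr_d * np.mean(-(1 - sr) * xr + sf * xf)
    c -= lr_d * np.mean(-(1 - sr) + sf)

    # --- Generator step: push d(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b
    sf = sigmoid(a * xf + c)
    gx = -(1 - sf) * a                 # gradient of generator loss w.r.t. xf
    w -= lr_g * np.mean(gx * z)
    b -= lr_g * np.mean(gx)

# After many rounds, the forger's output distribution drifts toward the real one.
fake_mean = float(np.mean(w * rng.normal(0.0, 1.0, 1000) + b))
print(f"generator output mean after training: {fake_mean:.2f} (real mean 4.0)")
```

In this toy setting the generator, which starts by producing samples centred at zero, is dragged toward the real data's mean purely by the discriminator's feedback, which is the essence of why GAN output is hard to refute: it is, by construction, whatever the best available detector could not distinguish from the real thing.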
The fact that human judgment can no longer serve as a first line of defence against this barrage of automatically generated deepfakes also makes it abundantly clear that we are confronted with an ethical choice. The broad choice is to either sign up for a world where truth is algorithmically determined, or one where we protect the human element at the cost of progress with GANs and their significant potential to advance the domain of artificial intelligence.
The ethical choice outlined above must at some point translate into regulatory action, a matter that is predominantly the preserve of politics. This creates the third problem: the special attraction that political campaigns potentially hold for deepfakes. If political actors themselves benefit from letting GANs write the rules of truth and falsehood, they may do no better than leave matters to self-regulation. We already saw this with the Internet and Mobile Association of India's ineffective voluntary code to tackle misinformation during the 2019 parliamentary elections. Deepfakes are more concerning, and India must avoid being formulaic in her response.
When politics drives consumer tech, it is ethically different from the porn industry's early adoption. As Jonathan Coopersmith noted in 1998, the subject matter of the latter stands in the way of publicly accepting its endorsement. But with political actors, we run a higher risk of failing to evaluate the adopted technology for its long-term harms. For this additional reason too, independent regulators like the Election Commission of India must begin to address the deepfake problem before it becomes an unmanageable crisis.
Ananth Padmanabhan is dean, Academic Affairs, Sai University and visiting fellow, Centre for Policy Research.
The views expressed are personal