Life after life: What it will take to tackle deepfakes
Who owns the image being altered? How much disclosure should be mandated? What about the potential for misuse? In large part, it seems like the answers to questions raised by deepfakes will involve more, and more advanced, AI.
About a year since they first appeared, deepfakes have faded away… or have they? Apps such as MyHeritage use artificial intelligence (AI) and deep-learning programs to let users animate still photos, making them smile, blink, and turn their heads.
MyHeritage calls it “deep nostalgia”. Apps like Avatarify and Wombo, both launched in the second half of 2020, are designed to make still images appear to “sing” – the photographed mouth moves in sync with a chosen audio clip.
“Our goal is to harness technology to put a smile on a billion faces,” says Wombo co-founder Angad Arneja, adding that he wanted to “democratise a niche form of meme-making and provide some light entertainment to those stuck at home in these dark times.”
There are fears that the easily available technology will push online content into even darker territory. In 2017, a Reddit user who went by the name Deepfakes used similar technology to morph celebrities’ faces into adult videos. Many worry that doctored videos could worsen cyberbullying and harassment and make the battle against fake news and misinformation harder to win than it already is.
“The phenomenon of deepfake videos — or synthetic media, more generally — raises profound and urgent questions about how advances in technology can undermine trust, and sow misinformation and disinformation,” says Rob Reich, director at the Center for Ethics in Society and associate director at the Institute for Human-Centered Artificial Intelligence at Stanford University.
The ethical tangles alone are considerable. For instance, who owns a family photo? If someone else reanimates an image of a loved one against your wishes, is there any legal recourse?
In large part, it seems like answers to the questions raised by deepfakes will involve more, and more advanced, AI. Governments, universities, and tech firms are all funding research to more effectively detect fakes.
Facebook ran a Deepfake Detection Challenge in 2019-20, working with researchers from several universities; the top-performing model achieved 82.56% accuracy.
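To give a sense of what an accuracy figure like that measures, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical detector that scores each video frame with a "fake" probability; the scores are averaged into a video-level verdict and compared against ground truth. The function names, scores, and labels are all made up for illustration and are not from the challenge itself.

```python
# Hypothetical sketch: aggregating per-frame deepfake scores into a
# video-level verdict, then computing accuracy over an evaluation set.
# All scores and labels below are invented for illustration.

def video_score(frame_scores):
    """Average per-frame 'fake' probabilities into one video score."""
    return sum(frame_scores) / len(frame_scores)

def classify(frame_scores, threshold=0.5):
    """Label a video fake (True) if its mean frame score exceeds the threshold."""
    return video_score(frame_scores) > threshold

def accuracy(predictions, labels):
    """Fraction of videos whose predicted label matches the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Toy evaluation set: (per-frame scores, ground-truth label), True = fake.
videos = [
    ([0.9, 0.8, 0.95], True),   # clearly fake
    ([0.1, 0.2, 0.05], False),  # clearly real
    ([0.6, 0.4, 0.7], True),    # borderline, correctly flagged
    ([0.6, 0.7, 0.4], False),   # borderline real, misclassified
]
preds = [classify(scores) for scores, _ in videos]
truth = [label for _, label in videos]
print(f"accuracy = {accuracy(preds, truth):.2%}")  # → accuracy = 75.00%
```

In practice, detectors face exactly the trade-off the toy set hints at: borderline videos near the decision threshold are where accuracy is lost, which is why even the challenge's best model stopped well short of 100%.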
“Part of the onus to prevent harm must be on the apps themselves,” says Priyanka Khimani, founder of law firm Khimani & Associates, who works on media and technology cases. “Self-regulation is possible, but apps that prioritise and commercially thrive on constant engagement ought to put stringent measures in place. There’s also a need for more robust detection mechanisms and take-down tools on these apps.”
On the Wombo app, only preset songs can be used, and the lip-sync on the videos looks exaggerated enough for anyone to see it’s not genuine. “We have decided to not remove the watermark for any of our users. Even for premium users, Wombo videos have a watermark. As a result, everyone knows that it is a deepfake and not a real video,” Arneja says.
But technology is improving all the time and regulation is only now catching up. In April 2021, the European Commission proposed rules under which deepfakes would need to be clearly labelled. If implemented and adopted worldwide, such rules could help protect unwitting users, without stifling creativity.