Analysis | Facial recognition will threaten individual liberty

By Vidushi Marda and Apar Gupta
Jul 26, 2019 09:11 PM IST

The system is unreliable. Deploying it may institutionalise the discrimination of a diverse demographic of people

With the rise in crime and insecurity, many are looking towards technological solutions. A predominant application is the widespread installation of CCTV cameras and, more recently, the government’s proposed Automated Facial Recognition System (AFRS). A tender released by the National Crime Records Bureau for the AFRS seeks a system capable of “criminal identification, verification and its dissemination among various police organizations and units across the country”. What would a functional AFRS mean for people in a city like New Delhi, where the state government recently commissioned the installation of 1.4 lakh CCTV cameras? How would the integration of CCTV cameras and the AFRS work?

Being watched will become synonymous with being safe, only because of a constant, perpetual curfew on individual autonomy(Reuters file photo for representation)

Illusory safety but certain discrimination: Imagine that you are walking home from work and pass a crowded market. Law enforcement is keeping a close eye on movements because of increased instances of purse snatching. This doesn’t worry you; why would it? You’re just passing through your local market to get back home. The AFRS and CCTV cameras are in play to catch suspected criminals, not an innocent person like you. Each time you pass a camera, the system confirms that your face doesn’t match those of suspected criminals. In doing so, it creates a comprehensive map of your face, as it does for every other individual who passes a camera, recording, measuring, and mapping your features. Many may even feel a fuzzy sense of warmth and safety as the camera scans and the AFRS performs a facial match. But these presumed benefits of safety and security, however instinctively felt, do not hold up even in theory.

The use of facial recognition provides a veneer of technological objectivity without delivering on its promise, while at the same time facilitating policing based on societal norms. The system is unreliable. It watches you, classifies you with questionable accuracy, and you are never quite sure what inferences it draws about you. An independent study of the use of facial recognition by London’s Metropolitan Police found that the technology fails in real-world scenarios, reaching the wrong conclusion 81% of the time. This mirrors results in India, where the Delhi Police’s use of facial recognition to find missing children in 2018 was found to be accurate only 2% of the time. Accuracy rates have been demonstrated to fall significantly for vulnerable populations, including women and people with darker skin. By deploying such technology, we may be institutionalising systematic discrimination against the diverse appearances of large numbers of people in a city such as Delhi.

Policing individual conduct to social norms: The second large area of harm is to individual privacy, best explained by the theory of the chilling effect. The chilling effect describes how people alter behaviour that is perfectly legal but may be seen as deviant: fearing social judgment or sanctions, they routinely self-censor their words and actions. With CCTVs and the AFRS, the risks are greater. Beyond enforcing a disciplined line at a ticket counter, in a diverse city such as Delhi, where people from mixed social and economic realities share public spaces, surveillance may turn into a method to police individual conduct. Here, CCTV and the AFRS will put the individual autonomy of Delhi’s residents in a deep freeze: from innocuous displays of affection between young couples, to the use of a public space such as a colony park by domestic help. These are real possibilities which cannot be wished away.

It is crucial to have safeguards against this and, more importantly, to carry out feasibility studies and gather evidence on the use of such technology in public spaces. These deployments, which at present proceed without any legal framework, policy consultation, safeguards, or respect for the Supreme Court’s judgment on privacy, would fundamentally change the character of public spaces, converting them into socially policed zones. Being watched will become synonymous with being safe, only because of a constant, perpetual curfew on individual autonomy. This risks further entrenching the marginalisation and discrimination of vulnerable sections of society, and acutely exposing them to harm.

We recognise that people will still trust the system since the video is ultimately gathered in a public space. Most people continue to hold the mistaken belief that public spaces carry no privacy protections. Many will further argue that a surveillance system provides an emotional security blanket, comforting even if it fails to prevent theft, like a scarecrow. Such arguments are centred on an emotional belief that risks real harm. This is not a technically informed policy position, and acting on it would greatly damage individual liberty and rights.

Vidushi Marda is a lawyer and global research-lead on AI at Article 19, and a non-resident research analyst at Carnegie India. Apar Gupta is a lawyer and the Executive Director of the Internet Freedom Foundation

The views expressed are personal
