Facial surveillance is a threat to privacy
The technology, which the NCRB is keen to use, is ineffective and discriminatory. Reconsider it
In early July, an independent report found that four out of five – that is, 80% – of the people identified as criminal suspects by the London Metropolitan Police’s facial recognition technology were innocent. The report caused an immediate outcry in the United Kingdom. Tellingly, it came only two months after the city of San Francisco in the United States – long known for being at the cutting edge of new technology – banned the use of facial recognition technology. Responding to the decision, the American Civil Liberties Union noted that remote surveillance technologies such as facial recognition “provides government with unprecedented power to track people going about their daily lives.”
These developments, however, appear to have passed Indian policy-makers by. The National Crime Records Bureau’s (NCRB) decision to call for a tender for an “Automated Facial Recognition System” (AFRS) is one glaring example. Expounding the belief that the AFRS will provide a highly accurate method of identification, the tender advocates its use in crime-fighting and in the identification of missing persons. The fine print reveals, however, that the information used for the AFRS will be obtained from CCTV feeds, and matched against information contained in any other potentially existing database (specific examples include passport data, information held by ministries, and so on). Furthermore, information obtained from CCTVs (whether in public or private spaces), as well as footage from police stations, will itself be stored at the NCRB.
What the NCRB tender reveals, therefore, is a proposal for an open-ended data-mining operation in the name of efficient crime-fighting, with open-ended merging of information held in different databases, in a manner that can enable profiling. Under this system, every person is a potential suspect, whose facial records can be captured and stored for a potential future criminal investigation. To take an analogy: it is like being asked by the police for your email password, on the basis that the State must have access to everyone’s emails in order to effectively fight the circulation of child pornography. While the objective is laudable, the method shows a complete disregard for vitally important individual rights. This is why Vidushi Marda, an expert on data protection, has observed that “facial recognition systems are a fundamental threat to privacy by their very nature.”
Moreover, even if we were to accept the argument that a little bit of privacy must be given up in the interests of efficient crime-fighting – and even if we were to assume (against all evidence) that the government will put in place adequate measures to protect against data theft and breaches of privacy (contrary to the Supreme Court’s observations, there is still no law on surveillance and data protection) – the crime-fighting rationale fails on its own terms. As pointed out at the beginning of this piece, as recently as this very month, it was found that the state-of-the-art facial recognition technology used by the London police had an error rate of 81%. Marda points out that recorded success rates in India are even lower – down to 2%.
However, there are even deeper problems with facial recognition, going beyond its inability to perform the very task that its backers recommend it for. Facial recognition technology has repeatedly been found to be discriminatory. Experiments conducted across the globe have found that facial surveillance technologies show racial and gender biases: they are better at recognising the features of white people and of men, and more prone to error when it comes to women or people with darker skin. This means not only that the use of facial recognition technology risks miscarriages of justice (because of its inaccuracy), but also that these miscarriages work to further entrench existing socio-economic disadvantages in society.
It is nobody’s case that the government – and investigative agencies – should not harness technology to improve their crime-fighting techniques. However, if the use of technology ends up disempowering the very people that better crime-fighting is meant to protect, then we should take a step back and reconsider what we are doing in the name of efficiency. Facial recognition technology does not work effectively, works in a discriminatory fashion, and is a privacy nightmare that captures both the innocent and the guilty. The NCRB should reconsider its tender.
Gautam Bhatia is an advocate in the Supreme Court
The views expressed are personal