
Spies in disguise: Can AI really predict and prevent crimes?

Feb 25, 2022 10:49 PM IST

Crime forecasting makes for great movie plots. In reality, the machines, like humans, struggle to tell a villain from a victim. See how cities are faring with AI-based policing experiments and what it means for our fundamental rights

In Steven Spielberg’s Minority Report (2002), based on a dystopian 1956 novella by Philip K Dick, it’s 2054 and the world has advanced to self-driving cars (no such luck yet), voice-controlled home automation (check), robotic insects (somewhat), and gesture-controlled computers (got them for sure).

With increased surveillance our freedom and privacy may be compromised, say experts. Any form of policing involves a decision to give away a certain amount of freedom in exchange for a certain promise of security. (Shutterstock)

Amid it all, technology has also advanced to the point of predictive policing. This is a world in which masked men are apprehended as they set out to rob a bank; a mugger is stopped in his tracks, moments before his next crime.

There’s a human element (what tech system can function without one?). In this plot, it’s psychics or precogs, people with clairvoyant ability. In the film, the precogs can foretell the crime of murder. Even so, it turns out, the system is fatally flawed. The precogs can’t always agree on what they see; their lack of consensus is concealed, to preserve the image of the system as flawless.

There were echoes of some of this in 2011, when the US got its first predictive policing software driven by artificial intelligence. The Los Angeles Police Department’s flagship programme, Operation Laser, was used to pinpoint locations connected to gun and gang violence. It crunched information about past offenders over a two-year period, using technology developed by the data analysis firm Palantir, and sought to predict which individuals were most likely to commit a violent crime, based on their personal criminal histories.

In addition to Laser, the LAPD was also using a piece of software called PredPol to predict “hot spots” with a high likelihood of property-related crimes. There were no dramatic tales of robberies halted before they could happen, or old ladies saved from a mugger. Instead, by 2019, Laser was shut down by the LAPD.

Both programmes had been widely criticised and discredited. The LAPD couldn’t explain the basis of the data it was given, and so couldn’t be seen to act on it. “We discontinued Laser because we went to reassess the data,” a representative of the force told the Los Angeles Times in 2019. “It was inconsistent. We’re pulling back.”

In 2020, the LAPD cancelled its contract with PredPol too. But, in a development reminiscent of the Terminator movies, the red light was soon blinking again. In 2019, the LAPD had begun working with a company called Voyager Analytics on a trial basis. Voyager claims it has an AI-driven solution that can piece together a picture of human behaviour, affinity and intent from what people do online and on social media. “Our deep dive insights platform helps analysts reveal hidden connections, influencers and mediators who may be facilitating criminal activity,” the company website explains.

According to the Guardian, the LAPD’s trial with Voyager ended in November 2019; it is not clear why, or whether the LAPD was still pursuing a contract with the company.

Even in the 2002 film Minority Report starring Tom Cruise, which is set in 2054, predictive policing is seen as a fatally flawed system.

Arrested by data

Voyager’s technology and services are representative of an emerging ecosystem of tech companies responding to law enforcement’s requirements for such tools to expand their policing capabilities.

For law enforcement, the motivation to use these tools is clear. They could help pinpoint crime hotspots, identify suspects, or detect behaviour that would otherwise go unnoticed. Understandably, with departments under enormous pressure to keep crime rates low and prevent attacks, this seems like a viable solution. The risk lies in handing too many of these decisions to an algorithm.

Investigative documents and reports also suggest that the LAPD has worked with, or considered working with, other companies similar to Voyager: data analytics and social media surveillance firms such as MediaSonar, Geofeedia and Dataminr.

Even assuming, for the moment, that such a data-driven AI programme is leagues ahead of all those that routinely fail to predict what we might want to watch, eat or do, there are two major hurdles to accurate predictive policing: the problem of data, and the problem of invisible bias.

As Facebook’s facial recognition and Twitter’s content warnings have shown, a machine can only form its opinions of right and wrong (or even of human and non-human) based on the opinions of those who taught it how. This means it takes trial and error, after the fact, to weed out bias; a dangerous precedent if such a system were ever accepted in law enforcement.

The issue of data is a looming one too. Take a simple example: say most crimes recorded in LA currently occur in a set number of zones within that city. Because this is the data the software learns from, and it does not reflect every crime committed in LA, the system stands a healthy chance of entering a vicious feedback loop. It sends more patrols to the zones where it predicts more crime, more crime is therefore recorded in those zones, and the next round of predictions points there even more strongly, simply because that is where the system spends most of its time looking.
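To see how little it takes to set this off, here is a minimal, back-of-the-envelope simulation, written in Python with entirely made-up numbers. The zone names, detection probabilities and the "send patrols to the hottest zone" rule are all invented for illustration; this is a sketch of the feedback-loop argument, not of any real department’s software. Two zones are given exactly the same true crime rate, but the historical records happen to skew slightly towards one of them.

```python
import random

random.seed(0)

# Made-up example: two zones with the SAME true number of crimes per day.
TRUE_CRIMES_PER_DAY = {"zone_a": 10, "zone_b": 10}

# Historical records happen to skew slightly toward zone A.
recorded = {"zone_a": 12, "zone_b": 8}

# Assumed chance that a crime enters the records with / without a patrol nearby.
P_SEEN_PATROLLED = 0.9
P_SEEN_UNPATROLLED = 0.2

for day in range(50):
    # "Prediction": send the day's patrols to the zone with the most recorded crime.
    hot_zone = max(recorded, key=recorded.get)
    for zone, crimes in TRUE_CRIMES_PER_DAY.items():
        p_seen = P_SEEN_PATROLLED if zone == hot_zone else P_SEEN_UNPATROLLED
        # Each crime is recorded only if someone was there to see it.
        observed = sum(1 for _ in range(crimes) if random.random() < p_seen)
        recorded[zone] += observed

print(recorded)  # records, and hence "predictions", pile up in the original hot zone
```

Within a few simulated weeks, zone A has several times the recorded crime of zone B and so keeps attracting the patrols, even though both zones are equally dangerous by construction. The records never get the chance to correct themselves, because the system only looks where its own past output tells it to look.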

Bias, transparency, and constitutional rights would need to be at the forefront of any technology designed for proactive policing, and stringent regulations would need to govern its use, says David L Weisburd, a criminologist who serves as chief science adviser at the US National Police Foundation in Washington, DC, which works to use new technology and innovation to improve law enforcement. Weisburd is also a professor at George Mason University in Virginia and the Hebrew University of Jerusalem.

“Attempts to forecast crime with algorithmic techniques could reinforce existing biases within the system,” says Weisburd. “The evidence for the effectiveness of predictive policing beyond what is already known, ie, the hotspots of crime, is unclear. The first problem with predictive policing is that you have to show that it’s effective. The second is that many of the predictive policing algorithms work privately.”

This means that a vital layer of transparency is lost, with potentially dangerous outcomes. PredPol, for instance, doesn’t entirely share information on its methods. “What data are you using and how?” asks Weisburd. “A police agency must be transparent to the public.”

Reading the fine print

The final hurdle is a more subtle but no less important one: should an individual’s or a population’s freedom and privacy be compromised based on an algorithmic output, in the absence of any real-world evidence of wrongdoing?

There can be no ultimate answer to this one. As Weisburd points out, any form of policing involves a decision to give away a certain amount of freedom in exchange for a certain promise of security.

It is this exchange that allows a police force to make arrests, press charges, question suspects and search premises. The trick, Weisburd says, is not to give up too much freedom.

Even in fields far removed from law enforcement, technology is turning the privacy trade-off into something users are forced to navigate. One logs into an app to hail a cab without realising what one is giving away in exchange; and what one gives away can change with each upgrade.

When it comes to policing, Weisburd says, it is simply not acceptable for the police to use systems that are not transparent in terms of the data used, and how those data are used. “In modern democracies, you can’t have a drug approved for wide use until you’ve done research on its impacts. That research not only includes whether or not it works but also whether it harms. We should be using the same model in policing. We don’t. Technologies can be effective but cause harm at the same time. Governments need to pay attention to this issue and balance the benefits and potential harm.”
