Algo-Bias & Citizenship 🚔📡

Dear Reader,

Citizenship is the right to have rights. - Hannah Arendt

If you torture the data long enough, it will confess. - Ronald Coase

Cognitive bias is defined as a tendency, inclination, or prejudice for or against something or someone. It relies on patterns born of stereotypes and pre-judgements, and it often leads to rushed, bad decisions. Some of these biases are buried deep in data and/or situations and are not part of our active thinking. When data warehousing, the precursor to today's big data, came about in the early 1990s, it was thought to be a step on the road to making better decisions through data. The last three decades have proved that infoglut is in no way a sure path to better decisions, whether by humans or by machines.

This becomes even more pressing in one of the most important areas of human welfare: feeling secure within, and outside, our homes. Security today is extremely complicated, as well as opaque, often riddled with alphabet-soup organizations. However, policing is one of the most impactful faces of the security apparatus that we, as citizens, encounter in some form all over the world.

In the last decade, there has been active talk of AI and its various manifestations in policing, along with significant experiments the world over. Most of them have escaped public accountability due to the sheer complexity involved. However, in the last four years, some light has found its way into the vast gray of predictive policing.

In 2010, the Los Angeles Police Department got in touch with academics at nearby universities to work on a project aimed at anticipating crimes before they happen. One of the leaders of the project was an anthropologist, Jeffrey Brantingham, who was quoted in a Los Angeles Times feature on predictive policing:

"The naysayers want you to believe that humans are too complex and too random β€” that this sort of math can't be done,” said Jeff Brantingham, a UCLA anthropologist who is helping to supervise the university's predictive policing project.

β€œBut humans are not nearly as random as we think,” he said. β€œIn a sense, crime is just a physical process, and if you can explain how offenders move and how they mix with their victims, you can understand an incredible amount.” (https://www.latimes.com/archives/la-xpm-2010-aug-21-la-me-predictcrime-20100427-1-story.html)

The police department was trying out the technology because Los Angeles is one of the most under-policed areas in the United States. Even though the LAPD is a large police department, the city keeps growing, and the police need all the help they can get.

So, what happened in the last decade? To sum it up, humans turned out to be more complicated than people like Jeffrey Brantingham assumed. A few studies have found that the software actually perpetuates the very biases it claimed to be free of. However, the company co-founded by Prof. Brantingham maintains that predictive mapping is useful, and that there might be other issues that require attention.

Let us move away from the western world to take stock of what is happening in India in the name of predictive policing. The Delhi Police became the first police force in India to use predictive policing, through CMAPS (Crime Mapping Analytics and Predictive System), in association with ISRO (Indian Space Research Organization). The idea behind CMAPS was to collect spatial data from ISRO every three minutes, combine it with Dial 100 (police helpline) data, and use it to predict crime hotspots. Theoretically, these hotspots could help identify criminal behavioural patterns. In a very interesting, though data-constrained, study, two researchers found three biases at play here: historical, measurement, and representation bias (https://www.vidushimarda.com/storage/app/media/uploaded-files/fat2020-final586.pdf). The other challenge is the lack of representational access to the data - something that the western counterparts have been able to offer, even if in a severely restricted manner.
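To make the historical-bias concern concrete, here is a deliberately toy sketch in Python. Nothing in it describes CMAPS or any vendor's actual model; the grid, the incident counts, and the patrol rule are all invented assumptions. It ranks grid cells by recorded incidents, "patrols" the top cells, and lets patrolled cells log more incidents simply because officers are there to record them. Even though the underlying incident rate is identical everywhere, the historically busier cells stay on top round after round.

```python
# Toy illustration only: a made-up grid "city", made-up incident counts,
# and a made-up patrol rule. Not CMAPS, not PredPol, not any real system.
import random
from collections import Counter

random.seed(0)
GRID = 5  # a 5x5 toy city grid

# "Historical" recorded incidents per cell, standing in for past call data.
# These counts reflect where incidents were *recorded*, not where crime occurs.
history = Counter({(r, c): random.randint(0, 5) for r in range(GRID) for c in range(GRID)})

def top_hotspots(counts, k=3):
    """Rank cells by recorded incidents and call the top k the 'hotspots'."""
    return [cell for cell, _ in counts.most_common(k)]

def patrol_round(counts, hotspots):
    """One cycle: every cell has the same true incident rate, but patrolled
    cells always get their incidents recorded (and logged more heavily),
    while unpatrolled cells are only noticed about half the time."""
    new = Counter(counts)
    for cell in counts:
        incident = random.random() < 0.3               # uniform true rate
        recorded = incident and (cell in hotspots or random.random() < 0.5)
        if recorded:
            new[cell] += 2 if cell in hotspots else 1  # heavier logging on patrol
    return new

counts = history
for step in range(5):
    hotspots = top_hotspots(counts)
    counts = patrol_round(counts, hotspots)
    print(f"round {step}: hotspots = {hotspots}")
```

Run it and the same handful of cells keep reappearing as hotspots, because each prediction shapes the data that feeds the next prediction. That feedback loop is, in essence, the historical and measurement bias the study flags.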

Predictive policing is racist. Rashida Richardson, a lawyer who studies algorithmic bias, has this to say about it: “They lead to biased outcomes that do not improve public safety. I think many predictive policing vendors like PredPol fundamentally do not understand how structural and social conditions bias or skew many forms of crime data.”

The work of AI researchers such as Timnit Gebru, formerly at Google, shows that facial recognition systems often discriminate against minorities, and especially minority women, due to a lack of training data. Such issues can be rectified by introducing better training data, but what do we do in a sphere like predictive policing, where training data is not sufficient? Keeping humans in the feedback loop ought to be introduced and encouraged, as they are better equipped to identify potential biases. Eliminating bias altogether might be just a step too far yet.
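To see how "lack of training data" translates into unequal error rates, here is a minimal, hypothetical sketch in Python. The two groups, their feature distributions, and the one-number threshold "classifier" are pure invention for illustration; no real face-recognition system works this way. Because group B contributes only a handful of training examples, the learned threshold fits group A, and the test error on group B comes out far higher.

```python
# Toy illustration only: invented groups, invented features, a one-parameter
# "model". It shows skewed training data producing skewed error rates.
import random

random.seed(1)

def sample(group, label, n):
    """Each (group, label) pair sits at a slightly different feature value,
    so no single threshold can serve both groups equally well."""
    centre = {("A", 0): 0.0, ("A", 1): 1.0, ("B", 0): 0.5, ("B", 1): 1.5}[(group, label)]
    return [(random.gauss(centre, 0.3), label, group) for _ in range(n)]

# Group A dominates the training set; group B is badly under-represented.
train = sample("A", 0, 500) + sample("A", 1, 500) + sample("B", 0, 10) + sample("B", 1, 10)
test = sample("A", 0, 200) + sample("A", 1, 200) + sample("B", 0, 200) + sample("B", 1, 200)

# "Training": pick the threshold that minimises errors on the training data.
candidates = [x for x, _, _ in train]
threshold = min(candidates, key=lambda t: sum((x >= t) != y for x, y, _ in train))

# The threshold ends up tuned to group A, so group B pays in error rate.
for group in ("A", "B"):
    rows = [(x, y) for x, y, g in test if g == group]
    errors = sum((x >= threshold) != y for x, y in rows)
    print(f"group {group}: error rate = {errors / len(rows):.1%}")
```

In this toy, adding more group-B examples to the training set pulls the threshold toward a fairer compromise - the "fix it with better training data" route that is available to face recognition, but much harder to arrange for predictive policing.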

