A new report from Amnesty International has raised serious concerns about the use of AI-based predictive policing systems in Bristol. The report claims that the algorithmic tools used by Avon and Somerset Police reinforce racial bias and unfairly target minority communities. While the police argue that these systems help with crime prevention and resource allocation, Amnesty insists they violate human rights and deepen discrimination. Let’s explore both sides of this debate and the implications of predictive policing.
How Does Predictive Policing Work?
Predictive policing uses advanced computer algorithms to analyze crime data and predict where crimes are likely to occur and who might commit them. Avon and Somerset Police use a system called Qlik Sense, which assigns a “risk score” to individuals based on past criminal records. This system has profiled over 170,000 people in the past six years.
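The force’s actual scoring model is not public. Purely to illustrate the general idea of a recency-weighted risk score, here is a minimal Python sketch; the offence categories, weights, and decay rule are all invented for the example and should not be read as Avon and Somerset’s method.

```python
from datetime import date

# Hypothetical illustration only: the real Qlik Sense model is not
# public, so every field, weight, and decay rule here is an assumption.
OFFENCE_WEIGHTS = {"violent": 5.0, "property": 2.0, "minor": 0.5}

def risk_score(offences: list[dict], today: date) -> float:
    """Recency-weighted sum of offence weights: older records count less."""
    score = 0.0
    for offence in offences:
        years_ago = (today - offence["date"]).days / 365.25
        weight = OFFENCE_WEIGHTS.get(offence["category"], 1.0)
        score += weight / (1.0 + years_ago)  # assumed hyperbolic decay
    return score

history = [
    {"category": "minor", "date": date(2016, 3, 1)},
    {"category": "property", "date": date(2021, 7, 15)},
]
print(round(risk_score(history, date(2022, 1, 1)), 2))
```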
The police say the technology helps them allocate resources efficiently and prevent crime. Critics, however, argue that it unfairly targets specific communities, particularly those with larger Black and minority populations.
Why Amnesty Says It’s ‘Supercharging Racism’
1. Targeting Minority Communities
Amnesty’s report, titled ‘Automated Racism – How police data and algorithms code discrimination into policing’, highlights that predictive policing systems disproportionately focus on areas with large Black and minority populations. Those communities are then policed more heavily, creating a feedback loop: more crime is detected there simply because police are looking for it, while similar offences elsewhere go unrecorded.
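A toy simulation makes this feedback loop easier to see. In the sketch below, two areas share an identical underlying crime rate, but patrols are allocated in proportion to previously recorded crime; every number is invented, and the model is a stylized version of the critics’ argument, not of any real deployment.

```python
import random

random.seed(42)

# Stylized model of the feedback loop critics describe; every number
# here is invented. Two areas share the SAME true crime rate, but
# patrols follow previously *recorded* crime.
TRUE_RATE = 0.3        # identical chance a patrol detects a crime, both areas
PATROLS_PER_DAY = 20
recorded = [12, 10]    # a small initial imbalance in recorded crime

for day in range(200):
    total = sum(recorded)
    for area in range(2):
        # More recorded crime -> more patrols sent to that area.
        patrols = round(PATROLS_PER_DAY * recorded[area] / total)
        # Crime is only detected where police are actually looking.
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(recorded)  # the gap in recorded crime widens despite identical true rates
```

Neither area is actually more criminal, yet the area that starts with more recorded crime attracts more patrols and so generates ever more records, which is exactly the distortion Amnesty describes.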
2. Secret Profiling of Individuals
Another major concern is that individuals are placed in a police database and assigned a “risk score” without their knowledge. A person could therefore be repeatedly stopped and searched on the strength of an AI-generated prediction, even if they have committed no crime. Amnesty warns that such practices can violate basic human rights and undermine trust in the police.
A Real-Life Case: David’s Story
The Amnesty report includes the experience of a Bristol resident named David. He was first stopped by the police in 2016 for placing a sticker on a lamppost. Since then, he has been stopped and searched nearly 50 times.
When David tried to find out what his risk score was or why he was being targeted, the police refused to disclose this information. He described his experience with Avon and Somerset Police as deeply negative, stating that he now requires therapy due to the distress caused by repeated police actions against him.
Amnesty’s Demands: End Predictive Policing
Sacha Deshmukh, CEO of Amnesty International UK, has strongly criticized the use of AI-based policing. He argues that:
- The technology violates fundamental human rights.
- There is no evidence that predictive policing actually makes communities safer.
- The UK government must ban these AI tools to prevent racial discrimination.
“These tools treat entire communities as potential criminals, purely based on their skin color or socio-economic background,” Deshmukh said. He called for more transparency and a right for individuals to challenge decisions made using AI-based risk scores.
Bristol Copwatch: Predictive Policing as Racial Profiling
John Pegram from Bristol Copwatch also criticized the system, saying it punishes people for past mistakes and assumes they will commit crimes again. He argues that:
- Predictive policing does not allow for personal change or rehabilitation.
- It is presumptive and biased, often leading to racist profiling.
- People with minor past offenses remain under surveillance for years, even if they have changed their lives.
Avon and Somerset Police Defend Their Use of AI
Despite the criticism, Avon and Somerset Police insist that their predictive policing system is fair and unbiased. A spokesperson stated:
- The system follows legal and ethical guidelines.
- It only assigns risk scores to individuals charged with a crime in the past two years (see the sketch after this list).
- No personal characteristics like race or address are included in the model.
- The system is not used to predict future crimes but to assess risk levels and allocate police resources.
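To make the stated safeguards concrete, the sketch below encodes the eligibility rule as described: a score is assigned only to someone charged within the past two years, and no protected characteristics enter the check. The function name, fields, and exact 730-day cutoff are assumptions for illustration, since the real implementation is not public.

```python
from datetime import date, timedelta

# Illustrative encoding of the force's *stated* rule; the function
# name, fields, and exact 730-day cutoff are assumptions.
TWO_YEARS = timedelta(days=730)

def eligible_for_scoring(last_charge: date | None, today: date) -> bool:
    """True only if the person was charged with a crime in the past two years."""
    if last_charge is None:
        return False  # never charged: no risk score at all
    # Note what is absent: race, address, and other personal
    # characteristics play no part in the rule, per the force's claim.
    return today - last_charge <= TWO_YEARS

print(eligible_for_scoring(date(2023, 6, 1), date(2024, 9, 1)))  # True
print(eligible_for_scoring(date(2021, 1, 1), date(2024, 9, 1)))  # False
```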
The police also acknowledged that racial disparities exist in the criminal justice system and said they are working towards becoming an anti-racist police service.
The Big Question: Reform or Ban?
The debate over predictive policing highlights a major conflict between technology and human rights. While the police argue that AI helps fight crime efficiently, Amnesty and other critics believe it reinforces racial bias and unfairly targets minority communities.
The key issues include lack of transparency, the potential for discrimination, and the impact on people’s trust in law enforcement. As AI-driven policing continues to grow, the need for fairness, accountability, and oversight becomes more urgent. The question remains: should predictive policing be reformed, or should it be banned altogether?