
AI predicts crime a week in advance with 90 per cent accuracy – but may also perpetuate racist bias

RoboCop may be getting a 21st-century reboot, as an algorithm has been developed that predicts future crime a week in advance with 90 per cent accuracy.

The artificial intelligence (AI) tool predicts crime by learning patterns over time and geographic locations of violent and property crimes.

Data scientists at the University of Chicago trained the computer model using public data from eight major US cities.

However, it has proved controversial, as the model does not account for systemic bias in police enforcement and its complex relationship with crime and society.

Similar systems have been shown to perpetuate racial bias in the police force, but these researchers argue that their model could also be used to expose the bias.

It also found that socio-economically deprived areas may receive disproportionately less police attention than wealthier neighborhoods.

A new artificial intelligence (AI) tool, developed by scientists in Chicago, USA, predicts crime by learning patterns over time and geographic locations of violent and property crimes

Violent crimes (left) and property crimes (right) recorded in Chicago in the two-week period between April 1 and 15, 2017. These incidents were used to train the computer model

Accuracy of the model’s predictions of violent crimes (left) and property crimes (right) in Chicago. The prediction is made one week in advance and the event is registered as a successful prediction if a crime is recorded within ± one day of the predicted date

The computer model was trained using historical data of criminal incidents from the city of Chicago from 2014 to the end of 2016.

It then predicted crime levels for the weeks following this training period.
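As the figure caption above notes, each forecast is made one week ahead and counts as a successful prediction if a crime of that type is recorded within ± one day of the predicted date. The short sketch below illustrates that check; the function name and example dates are illustrative and not taken from the study’s published code.

from datetime import date

def is_successful_prediction(predicted, observed_dates):
    # A forecast counts as a hit if any observed incident of the predicted type
    # falls within one day of the predicted date (the +/- one-day window above).
    return any(abs((observed - predicted).days) <= 1 for observed in observed_dates)

# Example: a crime forecast for 10 April 2017 is counted as successful if a
# matching incident is recorded in the same area on 9, 10 or 11 April.
print(is_successful_prediction(date(2017, 4, 10), [date(2017, 4, 11)]))  # True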

The incidents it was trained on fell into two broad categories of events that are less prone to enforcement bias.

These were violent crimes, such as murders, assaults and batteries, and property crimes, including burglary, theft and motor vehicle theft.

These types of incident are also more likely to be reported to police, even in urban areas where there is historical mistrust of and a lack of cooperation with law enforcement.
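A hedged sketch of how these two broad categories might be pulled from Chicago’s public crime records is shown below. The column names (‘Date’, ‘Primary Type’) follow the city’s open data portal export, but the exact category mapping and file name are assumptions for illustration, not the study’s own code.

import pandas as pd

VIOLENT = {"HOMICIDE", "ASSAULT", "BATTERY"}
PROPERTY = {"BURGLARY", "THEFT", "MOTOR VEHICLE THEFT"}

# Hypothetical export of the public Chicago crime records.
crimes = pd.read_csv("chicago_crimes.csv", parse_dates=["Date"])

# Keep the 2014-2016 training window described in the article.
train = crimes[(crimes["Date"] >= "2014-01-01") & (crimes["Date"] <= "2016-12-31")]

violent_crimes = train[train["Primary Type"].isin(VIOLENT)]
property_crimes = train[train["Primary Type"].isin(PROPERTY)]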

HOW DOES THE AI WORK?

The model was trained using historical data from criminal incidents in Chicago from 2014 to the end of 2016.

It then predicted crime levels for the weeks following the training period.

The incidents it was trained on fell into two categories: violent crimes and property crimes.

It takes into account the time and spatial coordinates of individual crimes and detects patterns in them to predict future events.

It divides the city into spatial tiles about 300 meters wide and predicts crime in these areas.

The model also takes into account the temporal and spatial coordinates of individual crimes and detects patterns in them to predict future events.

It divides the city into spatial tiles about 300 meters wide and predicts crime in these areas.

This is in contrast to viewing areas as ‘hotspots’ of crime that spread to surrounding areas, as previous studies have done.

These hotspot approaches often rely on traditional neighborhood or political boundaries, which are also subject to bias.
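The tiling idea described above can be sketched as follows: each incident is assigned to a roughly 300-metre square tile based on its coordinates, rather than to a predefined neighborhood or political boundary. The projection step and helper names here are assumptions; the study’s actual gridding code may differ.

import numpy as np

TILE_SIZE_M = 300  # approximate tile width, about 1.5 average city blocks

def tile_index(x_m, y_m):
    # Map projected coordinates (in metres) to integer (row, column) tile indices
    # by flooring each coordinate to its 300 m cell.
    return np.stack([np.floor(y_m / TILE_SIZE_M), np.floor(x_m / TILE_SIZE_M)], axis=1).astype(int)

# Example: two incidents 250 m apart along the x axis fall in neighbouring tiles.
print(tile_index(np.array([100.0, 350.0]), np.array([50.0, 50.0])))
# [[0 0]
#  [0 1]]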

Co-author Dr James Evans said: ‘Spatial models ignore the natural topology of the city.

‘Transport networks respect streets, footpaths, train and bus lines, and communication networks respect areas with a similar socio-economic background.

‘Our model makes it possible to discover these connections.

‘We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates fresh perspectives on urban neighborhoods, enables us to ask new questions and lets us evaluate our policing in new ways.’

According to the results, published yesterday in Nature Human Behaviour, the model performed just as well on data from seven other US cities as it did in Chicago.

Graphical representation of the modeling approach of the AI tool. A city is divided into small spatial tiles about 1.5 times the size of an average city block and the model calculates patterns in the successive event streams captured on different tiles

These were Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland and San Francisco.

The researchers then used the model to study police response to incidents in areas of diverse socioeconomic backgrounds.

They found that when crimes took place in wealthier areas, they attracted more police resources and resulted in more arrests than those in deprived neighborhoods.

This suggests bias in police response and enforcement.
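The kind of comparison described here could be sketched roughly as below: group recorded incidents by the income level of their neighborhood and compare how often they led to an arrest. The file and column names (‘arrest’, ‘median_income’) are purely illustrative assumptions, not the study’s actual variables.

import pandas as pd

incidents = pd.read_csv("incidents_with_outcomes.csv")  # hypothetical input file

# Label each incident's neighborhood as lower- or higher-income relative to the median.
cutoff = incidents["median_income"].median()
incidents["income_group"] = (incidents["median_income"] > cutoff).map(
    {True: "higher income", False: "lower income"}
)

# Share of incidents that resulted in an arrest, by income group.
print(incidents.groupby("income_group")["arrest"].mean())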

Senior author Dr Ishanu Chattopadhyay said: ‘What we see is that when you put pressure on the system, it takes more resources to arrest more people in response to crime in an affluent area, and that draws police resources away from areas of lower socio-economic status.’

The model also found that when crimes occurred in a more affluent area, they attracted more police resources and resulted in more arrests than those in deprived neighborhoods.

Accuracy of the model’s predictions about property and violent crime in major US cities. a: Atlanta, b: Philadelphia, c: San Francisco, d: Detroit, e: Los Angeles, f: Austin. All of these cities show comparably high predictive performance

The use of computer modeling in law enforcement has proved controversial, as there are concerns that it may further entrench existing police biases.

However, this tool is not intended to direct police officers to areas where it predicts crime may occur, but rather to inform current police strategies and policies.

The data and algorithm used in the study have been made public so that other researchers can examine the results.

Dr Chattopadhyay said: ‘We have created a digital twin of urban environments. If you enter data about what happened in the past, it will tell you what will happen in the future.

‘It’s not magical, there are limitations, but we’ve validated it and it works very well.

‘Now you can use this as a simulation tool to see what will happen if crime increases in one part of the city, or if enforcement is increased in another area.

‘If you apply all those different variables, you can see how the systems react to them.’

Can an AI lie detector with facial recognition tell the police when suspects aren’t telling the truth?

Forget the old ‘good cop, bad cop’ routine – soon police may turn to artificial intelligence systems that can reveal a suspect’s true emotions during interrogations.

The face-scanning technology is said to be based on micro-expressions, small involuntary facial movements that can betray when people are lying and even reveal their true feelings.

London-based startup Facesoft has trained an AI on micro-expressions seen on real people’s faces, as well as in a database of 300 million expressions.

The company is in talks with both the British and Mumbai police forces about possible practical applications for the AI technology.

