San Francisco says it will use AI to reduce bias when charging people with crimes

San Francisco has announced a "bias mitigation tool" that uses basic AI techniques to automatically redact information from police reports that could identify a suspect's race. It is designed to prevent prosecutors from being influenced by racial bias when deciding whether to charge someone with a crime. The tool is expected to be ready by early July.

The tool will remove not only descriptions of race, but also descriptors such as eye color and hair color, according to the SF district attorney's office. The names of people, locations, and neighborhoods that could consciously or unconsciously tip a prosecutor off that a suspect is of a particular racial background will also be removed.

"If you look at the people who are imprisoned in this country, they will be disproportionately large in numbers of men and women," prosecutor George Gascón said in a media briefing today. He pointed out that seeing a name like Hernandez can immediately tell the prosecutors that a person is of Latino descent, which may affect the outcome.

A DA spokesperson tells The Verge that the tool will also remove details about police officers, including their badge numbers, in case a prosecutor happens to know them and could be biased for or against their report.

San Francisco currently uses a far more limited manual process to keep prosecutors from seeing these details – the city only redacts the first two pages of the document, and prosecutors see the rest of the report. "We had to create machine learning around this process," Gascón said. The district attorney's office calls this a "first-in-the-nation" use of the technology, and says it is not aware of any agency that has used AI for this before.

The tool was built by Alex Chohlas-Wood of the Stanford Computational Policy Lab, who also helped develop the NYPD's Patternizr system, which automatically searches case files to find patterns of crime. Chohlas-Wood says the new tool is essentially a lightweight web app that uses several algorithms to automatically redact a police report, recognizing words in the report using computer vision and replacing them with generic versions such as Location, Officer #1, and so on.

Chohlas-Wood says the tool is in its final stages of development and will be made available to others within a few weeks. He says it uses a technique called named-entity recognition, among other components, to identify what to redact.
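The DA's office has not published the tool's code, so the following is only a minimal sketch of what named-entity-recognition-based redaction can look like, assuming the spaCy library and its small English model as stand-ins for whatever components the real system uses. The placeholder labels (Person #1, Location #1) and the sample report text are purely illustrative.

```python
# Illustrative NER-based redaction sketch (not the DA office's actual tool).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Entity types to mask, mapped to generic placeholder names.
REDACT_LABELS = {
    "PERSON": "Person",
    "GPE": "Location",   # cities, neighborhoods, countries
    "LOC": "Location",
    "FAC": "Location",   # named facilities, streets
    "NORP": "Group",     # nationalities, religious/political groups
}

def redact(text: str) -> str:
    """Replace recognized entities with numbered generic placeholders."""
    doc = nlp(text)
    seen = {}      # (label, entity text) -> assigned placeholder
    counters = {}  # running count per placeholder type
    out, last = [], 0
    for ent in doc.ents:
        label = REDACT_LABELS.get(ent.label_)
        if label is None:
            continue
        key = (label, ent.text.lower())
        if key not in seen:
            counters[label] = counters.get(label, 0) + 1
            seen[key] = f"{label} #{counters[label]}"
        out.append(text[last:ent.start_char])  # keep text before the entity
        out.append(seen[key])                  # swap in the placeholder
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

if __name__ == "__main__":
    report = ("Officer Smith responded to a robbery on Mission Street. "
              "The suspect, Juan Hernandez, fled toward the Tenderloin.")
    print(redact(report))
    # e.g. "Officer Person #1 responded to a robbery on Location #1. ..."
```

Even in this toy form, the limits the DA's office acknowledges are visible: the quality of the redaction depends entirely on what the entity recognizer catches, and anything it misses stays in the report.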

Without seeing the system working on real police reports – for legal reasons, the DA's office said it could only show us a mock-up – it is unclear how well it will work. When a journalist asked whether it would redact other descriptors, such as cross-dressing, Gascón could only say that today is a starting point and that the tool will evolve. It will also only be used for the initial decision to charge someone. Prosecutors' final decisions will be based on the complete, unredacted report. And if the initial charging decision rests on video evidence, that can of course also reveal a suspect's race.

The decision to charge people with crimes is just one, relatively late point where police bias can come into play. When police officers decide to arrest a suspect – or worse – that happens long before this process takes place. And a journalist in the audience pointed out that a 2017 study found that "people of color receive more serious charges at the initial booking stage," which can come many hours before the district attorney's office steps in with its decision.

It will be interesting to see whether the new tool helps – at the moment, AI is better known for introducing biases than removing them, particularly in a controversial form known as "predictive policing."