
Amazon-Powered AI Cameras Used to Detect Emotions of Unwitting UK Train Passengers


Network Rail did not respond to questions submitted by WIRED, including questions about the current state of AI use, emotion detection, and privacy concerns.

“We take the security of the rail network very seriously and use a range of advanced technologies at our stations to protect passengers, our colleagues and the rail infrastructure from crime and other threats,” says a Network Rail spokesperson. “When we deploy technology, we work with police and security services to ensure we are taking proportionate measures and always comply with relevant legislation on the use of surveillance technologies.”

It’s unclear how widely the emotion detection analysis has been deployed, with documents sometimes saying the use case should be “viewed with more caution” and station reports saying it’s “impossible to validate accuracy.” However, Gregory Butler, chief executive of computer vision and data analytics company Purple Transform, which has been working with Network Rail on the trials, says the capability was suspended during the trials and no images were stored when it was active.

Network Rail’s documents on AI testing outline multiple use cases involving cameras that send automatic alerts to staff when they detect certain behaviour. None of the systems use controversial facial recognition technology, which aims to match people’s identities with those stored in databases.

“A primary benefit is faster detection of intrusion incidents,” says Butler, who adds that his company’s analytics system, SiYtE, is used at 18 sites, including train stations and along tracks. In the last month, Butler says, the systems have detected five serious cases of trespassing at two sites, including a teenager who retrieved a ball from the tracks and a man who “spent more than five minutes picking up golf balls” along a high-speed line.

At Leeds train station, one of the busiest outside London, there are 350 CCTV cameras connected to the SiYtE platform, says Butler. “Analytics are being used to measure people flow and identify issues like platform crowding and, of course, trespassing, where the technology can filter out track workers through their PPE uniform,” he says. “AI helps human operators, who cannot monitor all cameras continuously, assess and address security risks and issues promptly.”

Network Rail documents claim that cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by identifying bikes in the footage. “It was established that while the scans could not reliably detect a theft, they could detect a person with a bicycle,” the files say. They also add that new air quality sensors used in the trials could save staff time on manual checks. One AI instance uses sensor data to detect “sweating” floors, which have become slippery with condensation, and alerts staff when they need to be cleaned.

While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate over the use of AI in public spaces. In a document designed to assess data protection issues with the systems, Big Brother Watch’s Hurfurt says there appears to be a “dismissive attitude” towards people who may have privacy concerns. One question asks: “Are some people likely to object or find it intrusive?” A staff member writes: “Normally not, but for some people there is no accounting.”

At the same time, similar AI surveillance systems that monitor crowds are increasingly used around the world. During the Paris Olympics later this year, AI video surveillance will observe thousands of people and try to identify crowd surges, weapon use, and abandoned items.

“Systems that don’t identify people are better than those that do, but I worry there’s a slippery slope,” says Carissa Véliz, associate professor of philosophy at the University of Oxford’s Institute for Ethics in AI. Véliz points to similar AI trials on the London Underground that had initially blurred the faces of people who may have been evading fares, but then changed approach, unblurring the photos and keeping the images for longer than initially planned.

“There is a very instinctive drive to expand surveillance,” Véliz says. “Human beings like to see more, see further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”
