See how flawed AI emotion recognition is with this small browser game


Tech companies don’t just want to identify you with facial recognition; they also want to read your emotions using AI. For many scientists, however, claims about the ability of computers to understand emotions are fundamentally flawed, and a small in-browser web game built by researchers at the University of Cambridge aims to show why.

Go to emojify.info, and you can watch your computer “read” your emotions through your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust and anger) that the AI will try to identify. However, you will likely find that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral”. And even when you manage a smile that convinces your computer that you are happy, you know you were faking it.
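For a rough sense of how this kind of in-browser classifier works, here is a minimal sketch using the open-source face-api.js library. This is an illustrative assumption about the general technique, not the actual code behind emojify.info, which has not been examined here.

```typescript
// Minimal sketch of webcam emotion "reading" with face-api.js
// (an assumed stand-in; emojify.info's implementation may differ).
import * as faceapi from 'face-api.js';

async function readEmotions(video: HTMLVideoElement): Promise<void> {
  // Load a lightweight face detector and the expression classifier.
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  const result = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();

  if (!result) return; // no face found in this frame

  // `expressions` maps labels (neutral, happy, sad, angry, fearful,
  // disgusted, surprised) to confidence scores between 0 and 1.
  // The model scores facial muscle configurations, not feelings:
  // a posed grin and a genuine smile can yield the same "happy" score.
  const [label, score] = Object.entries(result.expressions)
    .sort((a, b) => b[1] - a[1])[0];
  console.log(`Detected "${label}" (${(score * 100).toFixed(1)}%)`);
}
```

The game’s point is visible in the pipeline itself: frame by frame, pixels are mapped to expression scores, and nothing about the player’s actual inner state ever enters the computation.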

This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to show that the premise underlying much emotion recognition technology, namely that facial movements are intrinsically linked to changes in feeling, is flawed.

“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “When I smile, I am happy. When I frown, I am angry. But the APA did this major review of the evidence in 2019, and they found that people’s emotional space cannot be easily deduced from their facial movements.” In the game, says Hagerty, “you have the chance to move your face quickly to mimic six different emotions, but the point is, you didn’t feel six different things inwardly, one after the other in a row.”

A second minigame on the site drives this point home by asking users to tell the difference between a wink and a blink, something machines cannot do. “You can close your eyes, and it could be an involuntary act or it could be a meaningful gesture,” Hagerty says.
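It is easy to see why the distinction is invisible at the signal level. A common way to detect eye closure is the eye aspect ratio (EAR), computed from facial landmarks, and the measurement comes out identical whether the closure is a tic or a gesture. The sketch below again uses face-api.js as an assumed stand-in, and the threshold value is likewise an assumption for illustration.

```typescript
import * as faceapi from 'face-api.js';

// Eye aspect ratio (EAR): vertical eye opening divided by horizontal
// width, computed from the six landmark points around one eye
// (indices 0 and 3 are the corners; 1, 2 and 4, 5 the lid points).
// The ratio drops sharply when the eye closes.
function eyeAspectRatio(eye: faceapi.Point[]): number {
  const dist = (a: faceapi.Point, b: faceapi.Point) =>
    Math.hypot(a.x - b.x, a.y - b.y);
  const vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4]);
  const horizontal = 2 * dist(eye[0], eye[3]);
  return vertical / horizontal;
}

async function detectEyeClosure(video: HTMLVideoElement): Promise<void> {
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceLandmark68Net.loadFromUri('/models');

  const result = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks();
  if (!result) return;

  const CLOSED = 0.2; // assumed EAR threshold for a closed eye
  const leftClosed = eyeAspectRatio(result.landmarks.getLeftEye()) < CLOSED;
  const rightClosed = eyeAspectRatio(result.landmarks.getRightEye()) < CLOSED;

  // The geometry is all the model sees: one closed eye could be a wink
  // at a friend or an involuntary blink. Intent never enters the math.
  console.log({ leftClosed, rightClosed });
}
```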

Despite these problems, emotion recognition technology is rapidly gaining ground, with companies promising that such systems can be used to screen job applicants (giving them an “employability score”), spot would-be terrorists, or judge whether commercial drivers are sleepy or drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, humans also make mistakes when reading emotions on people’s faces, but handing this task to machines comes with specific drawbacks. For one, machines can’t read other social cues the way humans can (as with the wink/blink dichotomy). Machines also make automated decisions that humans cannot appeal, and they can conduct surveillance at scale without our awareness. In addition, as with facial recognition systems, emotion-detection AI is often racially biased, for example by more often judging the faces of Black people as showing negative emotions. All of these factors make AI emotion detection far more troubling than humans’ errors in reading the feelings of others.

“The dangers are many,” says Hagerty. “With human miscommunication, we have many options to correct that. But once you automate something, or the reading is done without your knowledge or consent, those options are gone.”