Scientists develop a robot that matches people’s expressions

Scientists have developed a robotic head that matches the facial expressions of nearby people in real time, thanks to its pliable blue face.

The autonomous bot, called Eva, uses deep learning, a form of artificial intelligence (AI), to ‘read’ and then mirror the expressions on human faces via a camera.

Eva can express the six basic emotions – anger, disgust, fear, joy, sadness and surprise – as well as a ‘range of more nuanced’ reactions.

Artificial ‘muscles’ – cables and motors to be precise – pull at specific points on Eva’s face, mimicking muscles under our skin.

Scientists from Columbia University in New York say that human-like facial expressions on robots’ faces can create trust between humans and their robot colleagues and caretakers.

HOW DOES EVA WORK?

A human makes facial expressions in front of a camera, which transmits the image in real time to a small screen aimed at the robot’s face.

Eva then relies on a library of facial emotions to mimic the specific facial expression.

This is accomplished by artificial “muscles” – computerized cables and motors – under her blue skin.

Researchers say Eva can mimic the movements of the more than 42 small muscles that attach at various points to the skin and bones of the human face.
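For readers curious about the mechanics, the loop described above can be sketched in a few lines of Python. Everything here is illustrative rather than the Columbia team’s code: classify_expression stands in for the deep-learning expression reader, EXPRESSION_TO_MOTORS for Eva’s library of facial emotions, and drive_motors for the cable controller – all assumed names.

```python
import random
import cv2  # OpenCV; install with `pip install opencv-python`

# Hypothetical mapping from a recognised emotion to target positions
# (0.0-1.0) for the motors that pull on the face like muscles.
EXPRESSION_TO_MOTORS = {
    "joy":     [0.9, 0.2, 0.8],
    "sadness": [0.1, 0.7, 0.3],
    "anger":   [0.2, 0.9, 0.1],
}

def classify_expression(frame):
    """Placeholder for the deep-learning network that 'reads' the face."""
    return random.choice(list(EXPRESSION_TO_MOTORS))

def drive_motors(targets):
    """Placeholder for the controller that tugs Eva's cables."""
    print("motor targets:", targets)

cap = cv2.VideoCapture(0)      # the camera aimed at the human's face
for _ in range(100):           # mirror for a short burst of frames
    ok, frame = cap.read()
    if not ok:
        break
    drive_motors(EXPRESSION_TO_MOTORS[classify_expression(frame)])
cap.release()
```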

Most robots today have been developed to mimic human abilities, such as grasping, lifting, and moving from one place to another.

A detail that is therefore often missing is a human-like facial expression. The researchers point out that robots tend to have “the blank and static face of a professional poker player.”

Eva’s bright blue face is inspired by the Blue Man Group – an American performance art company with three mute blue-faced artists.

“The idea for EVA took shape a few years ago when my students and I started noticing that the robots in our lab were staring at us through plastic, slippery eyes,” said Hod Lipson, director of the Creative Machines Lab at Columbia University.

Lipson saw a similar trend in a supermarket, where he encountered restocking robots wearing name badges – and, in one case, a bot with a hand-knitted hat.

“People seemed to humanize their robot colleagues by giving them eyes, an identity or a name,” he said.

“This made us wonder, if eyes and clothes work, why not make a robot with a super expressive and responsive human face?”

The first phase of the project began in Lipson’s lab several years ago, when the team constructed Eva’s disembodied bust, using several muscle points controlled by a computer.

The team used 3D printing to fabricate parts with complex shapes that seamlessly integrated with Eva’s skull.

Researchers then used a multi-stage training process to enable Eva to read and mimic the faces of nearby people in real time.

First they had to teach Eva what her own face looked like. To do this, the team filmed hours of footage of her pulling a series of random faces.

Eva is seen here during the training phase – making random facial expressions while being recorded on a camera

Then, like a human looking at itself on Zoom, Eva’s internal neural networks learned to link the different faces in the video images to the muscle movements needed to make them.

In other words, Eva saw herself making a certain face (such as a happy face) and learned how to imitate it.
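In machine-learning terms, this first stage amounts to training an ‘inverse model’: a network that looks at a frame of Eva’s own face and predicts the motor commands that produced it. The PyTorch sketch below shows the general idea; the architecture, image size and motor count are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

N_MOTORS = 12  # assumed motor count for illustration; not given in the article

# Maps a 96x96 RGB frame of Eva's own face to the motor positions (0-1)
# that would have produced it.
inverse_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_MOTORS), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)

# Stand-ins for the self-observation footage: pairs of (frame, motor command)
# recorded while Eva pulled random faces.
frames   = torch.rand(64, 3, 96, 96)
commands = torch.rand(64, N_MOTORS)

for step in range(200):
    loss = nn.functional.mse_loss(inverse_model(frames), commands)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```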

The final stage was to essentially replace Eva’s own face with a human face, captured on camera, using a second neural network.

After several refinements and iterations, Eva acquired the ability to read human facial gestures from a camera and respond by mirroring that human’s facial expression.
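Under the same assumptions, this final stage can be pictured as a second network that maps a camera frame of a human face directly to motor targets, so Eva can mirror the expression she sees. Again, this is one plausible reading of the description, not the researchers’ actual implementation.

```python
import torch
import torch.nn as nn

N_MOTORS = 12  # same assumed motor count as in the previous sketch

# Hypothetical second network: from a camera frame of a *human* face
# straight to motor targets for Eva's cables.
human_to_motors = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_MOTORS), nn.Sigmoid(),
)

frame = torch.rand(1, 3, 96, 96)  # one (synthetic) frame of the human's face
with torch.no_grad():
    targets = human_to_motors(frame)
print("motor targets:", targets.squeeze().tolist())
```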

After weeks of tugging cables to make Eva smile, frown or look upset, the team noticed that her blue, disembodied face could elicit emotional responses from their lab colleagues.

“I was minding my own business one day when Eva suddenly gave me a big, friendly smile,” Lipson said. “I knew it was purely mechanical, but I smiled back reflexively.”

While Eva is currently still a lab experiment, such technologies may one day have useful, practical applications.

For example, robots capable of responding to a wide range of human body language would be useful in workplaces, hospitals, schools and homes.

“There’s a limit to how much we humans can emotionally engage with cloud-based chatbots or disembodied smart home speakers,” Lipson said.

“Our brains seem to respond well to robots that have some sort of recognizable physical presence.”

WHAT IS DEEP LEARNING?

Deep learning is a form of machine learning built around algorithms with a wide variety of applications.

It is a field inspired by the human brain and focuses on building artificial neural networks – layered structures of simple processing units.

It originally grew out of simulations of the brain, with the aim of making learning algorithms more capable and easier to use.

Processing large amounts of complex data then becomes much easier, and researchers can trust the algorithms to draw accurate conclusions within the parameters they have set.

Existing task-specific algorithms remain better for narrow, well-defined tasks and goals, but deep learning can learn from a much broader range of data.
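As a concrete illustration, the toy PyTorch example below trains a small two-layer network to learn XOR, a function no single linear layer can represent. Stacking layers is what puts the ‘deep’ in deep learning; the network size and training settings here are arbitrary choices for the demo.

```python
import torch
import torch.nn as nn

# XOR: a problem one linear layer cannot solve, but two stacked layers can.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.05)

for _ in range(2000):
    loss = nn.functional.binary_cross_entropy(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(x).round().squeeze().tolist())  # typically [0.0, 1.0, 1.0, 0.0]
```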
