This new technology puts AI in touch with its emotions (and yours)

A new “empathic voice interface” launched today by Hume AI, a New York-based startup, makes it possible to add a range of emotionally expressive voices, plus an emotionally attentive ear, to large language models from Anthropic, Google, Meta, Mistral, and OpenAI, heralding an era in which AI helpers may routinely become more effusive with us.

“We specialize in creating empathetic personalities that speak the way people would speak, rather than stereotypical AI assistants,” says Hume AI co-founder Alan Cowen, a psychologist who has co-authored several research articles on AI and emotions, and who previously worked on emotional technologies at Google and Facebook.

WIRED tested Hume’s latest voice technology, called EVI 2, and found its output similar to that developed by OpenAI for ChatGPT. (When OpenAI gave ChatGPT a flirtatious voice in May, the company’s CEO, Sam Altman, touted the interface as being “like the AI in the movies.” Later, a real movie star, Scarlett Johansson, claimed that OpenAI had stolen her voice.)

Like ChatGPT, Hume is far more emotionally expressive than most conventional voice interfaces. If you tell it that your pet has died, for example, it will adopt an appropriately somber and sympathetic tone. (Also, as with ChatGPT, you can interrupt Hume mid-stream and it will pause and adapt with a new response.)

OpenAI hasn’t said to what extent its voice interface attempts to measure users’ emotions, but Hume’s is designed expressly to do so. During interactions, Hume’s developer interface displays values indicating the levels of things like “determination,” “anxiety,” and “happiness” detected in users’ voices. If you speak to Hume in a sad tone, it will pick up on that too, something ChatGPT doesn’t appear to do.
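For a rough sense of what those developer-facing values could look like, here is a minimal Python sketch. The payload shape and field names are illustrative assumptions, not Hume’s documented API; it simply shows how per-utterance emotion scores might be read and ranked.

```python
# Hypothetical sketch only: the "prosody" payload and its field names are
# illustrative assumptions, not Hume's real EVI API. It shows what values
# for "determination," "anxiety," and "happiness" could look like to a developer.
import json

sample_message = json.dumps({
    "type": "user_message",
    "prosody": {            # assumed: per-utterance emotion scores, 0.0-1.0
        "determination": 0.62,
        "anxiety": 0.18,
        "happiness": 0.74,
        "sadness": 0.05,
    },
})

def top_expressions(message_json: str, n: int = 3) -> list[tuple[str, float]]:
    """Return the n highest-scoring emotion labels from one utterance."""
    scores = json.loads(message_json).get("prosody", {})
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_expressions(sample_message))
# [('happiness', 0.74), ('determination', 0.62), ('anxiety', 0.18)]
```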

Hume also makes it easy to deploy a voice with specific emotions by adding a prompt in its user interface. Here it is when I asked it to be “sexy and flirtatious”:

Hume AI’s “sexy and flirtatious” prompt

And here it is when I told it to be “sad and gloomy”:

Hume AI’s “sad and gloomy” prompt

And here is the particularly unpleasant one when it was asked to be “angry and rude”:

Hume AI’s “angry and rude” prompt
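In practice, that kind of persona control usually comes down to a short natural-language instruction layered onto whatever system prompt the underlying language model receives. The sketch below is a generic illustration of the idea, with made-up prompt text; it is not Hume’s actual configuration mechanism.

```python
# Illustrative sketch, not Hume's configuration API: it assumes only that the
# persona is expressed as a one-line natural-language instruction prepended to
# the system prompt the underlying LLM receives.
BASE_INSTRUCTIONS = "You are a voice assistant. Keep replies short and conversational."

def build_persona_prompt(persona: str) -> str:
    """Combine a one-line persona description with the base system prompt."""
    return f"Adopt a {persona} tone in every reply.\n{BASE_INSTRUCTIONS}"

for persona in ("sexy and flirtatious", "sad and gloomy", "angry and rude"):
    print(build_persona_prompt(persona))
    print("---")
```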

The technology didn’t always seem as polished and smooth as OpenAI’s, and it did behave strangely at times. For example, at one point the voice suddenly sped up and spouted gibberish. But if the voice can be refined and made more reliable, it has the potential to help make humanlike voice interfaces more common and varied.

The idea of recognizing, measuring, and simulating human emotions in technological systems dates back decades and is studied in a field known as “affective computing,” a term introduced by Rosalind Picard, a professor at the MIT Media Lab, in the 1990s.

Albert Salah, a professor at Utrecht University in the Netherlands who studies affective computing, is impressed with Hume AI’s technology and recently demonstrated it to his students. “What EVI appears to be doing is assigning valence and emotional arousal values [to the user], and then modulating the agent’s speech accordingly,” he says. “It’s a very interesting twist on LLMs.”
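Salah’s description amounts to a simple control loop: estimate how positive or negative and how activated the user sounds, then adjust the agent’s delivery to match. The toy Python sketch below illustrates that mapping under assumed parameter ranges; it is not how Hume actually implements it.

```python
# Toy illustration of the idea Salah describes, not Hume's implementation:
# map the user's estimated valence (negative-to-positive) and arousal
# (calm-to-excited), both assumed to lie in [-1, 1], onto speech-style
# parameters a text-to-speech system could use.
from dataclasses import dataclass

@dataclass
class SpeechStyle:
    rate: float    # 1.0 = neutral speaking rate
    pitch: float   # semitone offset from the voice's neutral pitch
    energy: float  # 1.0 = neutral loudness

def modulate_speech(valence: float, arousal: float) -> SpeechStyle:
    """Mirror the user's state: higher arousal -> faster, more energetic;
    lower valence -> lower pitch for a more somber delivery."""
    return SpeechStyle(
        rate=1.0 + 0.25 * arousal,
        pitch=2.0 * valence,
        energy=1.0 + 0.25 * arousal,
    )

# A user who sounds sad and subdued gets a slower, lower, softer reply style.
print(modulate_speech(valence=-0.5, arousal=-0.5))
# SpeechStyle(rate=0.875, pitch=-1.0, energy=0.875)
```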
