OpenAI warns that users could become emotionally addicted to its voice mode

At the end of July, OpenAI began rolling out an eerily human-like voice interface for ChatGPT. In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lead some users to form emotional bonds with its chatbot.

The warnings are included in a “system card” for GPT-4o, a technical document that outlines what the company believes are the risks associated with the model, along with details about the safety testing and mitigation efforts the company is undertaking to reduce potential risk.

OpenAI has come under scrutiny in recent months after several employees working on the long-term risks of AI left the company. Some subsequently accused OpenAI of taking unnecessary risks and muzzling dissenters in its race to commercialize AI. Revealing more details of OpenAI’s security regime may help mitigate criticism and reassure the public that the company is taking the issue seriously.

The risks discussed in the new system card are wide-ranging and include the possibility that GPT-4o could amplify societal biases, spread disinformation, and contribute to the development of chemical or biological weapons. It also reveals details of tests designed to ensure that AI models do not try to escape their controls, deceive people, or hatch catastrophic plans.

Some outside experts praise OpenAI for its transparency, but say it could go further.

Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools, notes that OpenAI’s system card for GPT-4o doesn’t include extensive details about the model’s training data or who owns that data. “The question of consent needs to be addressed in order to create such a large dataset that spans multiple modalities, including text, image, and voice,” Kaffee says.

Others point out that the risks could change as the tools are used in practice. “Their internal review should be just the first step in ensuring AI safety,” says Neil Thompson, an MIT professor who studies AI risk assessments. “Many risks only become apparent when AI is used in the real world. It is important that these other risks are catalogued and assessed as new models emerge.”

The new system card highlights how quickly AI risks are evolving with the development of powerful new features such as OpenAI’s voice interface. In May, when the company introduced its voice mode, which can respond quickly and handle interruptions naturally, many users noted that it seemed too flirtatious in demonstrations. The company subsequently faced criticism from actress Scarlett Johansson, who accused it of copying her speaking style.

A section of the system card titled “Anthropomorphization and Emotional Dependency” explores the problems that arise when users perceive AI in human terms, something seemingly exacerbated by the human-like voice mode. During red teaming, or stress testing, of GPT-4o, OpenAI researchers noticed instances of users speaking in ways that conveyed a sense of emotional connection to the model; for example, people used language like “This is our last day together.”

Anthropomorphism could lead users to place more trust in a model’s output when it “hallucinates” incorrect information, OpenAI says. Over time, it could even affect users’ relationships with other people. “Users could form social relationships with AI, reducing their need for human interaction, which could benefit lonely people but possibly hurt healthy relationships,” the document says.

Joaquín Quiñonero Candela, a team member working on AI safety at OpenAI, says that voice mode could evolve into a uniquely powerful interface. He also notes that the kind of emotional effects seen with GPT-4o can be positive, for example by helping those who feel lonely or need to practice social interactions. He adds that the company will be closely studying anthropomorphism and emotional connections, including by monitoring how beta testers interact with ChatGPT. “We don’t have results to share at this time, but it’s on our list of concerns,” he says.
