OpenAI deems its voice cloning tool too risky for general release

A new tool from OpenAI that can generate a convincing clone of a person’s voice from just 15 seconds of recorded audio is being deemed too risky for general release, as the AI lab seeks to minimize the threat of harmful disinformation in a global election year.

Voice Engine was first developed in 2022 and an initial version was used for the text-to-speech feature built into ChatGPT, the organization’s leading AI tool. But its power has never been publicly revealed, in part because of the “cautious and informed” approach OpenAI is taking to releasing it more widely.

“We hope to spark a dialogue about the responsible use of synthetic voices, and how society can adapt to these new capabilities,” OpenAI said in an unsigned blog post. “Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

In its post, the company shared examples of real-world uses of the technology by various partners who were given access to build it into their own apps and products.

Education technology company Age of Learning uses it to generate scripted voiceovers, while “AI visual storytelling” app HeyGen gives users the ability to generate fluent translations of recorded content that retain the accent and voice of the original speaker. For example, generating English from an audio clip of a French speaker will produce speech with a French accent.

Notably, researchers at the Norman Prince Neurosciences Institute in Rhode Island used a poor-quality 15-second clip of a young woman giving a presentation for a school project to “restore the voice” she had lost due to a vascular brain tumor.

“We are choosing to preview but not widely release this technology at this time,” OpenAI said, to “increase societal resilience against the challenges posed by increasingly compelling generative models.” In the near future, it says, “We encourage steps such as phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.”

OpenAI also called for exploring “policies to protect the use of individuals’ voices in AI” and “educating the public to understand the capabilities and limitations of AI technologies, including the potential for misleading AI content”.

Voice Engine generations are watermarked, OpenAI said, allowing the organization to trace the origin of generated audio. Currently, it added, “our terms with these partners require explicit and informed consent from the original speaker and do not allow developers to build ways for individual users to create their own voices.”

But while OpenAI’s tool stands out for its technical simplicity and the small amount of original audio required to generate a convincing clone, competitors are already available to the public.

With just “a few minutes of audio,” companies like ElevenLabs can generate a complete voice clone. To limit the potential for harm, the company has implemented a “no-go voices” safeguard, intended to detect and prevent the creation of voice clones “that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK”.
