OpenAI, the company behind ChatGPT, has unveiled a “scary” new tool, Sora, capable of producing hyper-realistic videos from text, prompting warnings from experts.
Sora, unveiled by OpenAI on Thursday, was shown off with striking examples such as drone footage of Tokyo in the snow, waves crashing against the cliffs of Big Sur, and a grandmother enjoying a birthday party.
Experts have warned that the new artificial intelligence tool could wipe out entire industries, such as film production, and lead to a rise in deepfake videos ahead of the crucial US presidential election.
“Generative AI tools are evolving very quickly, and we have social media, which is an Achilles heel in our democracy, and it couldn’t have happened at a worse time,” Oren Etzioni, founder of TruMedia.org, told CBS.
“As we try to resolve this, we are facing one of the most consequential elections in history,” he added.
OpenAI’s Sora tool created this video of golden retriever puppies playing in the snow
The tool was given the prompt “drone view of waves crashing against the steep cliffs along Big Sur’s Garay Point Beach” to create this hyper-realistic video.
Another AI-generated video of Tokyo in the snow has surprised experts with its realism
The quality of AI-generated images, audio, and video has improved rapidly over the past year, with companies such as OpenAI, Google, Meta, and Stability AI racing to create more advanced and accessible tools.
“Sora is capable of generating complex scenes with multiple characters, specific types of movement, and precise subject and background details,” OpenAI explains on its website.
“The model understands not only what the user has requested in the message, but also how those things exist in the physical world.”
The tool is currently being tested and evaluated for potential security risks, and there is no date available yet for its public release.
The company has revealed examples that are unlikely to be offensive, but experts warn that the new technology could unleash a new wave of extremely realistic deepfakes.
“We are trying to build this plane as we fly it, and it will land in November, if not sooner, and we don’t have the Federal Aviation Administration, we don’t have the history and we don’t have the tools necessary to do it,” Etzioni warned.
Sora “will make it even easier for malicious actors to generate high-quality deepfake videos and give them greater flexibility to create videos that could be used for offensive purposes,” Dr. Andrew Newell, chief scientific officer at the identity verification firm iProov, told CBS.
“Voice actors or people who make short videos for video games, for educational or advertising purposes will be the most immediately affected,” Newell warned.
Deepfake videos, including those of a sexual nature, are becoming a growing problem, both for private individuals and those with a public profile.
Sora was asked to create this video: “a drone camera circles around a beautiful historic church built on a rocky outcrop along the Amalfi Coast.”
The tool’s interpretation of: “A Chinese Lunar New Year celebration video with a Chinese dragon”
‘A young man in his 20s is sitting on a cloud in the sky, reading a book’
“Look how far we’ve come in just one year of generating images. Where will we be in a year?” Michael Gracey, a film director and visual effects expert, told the Washington Post.
“We will take several important security measures before Sora is available in OpenAI products,” the company wrote.
“We are working with red team members (experts in areas such as misinformation, hateful content and bias) who will adversarially test the model,” the company said, adding: “We are also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora.”
Deepfake images gained wider attention earlier this year when AI-generated sexual images of Taylor Swift circulated on social media.
The images originated on the website Celeb Jihad and depicted Swift in a series of sexual acts while dressed in Kansas City Chiefs memorabilia and in the stadium.
The star was left “furious” and considered taking legal action.
President Joe Biden also spoke about the use of AI and revealed that he has fallen for deepfakes of his own voice.
“It’s already happening. Artificial intelligence devices are being used to fool people. Deepfakes use AI-generated audio and video to defame reputations,” Biden said, “spread fake news and commit fraud.”
“With AI, scammers need only a three-second recording of your voice. I’ve seen one of myself a couple of times and said, ‘When the hell did I say that?’” Biden told a crowd of officials.
He then talked about technology’s ability to trick people through scams. IT experts have also warned about the potential for abuse of AI technology in the political space.
“When the hell did I say that?” President Joe Biden said he had seen an AI-generated video of himself. He warned of potential abuses of the technology as he signed new executive actions.
On Friday, several major technology companies signed a pact to take “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.
Executives at Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok have promised to take preventive measures.
“Everyone recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” Nick Clegg, president of global affairs at Meta, said after signing.
It comes as Nvidia, which makes computer chips used in artificial intelligence technology, has seen its value explode. At one point last week, Nvidia closed at $781.28 per share, giving a market capitalization of $1.78 trillion.
That’s higher than Amazon’s market capitalization of $1.75 trillion.
It was the first time since 2002 that Nvidia was worth more than Amazon at the market close, according to CNBC.
The California-based company has seen its shares rise 246 percent in the past 12 months as demand for its AI server chips grows. Those chips can cost more than $20,000 each.