Grieving man, 33, uses AI chatbot to bring girlfriend ‘back from the dead’

A man used an AI chatbot to bring his fiancée “back from the dead” eight years after her death — even as the software’s creators warned of its dangerous potential to spread disinformation by imitating human speech.

Freelance writer Joshua Barbeau, 33, of Bradford, Canada, lost Jessica Pereira in 2012 when she succumbed to a rare liver disease.

Still grieving, Barbeau came across a website called Project December last year and, after paying $5 for an account, used the service to create a new bot called ‘Jessica Courtney Pereira’, with whom he then began to communicate.

All Barbeau had to do was input Pereira’s old Facebook and text messages and provide some background information, and the software mimicked her messages with stunning accuracy, the San Francisco Chronicle reported.


Some of the sample conversations Barbeau had with the bot he helped create


The story has drawn comparisons to Black Mirror, the British TV series in which characters use a new service to keep in touch with their deceased loved ones.

Project December is powered by GPT-3, an AI model designed by OpenAI, a research group supported by Elon Musk.

The software works by consuming massive amounts of human-written text, such as Reddit threads, so that it can imitate human writing ranging from academic texts to love letters.

Experts have warned that the technology could be dangerous, with OpenAI admitting when it released GPT-3’s predecessor GPT-2 that it could be used in “malicious ways,” including to produce abusive content on social media, “generate misleading news articles” and “impersonate others online.”

The company released GPT-2 as a staggered release, restricting access to the newer version to “give people time” to understand the technology’s “societal implications.”

There’s already concern about AI’s potential to fuel misinformation, with the director of a new Anthony Bourdain documentary admitting earlier this month that he used it to make the late food personality say things he never said on record.

Bourdain, who committed suicide in a Paris hotel suite in June 2018, is the subject of the new documentary Roadrunner: A Film About Anthony Bourdain.

It features the prolific author, chef, and TV host in his own words — drawn from television and radio appearances, podcasts, and audiobooks.

But in a few cases, filmmaker Morgan Neville says he used some technological tricks to put words in Bourdain’s mouth.

As Helen Rosner of The New Yorker reported, in the second half of the film, LA artist David Choe reads from an email Bourdain sent him: “Dude, this is a crazy thing to ask, but I’m curious…”

Then the voice reciting the email shifts – suddenly it’s Bourdain’s, saying: “…and my life is kind of shit right now. You are successful, and I am successful, and I wonder: are you happy?”


Rosner asked Neville, who also directed the 2018 Mr. Rogers documentary Won’t You Be My Neighbor?, how he had found audio of Bourdain reading an email he sent someone else.

It turns out he didn’t.

“There were three quotes there that I wanted his voice for and there were no recordings,” Neville said.

So he gave a software company dozens of hours of audio recordings of Bourdain, and it developed what Neville called an “AI model of his voice.”

The term “deepfake” is a portmanteau of “deep learning” and “fake”; the underlying technique builds on work pioneered in 2014 by Ian Goodfellow, now director of machine learning at Apple’s Special Projects Group.

It is a video, audio or photo that appears authentic, but is in reality the result of artificial intelligence manipulation.

A system studies a target’s input from multiple angles – photos, videos, sound clips or other input – and develops an algorithm to mimic their behavior, movements and speech patterns.


Rosner was able to identify only the one scene where the deepfake audio was used, but Neville admits there were more.

Another manipulated video, of House Speaker Nancy Pelosi appearing to slur her words, helped prompt Facebook to ban fabricated clips in January 2020, ahead of the presidential election later that year.

In a blog post, Facebook said it would remove deceptive, manipulated media edited in ways that “are not obvious to an average person and would likely mislead someone into thinking a subject of the video said words they didn’t really say.”

It’s not clear whether the Bourdain lines, which he wrote but never uttered, would be banned from the platform.

After a deepfake video of Tom Cruise went viral, Rachel Tobac, CEO of online security company SocialProof, tweeted that we had reached a stage of nearly “undetectable deepfakes.”

“Deepfakes will erode public confidence, provide cover and plausible deniability for criminals/abusers captured on video or audio, and will (and are) used to manipulate, humiliate and hurt people,” Tobac wrote.

“If you’re building engineered/synthetic media detection technology, get to work.”
