Snapchat has jumped on the artificial intelligence (AI) bandwagon as it now rolls out an in-app version of ChatGPT.
Users can ask the chatbot, dubbed ‘My AI’, questions while messaging their friends to facilitate the conversation.
It can help them come up with dinner suggestions, write a personal poem for a loved one, or craft a flirty icebreaker.
My AI uses the same technology as OpenAI’s ChatGPT, but is specially trained to abide by the app’s safety guidelines.
Snapchat has also revealed that it is still “prone to hallucinations and can be tricked into saying just about anything.”
In AI, hallucinations are when the technology confidently responds to a question with incorrect information that it appears to have made up.
For example, Google’s rival chatbot Bard gave a wrong answer in a promotional video, wiping £100 billion off its parent company’s value.
The bot had been asked what to tell a nine-year-old about the James Webb Space Telescope and its discoveries.
In response, Bard confidently announced that Webb was the first telescope to take pictures of a planet outside Earth’s solar system.
However, astronomers were quick to point out that this was actually done in 2004 by the European Southern Observatory’s Very Large Telescope.
Indeed, ChatGPT also appears capable of insulting users, lying to them, and questioning its own capabilities in conversation.
A social media post showed it calling someone “a sociopath, a psychopath, a monster, a demon, a devil.”
While My AI is designed not to provide “biased, inaccurate, harmful, or misleading information,” Snapchat has admitted that “errors can occur.”
It’s currently only rolling out to Snapchat+ subscribers, who pay £3.99 a month for the latest app features.
A conversation with the AI – complete with Bitmoji – is pinned to the top of the Chat tab and can be opened in the same way as a chat with another user.

Snapchat has said that My AI is an experimental feature and that user feedback will help improve it in the future, if and when it is rolled out more widely.
The company added: “All conversations with My AI are saved and can be reviewed to improve the product experience.
“Please don’t share secrets with My AI or rely on it for advice.”
While the disclaimer may seem unnecessary, a woman reportedly divorced her husband on the strength of relationship advice ChatGPT gave her.

My AI is a modified version of ChatGPT that doesn’t provide comments that contain swearing, sexual references, or other inappropriate content.
This is especially important because Snapchat can be downloaded by children as young as 13 years old.
The company hopes the new feature will help “foster deeper connections between friends,” as well as become something that will draw “(its) community” to the app.
Speaking to The Verge, Snap CEO Evan Spiegel said: “The big idea is that we’re not just talking to our friends and family every day, but we’re going to talk to AI every day.
“This is something we as a messaging service are good at.”
Snapchat is far from the first company to capitalize on ChatGPT’s massive success, with greeting card vendor Moonpig considering integrating it into its online store.
Elon Musk is also rumored to be working on an “anti-woke” rival to the supposedly biased chatbot.