“Sorry in advance!” Snapchat warns of hallucinations with new AI conversation bot

A colorful and wild rendition of the Snapchat logo.
Benj Edwards / Snap, Inc.

On Monday, Snapchat announced an experimental AI-powered conversational chatbot called “My AI,” powered by ChatGPT-style technology from OpenAI. My AI will be available to Snapchat+ subscribers, who pay $3.99 a month, and is rolling out “this week,” according to a news post from Snap, Inc.

Users will be able to personalize the AI bot by giving it a custom name. Conversations with the AI model will take place in a similar interface to a regular chat with a human. “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” Snap CEO Evan Spiegel told The Verge.

But like its GPT-powered cousins, ChatGPT and Bing Chat, Snap says that My AI is prone to “hallucinations,” which are unexpected falsehoods generated by an AI model. On this point, Snap includes a rather lengthy disclaimer in its My AI announcement post:

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.”

Among machine-learning researchers, “hallucination” is a term of art that describes an AI model making inaccurate inferences about a subject or situation that isn’t covered in its training dataset. It’s a well-known drawback of current large language models such as ChatGPT, which can easily make up convincing-sounding falsehoods, such as academic papers that don’t exist and inaccurate biographies.

Critics have seized on this kind of dissonance to argue that perhaps these chatbots are not ready for widespread use, especially when presented as a reference.

While people have made something of a game of trying to circumvent ChatGPT’s and Bing Chat’s safeguards, Snap has reportedly trained its version of the GPT model not to discuss sex, swearing, violence, or political opinions. Those restrictions may be especially necessary to avoid the unhinged behavior we saw with Bing Chat a few weeks ago.

And doubly so, because “My AI” may have something very powerful running under the hood: OpenAI’s next-generation large language model. According to The Verge, Snap is utilizing a new OpenAI enterprise plan called “Foundry” that OpenAI quietly rolled out earlier this month. It gives companies dedicated cloud access to OpenAI’s GPT-3.5 and “DV” models. Several AI experts have speculated that “DV” may be equivalent to GPT-4, the rumored high-powered follow-up to GPT-3.

In other words, the “hallucinations” Snap mentioned in its news release may come faster and be more detailed than ChatGPT’s. And considering the highly convincing nature of other GPT models, people may just believe them, despite the warnings. It’s something to keep an eye on in the months ahead as new GPT-powered commercial services come online.
