
Lawsuit against Perplexity alleges fake news hallucinations


Perplexity did not respond to requests for comment.

In an emailed statement to WIRED, News Corp CEO Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, who understand that integrity and creativity are essential if we are to realize the potential of artificial intelligence,” the statement said. “Perplexity is not the only AI company abusing intellectual property, and it is not the only AI company we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

However, OpenAI faces its own accusations of brand dilution. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat attribute fabricated quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through brand dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times had called red wine (in moderation) a “heart-healthy” food, when in fact it had not; the Times maintains that its reporting has debunked health claims about moderate alcohol consumption.

“Copying news articles to operate commercial generative AI substitute products is illegal, as we made clear in our letters to Perplexity and in our litigation against Microsoft and OpenAI,” says NYT external communications director Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step in ensuring that publisher content is protected from this type of misappropriation.”

If the publishers prevail in arguing that hallucinations may violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” says Sag. In his view, the way language models operate, by predicting words that sound correct in response to prompts, is always a type of hallucination; sometimes the result just sounds more plausible than other times.

“We only call it a hallucination if it doesn’t match our reality, but the process is exactly the same whether we like the result or not.”
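Sag’s point can be illustrated with a deliberately simplified sketch. The toy bigram model below is a hypothetical example, not Perplexity’s or OpenAI’s actual architecture: it only learns which words tend to follow which, so the same sampling step can produce a sentence that matches a publisher’s real reporting or one that wrongly attributes a claim to it, with no way to tell the difference.

```python
# Toy illustration of next-word prediction: a model that only learns
# which word tends to follow which has no notion of truth. This is a
# hypothetical bigram sketch, not how any production LLM is built.
import random
from collections import defaultdict

corpus = (
    "the times reported that red wine is popular . "
    "the times reported that moderate drinking is risky . "
    "the post reported that red wine is heart-healthy ."
).split()

# Record which words follow which (a minimal "language model").
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# The same process can yield a sentence that happens to be accurate or
# one that misattributes the "heart-healthy" claim -- the model cannot tell.
print(generate("the times reported that"))
```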
