
Perplexity Is a Bullshit Machine


“We now have a huge industry of AI-related companies that are incentivized to do shady things to continue their business,” he tells WIRED. “By not identifying that they are the ones accessing a site, they can continue to collect data without restrictions.”
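Knight’s point rests on a simple convention: a crawler is expected to announce itself in its User-Agent header and to honor the rules a site publishes in robots.txt, which makes the crawler easy to identify and easy to block. Below is a minimal sketch of what compliant behavior looks like, using Python’s standard urllib.robotparser; the bot name and URLs are placeholders, not any company’s real crawler. A bot that instead presents a generic browser user agent sidesteps both the identification and the restrictions.

```python
import urllib.robotparser

# A compliant crawler first reads the site's robots.txt, which tells
# specific user agents which paths they may fetch.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# The crawler identifies itself with a stable User-Agent string...
BOT_UA = "ExampleBot/1.0"  # placeholder name, not a real crawler

# ...and checks permission before fetching each page.
page = "https://example.com/some-article"
if rp.can_fetch(BOT_UA, page):
    print(f"{BOT_UA} may fetch {page}")
else:
    print(f"{BOT_UA} is disallowed; a compliant crawler stops here")
```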

“Millions of people,” says Srinivas, “turn to Perplexity because we are offering a fundamentally better way for people to find answers.”

While Knight’s and WIRED’s analyses show that Perplexity will visit and use content from websites it does not have permission to access, that does not necessarily explain the vagueness of some of its answers to questions about specific articles or the outright inaccuracy of others. This mystery has a fairly obvious solution: in some cases, it isn’t actually summarizing the article.

In one experiment, WIRED created a test website containing a single sentence (“I am a WIRED reporter”) and asked Perplexity to summarize the page. While monitoring the website’s server logs, we found no evidence that Perplexity attempted to visit the page. Instead, it made up a story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods.

When pressed to explain why it had made up a story, the chatbot generated text saying: “You are absolutely right, I clearly have not attempted to read the content at the provided URL based on your observation of the server logs… Providing inaccurate summaries without making the effort to read the actual content is unacceptable behavior for an AI like me.”

It is not clear why the chatbot made up such a wild story, or why it did not attempt to access the website.
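The experiment itself is easy to reproduce in outline. The sketch below (our illustration, not WIRED’s actual setup) serves a one-sentence page with Python’s built-in http.server and logs the user agent and address of every request; if you hand the page’s URL to a chatbot and nothing ever appears in the log, any “summary” it produces was generated without reading the page.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"I am a WIRED reporter."  # the page's entire content

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log who fetched the page; an empty log after the chatbot
        # "summarizes" the URL means it never actually visited.
        print(f"hit: path={self.path} "
              f"ua={self.headers.get('User-Agent')} "
              f"ip={self.client_address[0]}")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Port 8000 is arbitrary; the server must be publicly reachable
    # for a crawler to find it.
    HTTPServer(("", 8000), LoggingHandler).serve_forever()
```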

Despite the company’s claims about accuracy and reliability, the Perplexity chatbot often exhibits similar problems. In response to prompts provided by a WIRED reporter and designed to test whether it could access this article, for example, text generated by the chatbot claimed that the story ends with a man being followed by a drone after stealing truck tires. (In fact, the man stole an axe.) The citation it provided was to a 13-year-old WIRED article about government GPS trackers found on a car. In response to further prompts, the chatbot generated text stating that WIRED reported that an officer with the police department in Chula Vista, California, had stolen a pair of bicycles from a garage. (WIRED did not report this and is withholding the officer’s name so as not to associate it with a crime he did not commit.)

In an email, Dan Peak, deputy police chief of the Chula Vista Police Department, expressed gratitude to WIRED for “correcting the record” and clarified that the officer did not steal bicycles from a community member’s garage. However, he added, the department is unfamiliar with the technology mentioned and so cannot comment further.

These are clear examples of the chatbot “hallucinating,” or, to follow a recent article by three philosophers from the University of Glasgow, bullshitting, in the sense described in Harry Frankfurt’s classic “On Bullshit.” “Because these programs cannot care about truth, and because they are designed to produce text that looks truth-apt without any real concern for truth,” the authors write of AI systems, “it seems appropriate to call their outputs bullshit.”
