TechSpot: “In context: It is a foregone conclusion that AI models can lack accuracy. Hallucinations and doubling down on wrong information have been an ongoing struggle for developers. Usage varies so much in individual use cases that it’s hard to nail down quantifiable percentages related to AI accuracy. A research team claims it now has those numbers.

The Tow Center for Digital Journalism recently studied eight AI search engines: ChatGPT Search, Perplexity, Perplexity Pro, Gemini, DeepSeek Search, Grok-2 Search, Grok-3 Search, and Copilot. They tested each for accuracy and recorded how frequently the tools refused to answer.

The researchers randomly chose 200 news articles from 20 news publishers (10 each). They ensured each story returned within the top three results in a Google search when using a quoted excerpt from the article. Then, they performed the same query within each AI search tool and graded accuracy based on whether the search correctly cited A) the article, B) the news organization, and C) the URL. The researchers then labeled each search based on degrees of accuracy, from “completely correct” to “completely incorrect.”

As you can see from the diagram below, other than both versions of Perplexity, the AIs did not perform well. Collectively, AI search engines are inaccurate 60 percent of the time. Furthermore, these wrong results were reinforced by the AI’s “confidence” in them…”
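The grading procedure described above can be sketched in code. This is a minimal, hypothetical reconstruction, not the Tow Center's actual rubric: the attribute names, the intermediate label, and the exact matching logic are assumptions. It simply checks a tool's response against ground truth on the three cited attributes (article, news organization, URL) and maps the match count to a coarse accuracy label.

```python
# Hypothetical sketch of the study's grading scheme. The label names
# beyond the two endpoints ("completely correct" / "completely incorrect")
# are assumptions, not the study's exact categories.
from dataclasses import dataclass


@dataclass
class Citation:
    article: str       # article title
    organization: str  # news organization
    url: str           # article URL


def grade(response: Citation, truth: Citation) -> str:
    """Count matching attributes and map to a coarse accuracy label."""
    matches = sum([
        response.article == truth.article,
        response.organization == truth.organization,
        response.url == truth.url,
    ])
    if matches == 3:
        return "completely correct"
    if matches == 0:
        return "completely incorrect"
    return "partially correct"  # assumed intermediate label


truth = Citation("Example headline", "Example News", "https://example.com/story")
print(grade(Citation("Example headline", "Example News",
                     "https://example.com/story"), truth))  # -> completely correct
print(grade(Citation("Wrong headline", "Other Org",
                     "https://other.org/x"), truth))        # -> completely incorrect
```

Aggregating these labels over the 200 queries per tool would then yield the kind of per-engine accuracy percentages the article summarizes.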