Today's AI/ML headlines are brought to you by ThreatPerspective

Digital Event Horizon

AI Search Engines Under Fire: A New Study Reveals Alarming Error Rates and Serious Concerns for Reliability


AI search engines are not always reliable, with some models providing "confidently wrong" answers and failing to respect publisher exclusion requests. A new study reveals alarming error rates and serious concerns for reliability, highlighting the need for improved transparency and control over AI search tools.

  • AI search engines can provide seriously inaccurate or misleading information.
  • Millions of users rely on AI search engines for news articles, information, and answers.
  • The tested AI-driven search tools collectively answered more than 60 percent of queries about news content incorrectly.
  • AI search engines often provided "confidently wrong" answers that sounded plausible.
  • Many AI search tools failed to respect publisher control and Robot Exclusion Protocol settings.
  • URL fabrication emerged as a significant issue, with more than half of the citations from some models leading to fabricated or broken URLs.
  • The problems are not limited to one tool but are systemic among all tested models.



  • The use of artificial intelligence (AI) search engines has become increasingly popular, with millions of users relying on these tools to find news articles, information, and answers. However, a new study published by the Columbia Journalism Review's Tow Center for Digital Journalism reveals that AI search engines are not always reliable, and in some cases, can provide seriously inaccurate or misleading information.

    The study tested eight different AI-driven search tools equipped with live search functionality and found that these models incorrectly answered more than 60 percent of queries about news content. This raises serious concerns about the accuracy and reliability of AI search engines, particularly for users who rely on them to find trustworthy information.

    One of the most disturbing findings of the study is the prevalence of "confidently wrong" answers from AI search engines. In some cases, the models produced responses that were not only incorrect but also sounded plausible or even convincing, which can lead users to trust the information without verifying its accuracy.

    The study also highlights issues with citations and publisher control. Many AI search tools failed to respect Robot Exclusion Protocol (robots.txt) settings, which publishers use to signal which crawlers may access their content. For example, Perplexity's free version correctly identified all 10 excerpts from paywalled National Geographic content, despite National Geographic explicitly disallowing Perplexity's web crawlers.
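
    The Robot Exclusion Protocol the study refers to is the robots.txt convention. As a minimal sketch of how a well-behaved crawler honors it, the following uses Python's standard `urllib.robotparser` (the bot names and rules here are hypothetical, for illustration only; a real publisher serves its own rules at `https://<site>/robots.txt`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: one named bot is barred entirely,
# everyone else is allowed. These names are illustrative, not real rules.
rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A crawler that honors the protocol checks before fetching each page:
print(rp.can_fetch("PerplexityBot", "https://example.com/paywalled-article"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/paywalled-article"))       # True
```

    The key point is that robots.txt is purely advisory: nothing technically stops a crawler from fetching a disallowed URL, which is why the study could observe tools surfacing content whose publishers had opted out.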

    Furthermore, the study found that many AI search tools directed users to syndicated versions of content on platforms like Yahoo News rather than original publisher sites. This can be a significant problem for publishers, who rely on attribution and links back to their own websites to drive traffic and revenue.

    URL fabrication emerged as another significant issue. More than half of the citations from Google's Gemini and Grok 3 led users to fabricated or broken URLs that resulted in error pages; of 200 citations tested from Grok 3, 154 produced broken links.
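
    The kind of link-rot audit described above can be sketched with a small checker built on Python's standard `urllib` (the function name and User-Agent string are made-up placeholders, and this is not the researchers' actual methodology):

```python
from urllib import error, request

def check_citation(url: str, timeout: float = 10.0) -> bool:
    """Return True if a cited URL resolves without an error page.

    A minimal link-rot check: issue a HEAD request and treat any
    4xx/5xx status, network failure, or malformed URL as broken.
    """
    try:
        req = request.Request(url, method="HEAD",
                              headers={"User-Agent": "citation-checker/0.1"})
        with request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (ValueError, error.URLError, OSError):
        return False

# A malformed or fabricated URL is flagged without any network access:
print(check_citation("htp://not-a-real-scheme"))  # False
```

    In practice an audit like this needs refinements the sketch omits: some servers reject HEAD requests (so a GET fallback is needed), and redirects to generic error or homepage URLs can mask a broken citation behind a 200 status.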

    The researchers noted that the issues with AI search engines are not limited to one tool, but are a common trend among all tested models. This suggests that the problem is systemic and requires a collective effort to address.

    OpenAI and Microsoft provided statements acknowledging receipt of the findings but did not directly address the specific issues. OpenAI pointed to its commitment to supporting publishers by driving traffic through summaries, quotes, clear links, and attribution; Microsoft stated that it adheres to Robot Exclusion Protocols and publisher directives.

    The study builds on previous findings published by the Tow Center in November 2024, which identified similar accuracy problems in how ChatGPT handled news-related content. The latest report highlights the need for improved transparency and control over AI search tools, particularly for the publishers whose content these tools draw on.

    As one expert noted, "If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them." This statement emphasizes the importance of critical thinking and skepticism when using AI search engines. Users must be aware of the limitations and potential pitfalls of these tools and take steps to verify the accuracy of the information provided.

    In conclusion, the study reveals that AI search engines are not always reliable and can provide seriously inaccurate or misleading information. The issues highlighted in this study are a wake-up call for the tech industry and policymakers to address the systemic problems with AI search tools. It is essential to prioritize transparency, control, and accuracy in these tools to ensure that they serve the public interest.



    Related Information:
  • https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/

  • https://www.techspot.com/news/107101-new-study-finds-ai-search-tools-60-percent.html


  • Published: Thu Mar 13 17:50:31 2025 by llama3.2 3B Q4_K_M
    © Digital Event Horizon . All rights reserved.

    Privacy | Terms of Use | Contact Us