Google has published new findings that raise questions about the reliability of modern AI chatbots, showing that even the most advanced systems often struggle to deliver factually correct information. Using its newly developed FACTS Benchmark Suite, the company found that leading AI models fail to surpass a 70% accuracy rate, even when presenting answers with high confidence. Gemini 3 Pro led the pack with a score of 69%, while other widely used models from OpenAI, Anthropic, and xAI scored lower, demonstrating that factual errors remain a persistent challenge in current AI technology.
The FACTS Benchmark was created to address a critical gap in how AI performance is evaluated. Traditional assessments often focus on task completion or fluency rather than factual correctness. This distinction is particularly important for sectors such as healthcare, finance, and law, where inaccuracies can have serious consequences. An AI chatbot might produce responses that sound authoritative, yet contain errors that mislead users who assume the output is fully reliable. Google emphasized that the benchmark tests models in ways that go beyond surface-level performance to evaluate the accuracy and trustworthiness of their outputs.
The FACTS Benchmark Suite evaluates AI performance across four key categories. Parametric knowledge tests whether a model can correctly answer questions using information learned during training. Search performance examines the model’s ability to retrieve accurate information using web tools. Grounding evaluates whether the chatbot can stay faithful to a given document without adding false details. Finally, multimodal understanding measures a model’s ability to interpret charts, diagrams, and images accurately. Google’s tests revealed that multimodal tasks remain the weakest area across all models, with accuracy frequently dropping below 50%, highlighting the risk of confidently presenting incorrect numerical data or misinterpreted visuals.
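Google has not released the FACTS grading pipeline in this form, but a minimal sketch can illustrate what scoring along these four categories might look like: each response is graded as factually supported or not, and a category's accuracy is simply the share of responses judged correct. Every name below (GradedItem, category_accuracy, the category labels) is a hypothetical illustration, not Google's actual code.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record of one graded benchmark item; the real FACTS
# grading pipeline is not public in this exact form.
@dataclass
class GradedItem:
    category: str      # e.g. "parametric", "search", "grounding", "multimodal"
    is_accurate: bool  # did a grader judge the response factually correct?

def category_accuracy(items: list[GradedItem]) -> dict[str, float]:
    """Per-category accuracy: items judged correct / total items graded."""
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for item in items:
        totals[item.category] += 1
        if item.is_accurate:
            correct[item.category] += 1
    return {cat: correct[cat] / totals[cat] for cat in totals}

# Illustrative usage with made-up grades:
grades = [
    GradedItem("grounding", True),
    GradedItem("grounding", False),
    GradedItem("multimodal", True),
    GradedItem("multimodal", False),
    GradedItem("multimodal", False),
]
print(category_accuracy(grades))  # {'grounding': 0.5, 'multimodal': 0.333...}
```

A real harness would put graders (human or model-based) in front of this aggregation and supply the retrieval tools, grounding documents, and images each category requires; the point here is only that "accuracy" in these results means the fraction of responses judged factually correct.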
The benchmark results illustrate notable performance differences between AI systems. Gemini 3 Pro achieved the highest overall score at 69%, followed by Gemini 2.5 Pro and OpenAI's GPT-5 at around 62%. xAI's Grok 4 reached approximately 54%, while Anthropic's Claude Opus 4.5 scored roughly 51%. Google noted that even the top-performing models make errors in roughly one out of every three responses, which follows directly from a 69% accuracy rate leaving about 31% of answers wrong. These findings underscore the importance of human oversight, especially in areas where factual accuracy is critical. Google clarified that the results do not diminish the value of AI chatbots, but they do highlight the ongoing need for safeguards, verification processes, and careful validation to ensure reliable use in professional or high-stakes environments.
As AI systems continue to evolve, the FACTS Benchmark provides a standardized way to evaluate not only whether models can complete tasks but also whether the information they provide is correct and trustworthy. For now, users are advised to treat AI-generated content with caution, particularly in fields where incorrect data could cause serious harm or carry financial and legal consequences. Google's findings make it clear that while AI chatbots are improving rapidly, human judgment remains a critical part of ensuring the accuracy and reliability of AI-assisted workflows.