Google has removed AI-generated summaries from specific health-related search queries following an investigation by The Guardian that highlighted potential risks of misleading medical information. The decision came after concerns were raised over AI Overviews providing numerical ranges for liver blood tests without accounting for variables such as age, sex, ethnicity, or nationality. This omission could mislead users into believing their results were normal when they were not, demonstrating the limitations of relying solely on AI for medical guidance.
After the investigation, searches for phrases like “what is the normal range for liver blood tests” and “what is the normal range for liver function tests” no longer returned AI Overviews. Despite this, similar queries such as “lft reference range” and “lft test reference range” still generated AI summaries, indicating that while certain results were removed, broader challenges with AI-generated health content remain. Google continues to offer users the option to rerun searches in AI Mode, and in some instances, top search results now link directly to The Guardian’s reporting on the removals.
Google has said it does not comment on individual search changes but has emphasized ongoing efforts to improve its systems more broadly. A spokesperson said that an internal team of clinicians reviewed the examples cited and found that in many cases the information was accurate and supported by reputable websites. Previous initiatives from Google have focused on enhancing AI for health-related searches, including updates to Search overviews and the development of health-focused AI models aimed at providing more reliable guidance.
Health organizations have welcomed the removal of these AI Overviews but remain cautious about the broader implications. Vanessa Hebditch, director of communications and policy at the British Liver Trust, described the changes as positive but said she remained concerned about AI-generated medical information. She argued that disabling specific results does not address the overall risks posed by AI in healthcare, pointing to the need for careful oversight and ongoing evaluation of digital health tools to ensure accuracy and safety. The incident underscores the challenges search engines face when integrating AI into critical domains such as healthcare, where incomplete or context-free information can have significant consequences.