OpenAI has formally stepped into the healthcare space with the announcement of ChatGPT Health, a dedicated health-focused chatbot designed to discuss symptoms with users in a private, encrypted environment. The company says the service will integrate with Apple Health and will not use personal medical data for model training, positioning it as a more secure and responsible extension of its widely used conversational AI. The move comes at a time when millions of people already rely on general-purpose chatbots for informal medical guidance, highlighting a growing gap between public demand and regulated healthcare access. However, as consumer-facing medical AI gains momentum, concerns are intensifying around accuracy, accountability, and whether this approach addresses the real problems facing healthcare systems.
OpenAI is not introducing a new user behaviour but formalising one that already exists at scale. More than 230 million people reportedly consult ChatGPT about health-related concerns every week, often seeking reassurance or explanations long before speaking to a medical professional. By launching a dedicated health product, OpenAI is effectively assuming greater responsibility for how this information is interpreted and acted upon. That responsibility carries risk. Dr Sina Bari, a surgeon and AI specialist at iMerit, recently described treating a patient who arrived deeply alarmed by a chatbot-generated diagnosis. The AI had claimed that the patient’s medication carried a 45 percent risk of pulmonary embolism. After investigation, Dr Bari found that the figure had been hallucinated from a narrow research paper focused on tuberculosis patients, a context with no relevance to the case at hand. Despite this, the patient trusted the AI’s output more than clinical judgement. Independent research supports these concerns: Vectara’s Factual Consistency Evaluation Model indicates that OpenAI’s GPT-5 hallucinates at a higher rate than competing models from Google and Anthropic. Alongside accuracy issues, regulatory experts such as Itai Schwartz, co-founder of MIND, warn that medical data may be flowing from HIPAA-compliant institutions into ecosystems that are not fully aligned with healthcare compliance standards, raising serious data governance and security questions.
While OpenAI targets consumers, other AI companies are taking a different path by focusing on healthcare professionals and institutional workflows. The core challenge for modern healthcare is not a lack of patient curiosity but systemic overload. Administrative duties account for roughly half of a primary care physician’s working hours, contributing to appointment delays that can stretch from three to six months. This bottleneck has direct consequences for patient outcomes and satisfaction. Anthropic has chosen to address this problem with Claude for Healthcare, a platform designed to assist providers rather than replace them. The tool focuses on tasks such as prior authorisation requests and documentation, areas that drain time without adding clinical value. According to Anthropic CPO Mike Krieger, the system can reduce case handling time by 20 to 30 minutes, allowing doctors to redirect attention toward patient care. In a similar vein, Stanford Health Care is developing ChatEHR, an AI system embedded directly within electronic health record platforms. Early users such as Dr Sneha Jain report that it lets clinicians query patient histories instantly, eliminating the need to manually search fragmented records and freeing time for diagnosis and treatment.
The divide between these approaches reflects a broader tension in health technology. Clinicians such as Dr Bari point out that medical professionals are driven by patient safety, while technology companies must also answer to commercial pressures. Stanford’s Dr Nigam Shah argues that patient reliance on AI is often rooted in desperation, as individuals seek immediate answers when access to human care is delayed. In that context, conversational AI becomes a coping mechanism rather than a solution. Providing uncertain or hallucinated medical advice may offer temporary reassurance, but it does not resolve the structural issues that push patients toward machines in the first place. The more sustainable path for healthcare AI lies in strengthening the systems behind the scenes, reducing administrative burden, and enabling doctors to spend more time with patients, rather than positioning chatbots as substitutes for professional medical judgement.