AI Health Chatbots Risk Dangerous Medical Misinformation
Experts caution against giving health info to chatbots as Google updates MedGemma
Why should you think twice before typing your latest blood‑test numbers into a chat window? A growing chorus of clinicians and privacy advocates warns that many generative‑AI assistants still treat medical data like any other user input, offering no guarantee that the information stays confidential or that the advice is clinically sound. Yet the hype around AI‑driven health tools keeps rising, and some of the biggest names in the field appear to be steering the conversation in surprising directions.
While OpenAI’s platforms have been criticized for prompting users to upload lab results and even full medical records, other providers have stayed relatively quiet about their own medical‑focused models. The silence is striking, especially given the recent rollout of a new version of a developer‑oriented health AI from a major search‑engine company. Against that backdrop, the following passage is worth a closer look:
(Notably absent is Google, whose Gemini chatbot is one of the world's most capable and widely used AI tools, though the company did announce an update to its MedGemma medical AI model for developers.) OpenAI, by contrast, actively encourages users to share sensitive information with ChatGPT Health in exchange for deeper insights: medical records, lab results, and health and wellness data from apps like Apple Health, Peloton, Weight Watchers, and MyFitnessPal. The company explicitly states that users' health data will be kept confidential, won't be used to train AI models, and is covered by steps taken to keep it secure and private. ChatGPT Health conversations will also be held in a separate part of the app, OpenAI says, with users able to view or delete Health "memories" at any time. Those assurances have not been helped by the company launching an identical-sounding product, with tighter security protocols, at almost the same time as ChatGPT Health.
Is it wise to hand your medical history to a chatbot? The numbers say hundreds of millions are already doing it: roughly 230 million people ask ChatGPT about health and wellbeing every week. OpenAI encourages the habit, inviting users to share diagnoses, prescriptions, and even lab results while framing the bot as a helpful ally.
Yet the technology is not a substitute for professional care, and the risks of data exposure remain. Google’s Gemini, while widely used, is conspicuously missing from the discussion, even as the firm rolled out an update to its MedGemma model for developers. That omission raises questions about how the industry will handle privacy safeguards.
Developers now have a more capable medical AI, but whether they will enforce strict confidentiality is unclear. Users should weigh the convenience of instant answers against the possibility of inaccurate guidance and unsecured personal information. In short, the promise of AI‑driven health support is evident, but the safety of entrusting sensitive records to a conversational interface has yet to be demonstrated.
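For a sense of what that developer access looks like, here is a minimal sketch of querying a MedGemma checkpoint locally through Hugging Face's transformers library, so that prompts never leave the developer's own hardware. The model ID, chat format, and generation settings below are assumptions based on the previously published MedGemma checkpoints, not confirmed details of the updated release.

```python
# Minimal sketch: run a MedGemma checkpoint locally so health queries never
# leave the machine. Assumes the previously published text-only checkpoint
# "google/medgemma-27b-text-it" (gated: you must accept Google's license on
# Hugging Face) and a GPU with enough memory for bfloat16 weights.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a careful medical assistant, not a substitute for a clinician."},
    {"role": "user", "content": "What do mildly elevated ALT and AST values commonly indicate?"},
]

output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```

Confidentiality, meanwhile, is something a developer has to build rather than assume. One obvious floor, sketched below with hypothetical patterns, is scrubbing identifiers from user text before any request reaches a hosted model; real de-identification standards such as HIPAA's Safe Harbor rule cover far more fields than these three regexes.

```python
# Illustrative only: redact a few obvious identifiers before text is sent
# to any third-party model. Real de-identification covers names, addresses,
# record numbers, and more -- these patterns are placeholders, not a standard.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("DOB 03/14/1985, reach me at jane@example.com or 555-867-5309."))
# -> DOB [DATE], reach me at [EMAIL] or [PHONE].
```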
Further Reading
- MedGemma: Our most capable open models for health AI development - Google Research Blog
- Gemini 3 in Healthcare: An Analysis of Its Capabilities - IntuitionLabs
- MedGemma 1.5 model card | Health AI Developer Foundations - Google Developers
- MedGemma - Google DeepMind
Common Questions Answered
What is ChatGPT Health and what types of medical data can users upload?
[time.com](https://time.com/7344997/chatgpt-health-medical-records-privacy-open-ai/) reports that ChatGPT Health allows users to upload medical records including lab results, visit summaries, and clinical history. Users can also connect apps like Apple Health, Function, MyFitnessPal, and Weight Watchers to provide additional health data such as steps, sleep duration, blood test markers, and nutrition information.
What privacy concerns have experts raised about ChatGPT Health?
[bbc.com](https://www.bbc.com/news/articles/cpqy29d0yjgo) highlights that privacy advocates are worried about the sensitive nature of health data being shared with AI tools. Andrew Crawford from the Center for Democracy and Technology emphasized the crucial need for 'airtight' safeguards around users' health information, especially as AI companies explore new business models like potential advertising.
How many people are currently using ChatGPT for health-related questions?
[time.com](https://time.com/7344997/chatgpt-health-medical-records-privacy-open-ai/) indicates that over 40 million people ask ChatGPT health care-related questions every day, which represents more than 5% of all global messages on the platform. [bbc.com](https://www.bbc.com/news/articles/cpqy29d0yjgo) further notes that approximately 230 million people ask the chatbot questions about health and wellbeing every week.