ChatGPT wants your medical records for its Health AI feature to work for you


OpenAI rolled out ChatGPT Health on January 7, 2026, a feature that urges users to hand over their medical records and data from wellness apps to receive customized responses on health matters.


The San Francisco-based company positioned this as a tool to empower individuals, yet the requirement for intimate health details raises serious questions about whether the benefits outweigh the vulnerabilities in data handling.

Users must connect through partners like b.well for medical records in the United States, or link apps such as Apple Health on iOS devices, MyFitnessPal for nutrition tracking, and Function for lab insights, all to inform the AI's replies.

OpenAI claims over 230 million people worldwide query ChatGPT weekly on health topics, and this integration aims to address fragmented information from portals, wearables, and notes by grounding answers in personal contexts.

The rollout began with a limited group of users outside the European Economic Area, Switzerland, and the United Kingdom, with plans to broaden access soon via web and iOS platforms for ChatGPT Free, Go, Plus, and Pro subscribers.

Medical integrations remain restricted to the U.S., where b.well facilitates connections to healthcare providers.

OpenAI developed the feature alongside more than 260 physicians from 60 countries, who contributed feedback on over 600,000 model outputs to refine safety and clarity.

The company insists ChatGPT Health exists solely to assist with everyday queries, like deciphering lab results or planning workouts, and it explicitly avoids diagnosing conditions or prescribing treatments.

"Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment," OpenAI stated in its announcement.

Fidji Simo, OpenAI's CEO of Applications, described the move as a progression:

“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life.”

Despite these assurances, the push for users to upload sensitive records reads as a calculated expansion into healthcare. AI firms like OpenAI could amass vast troves of personal data under the guise of convenience, potentially prioritizing profit over user security in an industry already plagued by breaches.

OpenAI touts enhanced safeguards, including a segregated space within ChatGPT where health chats, files, and memories stay isolated from other conversations, with purpose-built encryption and no use of these interactions to train its models.

Users can delete connections anytime through settings, and multi-factor authentication adds another layer.

Yet partnering with third parties like b.well, even ones that meet high security standards, introduces points of failure that could expose medical histories to unintended eyes.

OpenAI's prior forays into health, such as the August 2025 release of GPT-5 and the May 2025 HealthBench evaluation framework, signal a deliberate strategy to dominate the sector. But the rapid deployment of ChatGPT Health suggests a recklessness that undervalues the irreversible harm mishandled health data can cause.

Sam Altman, OpenAI's CEO, highlighted healthcare as the domain with the most dramatic AI advancements, a view that now manifests in this feature but glosses over ethical pitfalls.

By demanding access to clinical histories, visit summaries, and fitness patterns, OpenAI does more than personalize advice on insurance options or appointment prep. It also positions itself as an unchecked intermediary in private health decisions, a role that demands far greater scrutiny than the company has invited.