Potential risks of sharing medical records with a consumer AI platform

- OpenAI recently introduced “ChatGPT Health,” a specialized extension of ChatGPT designed to handle health-related conversations and enable users to link their medical records and wellness apps for more personalized insights. The company says this builds on its existing security framework.
- According to OpenAI, the new health feature includes "additional, layered protections" tailored to sensitive medical information, such as purpose-built encryption and data isolation designed to keep health data separate from users' other chatbot interactions (a hypothetical sketch of what such isolation can look like follows this list).
- The company also claims that data shared in ChatGPT Health won't be used to train its broader AI models, keeping medical information out of the core model's training dataset.
- OpenAI says millions of users already ask health and wellness questions on its platform, a fact it cites to justify a dedicated space where those interactions can be more contextualized and, it argues, safer.
- Privacy advocates, however, are raising serious concerns. They note that HIPAA, the U.S. law governing how healthcare providers and insurers safeguard patients' private health information, does not follow medical records once a patient uploads them to a consumer platform like ChatGPT Health.
- Experts like Sara Geoghegan from the Electronic Privacy Information Center warn that releasing sensitive health data into OpenAI’s systems removes legal privacy protections and exposes users to risk. Without a law like HIPAA applying to ChatGPT, the company’s own policies are the only thing standing between users and potential misuse.
- Critics also caution that OpenAI’s evolving business model, particularly if it expands into personalization or advertising, could create incentives to use health data in ways users don’t expect or fully understand.
- Key questions remain unanswered, such as how the company would respond to law enforcement requests for health data and how durable that data's isolation from other systems would prove if policies change.
- The feature's reliance on connected wellness apps and external partners also introduces additional points where sensitive information could be exposed in the event of a breach or a policy change.
- In summary, while OpenAI pitches ChatGPT Health as an innovation with enhanced safeguards, privacy advocates argue that without robust legal protections and clear transparency, sharing medical records with a consumer AI platform remains risky.
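To make the "encryption and data isolation" claim concrete, here is a minimal, hypothetical sketch of one common isolation technique: sealing each user's health records under a per-user key that other subsystems never receive. This is an illustration of the general approach only, not OpenAI's actual design (which has not been published), and every name below is assumed for the example. It uses the widely available Python `cryptography` library.

```python
# Hypothetical illustration of per-user data isolation via authenticated
# encryption. NOT OpenAI's actual design, which has not been published.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def new_user_data_key() -> bytes:
    """Generate a fresh 256-bit AES key dedicated to one user's health data."""
    return AESGCM.generate_key(bit_length=256)

def encrypt_record(data_key: bytes, record: bytes) -> bytes:
    """Seal one record under the user's own key (AES-256-GCM).

    The 12-byte nonce must be unique per encryption; prepending it to the
    ciphertext is a common storage convention.
    """
    nonce = os.urandom(12)
    return nonce + AESGCM(data_key).encrypt(nonce, record, None)

def decrypt_record(data_key: bytes, blob: bytes) -> bytes:
    """Open a sealed record; raises InvalidTag if the data was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(data_key).decrypt(nonce, ciphertext, None)

# Usage: health data stays unreadable to any subsystem (analytics, model
# training, ad targeting) that is never handed the user's data key.
key = new_user_data_key()
blob = encrypt_record(key, b"2026-01-10 blood pressure: 120/80")
assert decrypt_record(key, blob) == b"2026-01-10 blood pressure: 120/80"
```

The sketch also makes the advocates' point visible: isolation like this is only as strong as the internal policy that decides which systems are handed the keys, and policy, unlike a law such as HIPAA, can change.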
My Opinion
AI has immense potential to augment how people manage and understand their health, especially for non-urgent questions or preparing for doctor visits. But giving any tech company access to medical records without the backing of strong legal protections like HIPAA feels premature and potentially unsafe. Technical safeguards such as encryption and data isolation matter — but they don’t replace enforceable privacy laws that restrict how health data can be used, shared, or disclosed. In healthcare, trust and accountability are paramount, and without those, even well-intentioned tools can expose individuals to privacy risks or misuse of deeply personal information. Until regulatory frameworks evolve to explicitly protect AI-mediated health data, users should proceed with caution and understand the privacy trade-offs they’re making.
At DISC InfoSec, we help organizations navigate this landscape by aligning AI risk management, governance, security, and compliance into a single, practical roadmap. Whether you are experimenting with AI or deploying it at scale, we help you choose and operationalize the right frameworks to reduce risk and build trust. Learn more at DISC InfoSec.