ChatGPT Health Promises Safety and Clarity – But at What Price to Consumer Privacy?
Guest Editorial by Stefanie Schappert, Senior Journalist, Cybernews
AI health assistants are here to stay, and they may provide real value in helping people interpret complicated medical information. But consumers should understand exactly what that means before inviting these tools into their most sensitive digital lives. What are the data risks consumers need to know before plunging headfirst into this new era of healthcare?
ChatGPT Health: Insight vs Exposure in AI-Driven Healthcare

Health data is already among the most sensitive personal information people have. With the introduction of ChatGPT Health last week, users will undoubtedly pour their medical data into the AI chatbot with the same verve they have shown since ChatGPT first launched in November 2022.
But should they?
The amount of sensitive information users freely and regularly post into ChatGPT (and other popular AI chatbots) is astounding.
A study last January found that nearly one in ten workers regularly exposed their own companies' sensitive data when using AI.
And when thousands of ChatGPT conversations were leaked via search engines last August, the takeaway was clear: people share just about everything with AI.
So when OpenAI introduced ChatGPT Health to the public, tech and health experts began sounding alarm bells about privacy and security issues, as well as the limits of AI's accuracy.
This makes it crucial to understand where information is going and how it’s being used, especially when the data in question includes deeply sensitive details such as medical history or chronic conditions.
"Designed to Support, Not Replace, Medical Care"
OpenAI touts ChatGPT Health as a “dedicated experience” intended to help people understand lab results, prepare for doctor visits, track fitness and wellness trends, or compare insurance options, marking a significant shift in how consumers interact with AI.
“Health is already one of the most common ways people use ChatGPT,” OpenAI said in the announcement, noting that 230 million people worldwide ask the bot health and wellness questions every week.
Users can now upload medical records and connect Health to wellness apps – such as Apple Health, Function, and MyFitnessPal – creating a complete individual health profile, the likes of which we have never seen before.
Traditionally, health data has been scattered across many devices and platforms – a hospital portal here, a fitness tracker there, a PDF of bloodwork in your inbox.
But now, health data will be woven together into new AI-generated interpretations and summaries, all stored within a single system.
Health will not just store medical records; it will aggregate and interpret them, creating narratives, patterns, and insights – a fundamental departure from how most people think about their medical data.
This matters because the value of health data isn’t just in its raw form; it’s what can be inferred and contextualized from it.
Derived insights, health trends over time, connections between symptoms and test results, and personalized explanations can prove more revealing than the “data points” themselves.
People may also consent to sharing individual data points, for example, a symptom or lab result, without understanding the new meaning that emerges once those data points are combined.
AI algorithms built on aggregated data have already shown that, in the wrong hands, such profiles can fuel algorithmic bias and workplace or societal discrimination, affecting everything from individual treatment plans to health insurance premiums.
Understanding the Privacy Tradeoffs
On the technical side, OpenAI says ChatGPT Health builds on its existing security architecture with additional, layered protections, including purpose-built encryption and isolation to keep health conversations protected and compartmentalized.
Users can also enable multi-factor authentication, review or delete Health memories, and revoke access to connected apps at any time, according to OpenAI.
With layered, end-to-end encryption, health conversations are isolated and not used to train models, the company further states.
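OpenAI has not published the implementation details behind those claims, but one common way to achieve that kind of compartmentalization is envelope encryption, where each conversation is sealed under its own data key and that key is in turn wrapped by a separately held master key. The sketch below is purely illustrative, written in Python with the cryptography library; the function names, the per-conversation key scheme, and the in-memory master key are all assumptions, not OpenAI's actual design.

```python
# Illustrative sketch of per-conversation envelope encryption, a common way
# to "compartmentalize" sensitive records. Hypothetical design only: OpenAI
# has not disclosed how ChatGPT Health actually encrypts health data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In a real system this key would live in a KMS or HSM, never in memory.
MASTER_KEY = AESGCM.generate_key(bit_length=256)

def encrypt_health_message(conversation_id: str, plaintext: bytes) -> dict:
    """Encrypt one message under a fresh data key, then wrap that key."""
    data_key = AESGCM.generate_key(bit_length=256)  # unique key per conversation
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(
        nonce, plaintext, conversation_id.encode()  # bind ciphertext to its conversation
    )
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(MASTER_KEY).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_health_message(conversation_id: str, record: dict) -> bytes:
    """Unwrap the data key, then decrypt; a wrong conversation ID fails the check."""
    data_key = AESGCM(MASTER_KEY).decrypt(
        record["wrap_nonce"], record["wrapped_key"], None
    )
    return AESGCM(data_key).decrypt(
        record["nonce"], record["ciphertext"], conversation_id.encode()
    )
```

The practical upside of a scheme like this is granularity: deleting or revoking one wrapped key renders that conversation unreadable without touching anything else. The practical caveat, as the breach discussion below suggests, is that the master key and the aggregated store behind it remain a single, high-value point of failure.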
Still, privacy critics have pointed out that when users upload medical records into an AI service – even one with promises of encryption and compartmentalization – they may effectively remove traditional privacy protections that would otherwise apply in regulated healthcare settings.
One expert recently told The Record that giving an AI access to electronic medical records can strip those records of the legal safeguards they enjoy under rules like HIPAA, which lays out how Protected Health Information (PHI) is processed, stored, transmitted, and secured.
“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time,” explained Sara Geoghegan, senior counsel at the Electronic Privacy Information Center.
Because health data remains among the most valuable targets for hackers, any system that aggregates medical records, wellness data, and AI-generated health insights – especially on a single platform – can significantly increase the amount of data exposed in the event of a breach.
From a cybersecurity perspective, aggregation also concentrates value, making AI health platforms especially attractive targets for attackers seeking high-impact data rather than isolated records.
One thing is certain: the tradeoff between insight and exposure is no longer theoretical – it is the defining question of AI-driven healthcare.
Stefanie Schappert, a senior journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity who has been immersed in the security world since 2019. She has more than a decade of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took third place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and of Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.
