Connecting the Dots: Can ChatGPT Health Fix Fragmented Care?
- Deborah Nas
- Jan 8
- 4 min read
Updated: Mar 2

OpenAI just announced ChatGPT Health. Although I’m usually quite critical of ChatGPT’s sycophantic behaviour and its impact on what people believe to be true, I see this as a promising move. Here are my thoughts:
First, let’s look at what ChatGPT Health is
- A dedicated, privacy‑hardened space inside ChatGPT for health-related questions. Health lives in a separate tab with its own history and memories, which are not used to train foundation models and are walled off from your regular chats.
- It lets you securely connect electronic medical records and wellness apps like Apple Health, MyFitnessPal and Peloton, so that explanations, trends and suggestions are grounded in your own data rather than generic search results.
- It is designed to help you understand lab results, prepare for doctor visits, make sense of device and wearable data, and compare things like insurance options.
Crucially, this feature is not available in Europe yet. Given the stringent regulatory landscape here, it will likely take some time before it arrives.
Why this is a promising development
Over 200 million people already ask ChatGPT health questions every week, often late at night, between appointments, or when they simply get stuck in the healthcare system. I’m hopeful that people will get more privacy protection and better output from a health-specific flavour of ChatGPT than they’re getting from the generic one today.
Healthcare is a fragmented system. Health information is scattered across portals and apps, doctor visits are short, there can be long gaps between appointments, and the documentation that is available is often jargon-filled. As a result, many people struggle to use their own health information effectively. An always‑available assistant can help people make sense of their own health at their own pace and show up better prepared for the scarce time they have with clinicians.
At the same time, this is not an unambiguous win
Connecting medical records to a general‑purpose AI system creates more opportunities for subtle errors, biased suggestions, over‑trust and misinterpretation of nuanced cases.
Even with stronger privacy guarantees, encryption and a separate health space, this concentrates highly sensitive data in one more place, raising questions about governance, access control and long‑term use. And no amount of fine‑tuning or benchmarking (including OpenAI’s HealthBench work with hundreds of physicians) will turn a language model into a licensed clinician.
Where this might be going
Because making sense of your health data is such a strong use case, fulfilling a real need and creating value, I would expect this to become a paid option within ChatGPT in the longer term, as a top-up on your regular subscription. This will take time, of course, as OpenAI first needs to collect data on how people are using it and fine-tune the product.
ChatGPT already launched group chats. Imagine inviting your GP and medical specialists to such a group chat, enabling your doctors to see the questions you asked and the information you got, and perhaps to ask questions themselves and comment on the information provided. From an information management and diagnostic perspective, this would be highly beneficial. It does have implications for the healthcare business model, though. Can a doctor register their time spent in a chat and get that reimbursed by the insurer? How much time are they allowed to spend in such a chat? How do you deal with patients who generate hundreds of chat messages each day? It would also make ChatGPT a medical product, meaning it would need appropriate licences in every country. That is probably not the route OpenAI will prioritise. But for patients and doctors alike, it would be the preferred route.
My takeaway
ChatGPT Health is a significant step toward AI as an everyday health companion. It fulfils a huge unmet need, as many people experience healthcare as a broken system. It can genuinely help people navigate complexity, ask better questions and bridge gaps in a strained system, but only if we pair the excitement with clear guardrails, critical thinking and robust oversight. As this rolls out from a small waitlist to broader availability, it will be crucial for patients, clinicians, regulators and builders to stay very intentional about how (and when) this new capability is actually used.
People are already using Google and ChatGPT to make sense of their health information, partly because the health system itself doesn't provide a better option. What worries me is that, so far, OpenAI has prioritised growth and user engagement over safety. Think of the cases of AI psychosis, suicide coaching and the plans to enable erotic role play (which, in my opinion, will be highly addictive).
Research Call
I'm currently doing research into the role of AI companions in our lives, and AI health assistants are one of the most beneficial use cases. If you have personal experience with using AI intensively to make sense of your health situation, whether positive or negative, please contact me. I would love to interview you to learn more (confidentiality guaranteed).


