The promise—and risks—of using AI for your health
Medical AI experts explain best practice. Plus, AI prompts you can use for your healthcare.
How to Use AI for Your Medical Care, Long Covid MD Podcast, Episode 70
Millions Use AI for Health Questions
More than 40 million people ask ChatGPT health questions every day. That number tells us something important.
Patients—especially those with complex illnesses like long COVID—are looking for answers they’re not getting in the healthcare system. When you have many symptoms, many doctors, and still no clear explanations, it makes sense to turn to AI for help.
But what are you actually using when you ask AI about your health? And can you trust it?
To answer that, I spoke with two physician experts:
Dr. Leeda Rashid is a family medicine doctor who worked for years as a Senior Physician at the FDA’s Digital Health Center of Excellence, the part of the agency that defines standards for and approves medical devices that use AI.
Dr. Jennifer Curtin is a physician with ME/CFS and founder of RTHM Health, a telehealth platform serving patients with complex illnesses like long COVID. Her team recently launched RTHM Intelligence, an AI tool specifically designed for our patient population.
Together, they provide fascinating insight into the potential—and current limits—of AI. Their message is not that AI is bad, but that we need to use it correctly.
Here’s how.
AI is already used in medicine—in ways you may not know
Most people’s experience with AI in medicine is a voice scribe that helps take notes during an appointment.
But there’s a lot more AI does in the hospital. Dr. Rashid explains that, in addition to clinical workflow tools like note-taking, AI is used for things like:
Predictive analytics (risk stratification for disease)
Diagnostics (e.g., identifying structural heart disease from EKG data)
Medical imaging (detecting lesions or pneumonia on scans)
Drug discovery (accelerating molecular research)
In some cases, these systems outperform clinicians in narrow tasks. In one example she describes, an AI model analyzing EKGs achieved 78% accuracy compared to 64% for cardiologists.
But here is the key difference. AI is used in medicine in very specific ways. It is built for one task at a time and tested carefully before doctors use it.
“If it diagnoses or treats… the FDA generally considers it a medical device.” — Dr. Leeda Rashid
Being classified as a medical device means it must be tested for:
Accuracy
Safety
Performance in different patients
What happens if it fails
The AI tools you use at home do not go through this process.
Why ChatGPT is not a medical tool
Large language models (LLMs) like ChatGPT are not clinical tools. They are language prediction systems.
“What these models do… is predict the next best word or phrase.” — Dr. Leeda Rashid
That’s really important to understand. Unlike FDA-approved medical devices, LLMs:
Are trained on broad, uncurated data (including outdated or incorrect research)
Reason inside a “black box”: they don’t show how they arrive at their conclusions
Are non-deterministic: ask the same question multiple times and you may get different answers
Unlike physicians, LLMs do not have to show their reasoning when you ask them to reach a conclusion. As Dr. Rashid says,
“We really don’t know how that model is thinking… so it’s hard to interrogate it for accuracy.”
The real risk: being led in the wrong direction
This black box leads to one of the biggest problems with AI: even when it’s wrong, it can sound very confident.
Dr. Curtin shared an example. When she asked a consumer AI about ME/CFS, it suggested graded exercise therapy—a treatment that can actually harm patients with post-exertional malaise.
“That actually scared me… because that’s not what is supposed to be done…[Graded exercise] was removed from guidelines years ago.” — Dr. Jennifer Curtin
Why does this happen? Because AI pulls from everything it’s been trained on, including outdated or incorrect research—even social media posts. It doesn’t always know what is current or safe.
A different approach: AI built for long COVID
To address these problems, Dr. Curtin and her team built RTHM Intelligence, an AI tool designed for complex illness.
Here’s how it works differently from general AI tools:
Uses curated medical research
Includes prompt engineering designed for people with complex illness: these are safety rules that act as guardrails for medical advice
Organizes patient data like labs and records
Runs in a HIPAA-compliant system
“You can’t remove what’s in its training set… but you can guide what it references…[It’s] very much geared towards this patient population.”— Dr. Jennifer Curtin
Even so, it is not meant to replace a doctor.
“It is not a doctor… it can only give suggestions.” — Dr. Jennifer Curtin
What AI is actually good for
When used the right way, AI can be very helpful. It works best as a support tool, not a decision-maker.
It can help you:
Organize your medical information
Understand complex terms
Spot patterns or connections
Prepare questions for your doctor
Learn about possible conditions or tests
Dr. Curtin puts it simply:
“Think of it as a study guide… not the teacher.”
If you’re using AI for your health, how you ask matters. I’ve put together a set of carefully designed prompts you can use to organize your symptoms, understand conditions, and prepare for medical visits. I also explain the thinking behind each one, so you can adjust them to your own situation.
Available here for paid subscribers.
Privacy: the underestimated risk
If accuracy is one concern, privacy is another. Consumer AI tools are not governed by healthcare privacy laws.
Upload a lab report into a general AI system and you may be sharing your name, date of birth, provider information, and test results.
Together, these can easily identify an individual.
“You actually don’t need that many pieces of information to identify a person.” — Dr. Jennifer Curtin
HIPAA-compliant systems like RTHM, by contrast, require:
Encrypted data storage and transfer
Restricted access controls
Business associate agreements
Prohibition on using patient data for model training
RTHM is obligated to follow HIPAA because, as a medical provider, it is considered a covered entity. Even so, Dr. Curtin notes that AI regulation is still evolving and protections are not absolute. Covered entities like RTHM take their privacy systems seriously, but we still have to trust that the tech companies they build on are holding up their end of the deal, and right now that is difficult to verify.
(And yes, Big Tech is still involved. RTHM Intelligence is built using tools from multiple AI companies.)
AI is changing the doctor-patient relationship
AI is also changing how patients and doctors interact.
Patients now come to visits with AI-generated ideas or requests. When doctors disagree, it can lead to frustration on both sides.
This creates tension:
Patients may trust AI over their physician
Doctors may dismiss patient concerns
Communication can break down
I don’t think healthcare professionals are yet ready for AI-powered patients, and it’s going to require some major changes in medical training to catch up.
Dr. Zed Zha and I spoke about the need for change in medical culture, especially for people with long COVID. Listen to our conversation or read the post to learn more.
Some rules and recs
If you’re going to use AI for your health, the most important thing is how you use it. After speaking with Drs. Rashid and Curtin, here are my key takeaways:
1. Understand what AI actually is
AI tools like ChatGPT are not medical experts. They are language models.
They are designed to organize and predict information—not to make reliable medical decisions.
They can sound confident, even when they are wrong.
2. Protect your privacy
Be very careful about what personal information you share.
Most consumer AI tools are not built to protect your medical data the way healthcare systems are. Even small details can identify you.
If you choose to use AI, limit what you share—or use a platform designed to meet medical privacy standards. Right now, RTHM Intelligence is free to use and is HIPAA compliant.
3. Use AI as a tool—not an authority
AI works best as a thinking partner, not a decision-maker.
Use it to:
Organize your information
Learn about your condition
Prepare for appointments
Do not use it to diagnose yourself or make treatment decisions.
Remember to access your healthcare-related AI prompts here.
4. Double-check what it tells you
AI can make mistakes—and it won’t warn you when it does.
Always verify important information with a trusted clinician, especially before acting on it.
5. Recognize why you’re using it
AI is filling a gap in healthcare.
Patients are using it because they need help understanding their illness and navigating the system.
There’s nothing wrong with wanting that clarity, but AI can only support your care, not replace it.
AI is popular for a reason
The rise of AI in healthcare tells us there’s a glaring need for medical support. Until that gap is fixed, AI will continue to play a role, not just as a tool, but as a signal of where healthcare is falling short.
—Dr. Zeest Khan
Do you use an LLM? If so, which one and how?



