A practical guide to using ChatGPT, Claude, Gemini, and similar tools during medical treatment — to translate bloodwork, draft hard family conversations, prep questions for appointments, and keep your schedule from collapsing. With explicit caveats about what they can’t do, and the patient-safety rules that matter.
AI assistants — ChatGPT, Claude, Gemini — are useful during medical treatment for translation tasks (decoding bloodwork, glossing medical terms, summarizing research), administrative tasks (scheduling, drafting emails, prepping appointment questions), and emotional logistics (drafting hard family conversations, saying no to commitments). They are NOT a substitute for your care team, do not have access to your medical records unless you paste them in, can hallucinate medical claims, and should never be the source of a treatment decision. Below: the four uses that work, the rules that matter, and the prompts that get the most useful answers.
Why this article exists
Most “AI for healthcare” articles are written by tech companies. This one is written for the patient — someone with five doctor’s appointments next month, a stack of paperwork, and a partner who keeps asking “what’s the difference between Stage 2 and Stage 3 again?” The use cases below are the ones that hold up across customer feedback and patient-community discussions.
For the boundaries, this article relies on the FDA’s published guidance on AI as a medical device, the AMA’s principles for augmented intelligence in medicine, and the World Health Organization’s ethics guidance on AI in health. None of these frameworks approves general AI assistants as diagnostic tools.
Four uses that genuinely work
Decoding bloodwork, pathology reports, and discharge papers
Paste the results into ChatGPT and ask: “Translate this bloodwork into plain English. Tell me what each value means and which ones a non-specialist might ask follow-up questions about.” The translation is usually clear, accessible, and a good starting point for conversations with your team. What it cannot do: tell you whether a value is “good” for you specifically — that depends on your treatment history, drug interactions, and your team’s reference ranges, not the lab’s standard ranges.
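If you (or a technically inclined caregiver) would rather script this than paste into a chat window, here is a minimal sketch sending the same prompt through OpenAI’s Python library. The model name and the sample lab values are assumptions (swap in whatever you actually use), and the privacy rule from later in this article applies here too: strip names, birthdates, and record numbers before sending anything.

```python
# Minimal sketch: the bloodwork-translation prompt via the openai Python
# library (pip install openai). "gpt-4o" is an assumed model name; check
# the current model list. The lab values below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

labs = """
WBC 4.2 K/uL
Hgb 11.8 g/dL
Platelets 210 K/uL
"""  # replace with your own results, minus any identifying details

prompt = (
    "Translate this bloodwork into plain English. Tell me what each value "
    "means and which ones a non-specialist might ask follow-up questions "
    "about.\n\n" + labs
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```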
Drafting questions for appointments
“I have an appointment next week with my oncologist about [drug]. Help me write 10 questions a thoughtful patient would ask, prioritized by importance.” This produces a better question list than most patients walk in with, and it surfaces topics you might not have known to ask about. What it cannot do: prioritize for your specific situation. Take the list to your appointment as a starting point, not a script.
Drafting messages you don’t have the energy to write
“Help me write a short text to my mother explaining that I won’t be able to come for Thanksgiving this year because of treatment, in a way that’s warm but firm.” AI assistants are good at this — they can produce a draft you can edit, in 15 seconds, when you don’t have 15 minutes to draft from scratch. The same applies to thank-you notes after a meal train, no-thank-you notes for unhelpful advice, and the gentle phrasing for asking a friend to stop calling every day.
Scheduling, summarizing, and remembering
“Here are my treatment dates [paste], and here are my work commitments [paste]. Help me figure out which weeks I should ask for lighter loads.” Or: “Summarize this 12-page treatment plan into a one-page version I can give to my partner.” Or: “What questions should I have answered before signing this consent form?” These are administrative tasks AI handles well; the answer is yours to verify.
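For the schedule-overlap question specifically, you don’t even need AI; the date math is simple enough to keep on your own machine. A minimal sketch in plain Python, with all dates hypothetical:

```python
# Flag work deadlines that land in the same week as a treatment date.
# All dates below are hypothetical placeholders -- substitute your own.
from datetime import date, timedelta

treatment_dates = [date(2025, 3, 4), date(2025, 3, 18), date(2025, 4, 1)]
work_deadlines = [date(2025, 3, 6), date(2025, 3, 20), date(2025, 4, 10)]

def week_of(d: date) -> date:
    """Return the Monday of the week containing d."""
    return d - timedelta(days=d.weekday())

treatment_weeks = {week_of(d) for d in treatment_dates}
for deadline in sorted(work_deadlines):
    if week_of(deadline) in treatment_weeks:
        print(f"{deadline}: treatment week -- ask for a lighter load")
    else:
        print(f"{deadline}: clear week")
```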
The rules that matter
- AI is not your doctor. Even when it sounds confident. Even when it’s right. The gap between how confident these tools sound and how accurate they are is the most consistently documented problem with general AI assistants in medical contexts.
- Verify medical claims. If the AI tells you something specific about a drug interaction, dosing, or a treatment recommendation, verify it with your care team or with a published authoritative source (NIH, Mayo Clinic, Memorial Sloan Kettering, the American Cancer Society).
- Don’t paste protected health information into free consumer products without thinking. ChatGPT and most consumer AI products may use your inputs to train future models unless you opt out. OpenAI’s data controls FAQ covers how to opt out for ChatGPT; similar pages exist for other vendors. For sensitive information, opt out first.
- Be skeptical of “what does this scan show” questions. FDA-cleared AI image readers do exist in clinical use, but consumer AI assistants are not among them — they will speculate confidently about images they were never validated to read. Don’t ask ChatGPT to interpret your CT scan.
- Use the right tool for the task. ChatGPT is fine for translation and drafting. For drug-interaction questions, the Drugs.com interaction checker is more reliable. For specific cancer-drug research, NCI’s clinical trial database is authoritative.
The prompts that get the best answers
| What you want | Prompt that works |
|---|---|
| Plain-English bloodwork | “Here are my latest CBC and metabolic panel results: [paste]. Translate each value into plain English and flag the 3 things a thoughtful patient might want to ask their doctor about.” |
| Question prep | “I’m seeing my oncologist next week. Diagnosis: [stage/type]. Drug: [name]. Help me write 10 prioritized questions, organized by topic.” |
| Hard message draft | “Draft a short, warm but firm text saying [the message]. Audience: [relationship]. Tone: [warm/professional/etc].” |
| Treatment plan summary | “Summarize this treatment plan in plain English at a 9th-grade reading level, in under 200 words. Include 3 things I should ask before signing.” |
| Side-effect tracking | “I’m experiencing [symptoms]. Suggest 5 questions I should ask my care team. Don’t tell me what’s wrong; help me describe it accurately.” |
| Insurance denial appeal | “My insurance denied [treatment]. Help me draft an appeal letter focusing on medical necessity. The denial reason was: [paste].” |
| Saying no to commitments | “Help me write a polite text declining [event] because of treatment. Recipient: [relationship]. Goal: warm, brief, leaves the door open.” |
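If you find yourself reusing these prompts across appointments, it can help to keep them as fill-in templates instead of retyping the bracketed parts each time. A minimal sketch using Python’s built-in string.Template, applied to the question-prep prompt from the table above (the diagnosis and drug values are hypothetical):

```python
# Reusable fill-in version of the "question prep" prompt from the table.
from string import Template

question_prep = Template(
    "I'm seeing my oncologist next week. Diagnosis: $diagnosis. "
    "Drug: $drug. Help me write 10 prioritized questions, organized by topic."
)

# Hypothetical values -- substitute your own diagnosis and drug.
print(question_prep.substitute(diagnosis="Stage 2 breast cancer", drug="tamoxifen"))
```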
What we do not recommend using AI for
- Diagnostic decisions. Don’t ask AI whether you have a condition or what’s wrong with you. That’s your care team’s job.
- Drug-dosing calculations. That’s a job for pharmacists and your care team. Don’t trust AI math on medications.
- Image interpretation. Scans, dermatology photos, anything visual. Use the FDA-cleared tools your team uses, not consumer AI.
- Mental health crisis support. If you’re in distress, contact 988 (Suicide & Crisis Lifeline in the US) or your local equivalent. AI is not a substitute for crisis care.
- Anything where being wrong is expensive. Treatment decisions, financial decisions, anything legally binding. AI is a draft tool, not a final-decision tool.
Practical AI literacy is part of the modern patient skill set
The wardrobe is one part of recovery; managing the information flow is another. We don’t sell AI tools or workflow software, but we believe in writing about what real patients are actually doing — and a lot of patients are using ChatGPT and similar tools every week. Read more on tech and AI in recovery, including specific app and tool reviews.
Sources and further reading
- FDA — Artificial Intelligence and Machine Learning in Software as a Medical Device · AI/ML-Enabled Medical Devices list
- American Medical Association — Principles for Augmented Intelligence Development
- World Health Organization — Ethics and governance of artificial intelligence for health
- OpenAI — Data controls FAQ
- NCI — Clinical Trial Database
- 988 Suicide & Crisis Lifeline — 988lifeline.org (call or text 988 in the US)