
AI in recovery — 12 ways to use it well, and 5 ways it can hurt

A practical guide to using ChatGPT, Claude, and similar AI tools during cancer treatment, surgical recovery, dialysis, and chronic illness — what AI is good at, what it’s dangerous at, and the framing that keeps it useful. Sourced from peer-reviewed research on patient AI use, ASCO and AMA guidance on AI in healthcare, and consistent themes from real-patient feedback.

The simple answer

AI tools (ChatGPT, Claude, Gemini, Microsoft Copilot, and similar) can be genuinely useful during recovery — for translating jargon, organizing questions, summarizing research, drafting letters, and managing logistics. They can also be genuinely dangerous if used as a doctor substitute, if you trust their medical claims uncritically, if you share identifying medical information with public tools, or if you let them replace your real care team. The framing that works: AI is a translator and assistant, not a clinician. Below: 12 specific use cases that work well, plus 5 that hurt — with the safeguards that keep AI helpful.

What AI is genuinely good at

1. Translating medical jargon into plain English

Pathology reports, bloodwork, imaging summaries, surgical-technique descriptions — all written in dense medical English that’s nearly impossible to parse without training. Pasting a snippet (with personally identifying information stripped) into ChatGPT and asking “explain this in plain English” is a legitimate, well-suited AI task. The model is good at restating jargon. It’s also good at explaining what each value means and what concerns a high or low value might raise. The translation is not a diagnosis; it’s preparation for a conversation with your team.

2. Generating questions for your next appointment

“I’m seeing my oncologist next week. I’m in the second cycle of [regimen]. What questions should I ask?” The model produces a thoughtful list — the kind a knowledgeable friend would draft. You edit it down to your real situation. The Inspired Comforts doctor questions templates were inspired by exactly this use case; AI can help personalize the list further.

3. Summarizing long medical literature you don’t have time to read

“Summarize this 2024 paper on [drug] in 5 bullets I can bring to my doctor.” The model is good at this. Always note that the summary is a starting point — your physician can correct or contextualize. Don’t make decisions from summaries alone.

4. Drafting letters and difficult emails

The letter to your boss about FMLA. The email to your insurance about a denied claim. The note to a friend who hasn’t visited. The thank-you to your surgical team. AI is excellent at first drafts of difficult correspondence. You edit; the model gets you 80% there in a fraction of the time. This is one of the highest-yield uses for caregivers especially.

5. Helping plan logistics

Treatment-day schedules, FMLA paperwork timelines, pre-op preparation lists, packing lists, meal-train coordination, shopping lists. AI is good at helping you think through what you’re forgetting. It generates a comprehensive list; you cross out what doesn’t apply.

6. Brainstorming meal options within dietary restrictions

“Give me 10 dinner ideas that are low-potassium, low-phosphorus, low-sodium, easy to make.” The model generates ideas you’d never have thought of. Verify nutritional details with your renal dietitian before adopting them; some AI claims about food content are wrong.

7. Practicing hard conversations

“I need to tell my mother I have cancer. Help me think through how to phrase it.” Or: “My partner and I have been fighting about caregiving load. Help me draft what I want to say.” AI can role-play. It’s not therapy, but it’s a low-stakes way to rehearse. Many caregivers describe using AI for exactly this.

8. Generating routines and trackers

“Build me a daily routine for the first 14 days post-mastectomy.” “Build me a weekly schedule for managing chemo + a part-time job + two kids.” The model generates a starting structure; you adapt to your real life.

9. Translating between languages

For patients or families whose first language isn’t English (or whose physician’s first language isn’t English), AI translation is now good enough to bridge real communication gaps. Always keep your hospital’s professional medical interpreter for actual medical decisions; use AI for everyday communication.

10. Demystifying paperwork

Insurance EOBs (explanations of benefits), HSA and FSA paperwork, FMLA forms, disability applications. Asking AI to explain “what this form is asking, in plain English” is a legitimate use. AI is good at structure and process, not at the specifics of your individual case.

11. Generating research questions to take to a specialist

“What are the standard treatment options for [stage X] [cancer type] in someone my age, and what questions should I bring to my second opinion?” The model produces a researched-feeling starting list. Treat it as a starting point only; your specialist has the individual context.

12. Caregiver support — synthesizing across what you’ve shared

For caregivers managing complex situations — appointments, medications, kids, work, finances — AI can help organize the load. “Here’s what I’m tracking. What am I missing?” Models are good at spotting the gaps in a list you’ve already started.

What AI is dangerous at — the 5 ways it can hurt

1. Pretending to be your doctor

The most common failure mode. A patient describes symptoms. The AI generates a confident-sounding diagnosis or treatment plan. The patient acts on it. The AI was wrong. A growing body of research in medical journals shows that AI medical advice can be plausibly worded but factually incorrect (the technical term is “hallucination”). The model has no patient examination, no labs, no imaging, no longitudinal context. It’s pattern-matching against training data. It is not a doctor, even when it sounds like one.

2. Spreading medical misinformation that sounds authoritative

Some AI responses confidently cite studies that don’t exist, treatment effects that aren’t real, or statistics that are made up. The fluency of the output can mask the unreliability of the content. Always cross-check medical claims with your physician, with primary sources (NIH, NCI, ACS, peer-reviewed journals), or with trusted patient organizations. Sounding authoritative is not the same as being authoritative.

3. Privacy compromise from public AI tools

If you paste your full medical history, your name, your date of birth, and other identifying details into a public chatbot, that information may be stored and used in ways you can’t control. Best practice: strip identifying information (name, DOB, exact dates, exact location, exact diagnosis details) and use general framing. “I’m a 50-year-old woman in early-stage cancer treatment” is safe; “My name is X, my MRN is Y, my biopsy report says Z” is not. Some AI services offer enterprise tiers with HIPAA-compliant terms; most public tools do not.

4. Over-reliance that erodes self-advocacy

Patients who run every question through AI sometimes describe losing the muscle of asking their own doctors directly. The doctor-patient relationship is built through asking, listening, pushing back, returning. AI is a useful augmentation; if it becomes a replacement, the relationship with your real care team weakens. The goal is more confident conversations with your team, not fewer.

5. Catastrophizing and 3 a.m. spirals

AI is great at generating possibilities. At 3 a.m., that means it can list every horrible scenario for any symptom you describe. Ask “what’s the worst this could be” in the middle of the night and you’ll have a worst-case list before sunrise. Many patients describe AI-fueled late-night spirals as worse than the original anxiety. Best practice: don’t use AI for symptom-fear questions late at night. Save those for daytime conversations with humans who know you.

“AI helped me draft my FMLA letter, summarize my pathology report, brainstorm renal-friendly dinners, and rehearse the conversation with my boss. AI also told me, when I asked at 2 a.m., that my new symptom was probably one of seven serious things — none of which were correct. I learned to use it for prep, not for diagnosis.”
— composite of recurring sentiment in r/cancer AI threads

The framing that keeps AI useful

  • Treat AI like a research assistant, not a doctor.
  • Treat AI like a first-draft writer, not a diagnostician.
  • Treat AI like a plain-English translator, not a second medical opinion.
  • Treat AI like a brainstorming partner, not a symptom-checker at 3 a.m.
  • Treat AI like a logistics organizer, not an emotional crutch in a crisis.
  • Treat AI like a practice partner for hard conversations, not a therapist or pastoral counselor.

Privacy practices to adopt

  • Don’t share names, DOBs, MRNs, or exact lab values that could identify you.
  • Use general framing. “Someone in their 60s with stage II breast cancer…” rather than “I’m Maria, 62, diagnosed last Tuesday at…”
  • Strip identifying info from documents before pasting.
  • Use HIPAA-compliant tools when available. Some hospital portals are now offering AI assistants that operate under HIPAA terms.
  • Read privacy policies for free tiers. Many free AI services use your data for training; paid tiers often don’t.
  • Don’t share family members’ info without their consent.

The verification habit

For any AI claim about medicine, treatment, drug interactions, or your specific case:

  1. Cross-check with one authoritative source — NCI, ACS, ASCO, peer-reviewed paper, or your physician.
  2. If AI cites a study, search the study to verify it exists.
  3. If AI gives a statistic, verify against the source.
  4. If AI gives advice that contradicts your physician’s, trust your physician.
  5. If AI’s confidence increases on a topic where verification gets harder, that’s a signal to slow down.

What real patients describe finding most useful

  • “It helped me write the email I’d been putting off for three weeks.”
  • “It explained my path report in language I could understand.”
  • “It gave me a list of questions I never would have thought to ask.”
  • “It helped me find recipes that fit the renal diet.”
  • “It helped me talk to my kids about what I’m going through.”
  • “It helped me organize my treatment timeline so I could see the whole arc.”

What real patients describe regretting

  • “I asked at 3 a.m. and ended up convinced I had something worse than I did.”
  • “I trusted what the AI said about a drug interaction; turned out it was wrong.”
  • “I shared too much identifying info with a public chatbot.”
  • “I started skipping my actual doctor visits because I felt I’d ‘already asked the AI.’”

Pair with the toolkit

The Inspired Comforts doctor questions templates were designed for the same appointment-prep use case AI can help with — bring both an AI-generated list and a printable template. Customize both for your specific situation.

FAQ

Should I tell my doctor I’m using AI to prep?
Yes. Most physicians appreciate informed patients. Some have opinions about which AI tools are reliable; ask. Many cancer centers now have AI literacy resources for patients.
Are some AI tools better than others for medical use?
Generally, paid tiers of the major tools (ChatGPT, Claude, Gemini) are more careful than the free versions. None should be used as a doctor.
What about AI built into my hospital portal?
Increasingly common. These often have HIPAA-compliant terms. Use these for hospital-specific questions rather than public AI.
Can AI help me find clinical trials?
It can help summarize trials and questions to ask, but use clinicaltrials.gov as the primary source — AI can miss recent trials or get details wrong.

Sources

  • American Society of Clinical Oncology — asco.org
  • American Medical Association — ama-assn.org
  • National Cancer Institute — cancer.gov
  • Pew Research Center (reports on AI and health) — pewresearch.org


By the Inspired Comforts editorial team.
A note on what this is. This article is general information drawn from the sources cited above and from real-patient experience patterns. It is not medical advice, not a diagnosis, and not a substitute for the guidance of your care team. Your situation is specific to you. Always discuss decisions about your treatment, medications, and care with your physician, surgeon, oncologist, nephrologist, OB, or relevant specialist. If you are experiencing symptoms that worry you, contact your medical team. In an emergency, call 911 or your local emergency number.