Medical writing is one of those niches where the wrong AI tool will waste hours of your time. Stakes are high, accuracy is non-negotiable, and tone matters as much as content. After running hundreds of prompts through every major model, I’ve narrowed it down to two real options for medical content: Claude and ChatGPT. Gemini is out of the race entirely, and I’ll explain why before we get to the prompts.
Why Gemini Doesn’t Make the Cut
If you’ve ever tried to use Gemini for anything more clinical than “what’s a healthy breakfast,” you already know. Google’s own API Terms of Service explicitly forbid using the model “in clinical practice, to provide medical advice, or in any manner that is overseen by or requires clearance or approval from a medical device regulatory agency.”
Even outside those clinical edges, the consumer Gemini app applies aggressive safety filters to anything that smells medical. Developers building healthcare tools have been complaining about this on Google’s own forums for years: prompts that work fine in Claude or ChatGPT get refused, sanitized, or watered down into useless generalities. Image generation is even more locked down: medical procedures are flagged as a “high-risk category” by default.
So if you’re writing a patient education brochure, a CME module, an oncology blog post, or anything in between, Gemini is going to fight you the entire way. Just don’t bother. Open a Claude tab and a ChatGPT tab, and let’s actually get work done.
When to Reach for Claude vs. ChatGPT
Quick mental model before the prompts:
- Claude wins on long-form, nuanced, tone-sensitive medical writing. It holds context across 20+ pages without losing the thread, follows AMA Manual of Style instructions reliably, and produces prose that doesn’t read like a chatbot wrote it.
- ChatGPT wins on speed, SEO-shaped health content, structured outputs, and first drafts you’ll heavily edit. It’s also better when you need it to call a tool or search the web mid-draft.
Use both. Seriously. Most of my workflow runs them in parallel.
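Running them in parallel doesn’t have to mean two browser tabs. A minimal Python sketch using a thread pool, assuming you wrap each vendor’s SDK call in a function of your own; the `draft_with_*` functions below are hypothetical placeholders, not real SDK calls:

```python
# Sketch: send the same prompt to both models at once, compare the drafts.
# The two draft_* functions are placeholders -- swap in real API calls
# (e.g. the official Anthropic and OpenAI Python clients) for actual use.
from concurrent.futures import ThreadPoolExecutor

def draft_with_claude(prompt: str) -> str:
    # Placeholder: replace with a real Claude API call.
    return f"[Claude draft for: {prompt[:40]}]"

def draft_with_chatgpt(prompt: str) -> str:
    # Placeholder: replace with a real ChatGPT API call.
    return f"[ChatGPT draft for: {prompt[:40]}]"

def parallel_drafts(prompt: str) -> dict:
    """Run both models on the same prompt and return both drafts."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        claude = pool.submit(draft_with_claude, prompt)
        chatgpt = pool.submit(draft_with_chatgpt, prompt)
        return {"claude": claude.result(), "chatgpt": chatgpt.result()}
```

The payoff is a side-by-side comparison for the same prompt, which makes each model’s strengths obvious fast.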
Best Prompts for Claude
1. Patient Education Material (Plain Language, FDA-Aligned)
You are a medical writer producing patient education material for a U.S.
health system. Write a one-page handout on [CONDITION] for newly diagnosed
adult patients.
Constraints:
- Reading level: 6th grade (Flesch-Kincaid)
- Voice: warm, second person, no jargon without a plain-English gloss
- Structure: "What it is," "What causes it," "What to expect," "When to
  call your doctor," "Questions to ask"
- Must include a clear disclaimer that this is educational and not a
  substitute for medical advice
- Cite the source category for each medical claim (e.g., "per CDC," "per
  UpToDate") so I can verify before publishing; do NOT invent specific
  citations
Output as clean markdown.
Why it works in Claude: the model handles the reading-level constraint without flattening the content, and it actually respects the “don’t invent citations” instruction better than most.
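You can also verify the reading-level constraint mechanically before publishing. A rough sketch using the standard Flesch-Kincaid grade-level formula with a naive vowel-group syllable counter; good enough for a sanity check, not a substitute for a real readability tool:

```python
import re

def syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a draft."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    # Standard FK grade formula: 0.39 * (words/sentence)
    # + 11.8 * (syllables/word) - 15.59
    return 0.39 * (n / sentences) + 11.8 * (syl / n) - 15.59
```

If a draft comes back at grade 10 instead of 6, paste the score into a follow-up prompt and ask the model to simplify; both models respond well to a concrete number.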
2. Clinical Case Summary
Act as a hospital medicine attending. Below is a raw set of clinical notes
from a 72-year-old patient. Produce a discharge summary in standard
SOAP-adjacent format with the following sections: HPI, Hospital Course,
Discharge Diagnoses, Discharge Medications, Follow-up Plan.
Tone: professional, concise, suitable for the receiving PCP.
Do not add any clinical detail that is not in the source notes. If
something is ambiguous, flag it in a "Clarifications Needed" section at
the bottom rather than guessing.
[paste notes]
The “flag, don’t guess” instruction is the one that actually saves you from hallucinations. Claude follows this more reliably than ChatGPT in my experience.
3. Literature Synthesis
I'm writing a narrative review on [TOPIC] for a clinician audience. I'll
paste 5 abstracts below. Your job:
1. Synthesize the findings into 3-4 thematic paragraphs (not a study-by-
study walkthrough)
2. Note where studies disagree and offer a possible reason
3. Identify what's missing from this evidence base
4. Write in AMA Manual of Style, past tense for study findings
Do not introduce any claim that isn't supported by one of the 5 abstracts.
At the end, list any place where you felt you needed more data to make
the synthesis honest.
[paste abstracts]
4. Regulatory / Submission-Style Writing
Draft the "Background and Significance" section for an IRB protocol on
[STUDY TOPIC]. Audience: an IRB review committee.
Length: ~600 words.
Tone: formal, dispassionate, third person.
Structure: scope of the public health problem → current standard of care
→ knowledge gap → how this study addresses the gap.
I will provide citations separately; leave [CITATION] placeholders where
references should go. Do not fabricate references.
Best Prompts for ChatGPT
1. SEO-Optimized Health Article
Write a 1,200-word evergreen article for a patient-facing health blog on
"[TOPIC]."
Target keyword: [KEYWORD]
Secondary keywords: [LIST]
Search intent: informational
Structure:
- H1 with primary keyword
- 5-7 H2 sections with question-style headings (for People Also Ask)
- A short FAQ block at the end (4 questions)
- Meta description under 155 characters
Include a medical disclaimer block at the bottom. Do not make any
treatment claims; frame everything as "talk to your doctor about X."
Reading level: 8th grade. American English spelling.
ChatGPT’s SEO instincts are tuned higher than Claude’s. If you want something that’s already shaped for Google before an editor touches it, this is the move.
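Before handing the draft to an editor, you can lint the structural spec mechanically. A sketch that checks the meta-description length and H2 shape from the prompt above; `check_seo_draft` is my own helper name, and the thresholds simply mirror the prompt’s spec:

```python
import re

def check_seo_draft(markdown: str, meta_description: str) -> list[str]:
    """Return a list of problems found in an SEO draft; empty means it passes."""
    problems = []
    if len(meta_description) > 155:
        problems.append(
            f"Meta description is {len(meta_description)} chars (max 155)."
        )
    # H2 lines in markdown start with "## ".
    h2s = re.findall(r"^## .+$", markdown, flags=re.MULTILINE)
    if not 5 <= len(h2s) <= 7:
        problems.append(f"Found {len(h2s)} H2 sections; the spec asks for 5-7.")
    question_h2s = [h for h in h2s if h.rstrip().endswith("?")]
    if len(question_h2s) < len(h2s) // 2:
        problems.append("Most H2s should be question-style for People Also Ask.")
    return problems
```

An empty list doesn’t mean the article is good, only that it matches the shape you asked for, which is exactly what you want to verify before a human edit.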
2. Drug Information Sheet (Internal Reference)
Create an internal reference sheet on [DRUG NAME] for our clinical content
team. NOT for patient distribution.
Sections:
- Drug class and mechanism
- FDA-approved indications (label only; flag off-label uses separately)
- Common adult dosing
- Major contraindications and black box warnings
- Top 5 clinically significant interactions
- Common adverse effects (>10% incidence)
- Counseling points
Format as a markdown table where it makes sense, prose where it doesn't.
Cite the source category (FDA label, Lexicomp, etc.) for each major
claim. Do not invent specific page numbers or DOIs.
3. CME Module Outline
I'm building a 1-hour CME module on [TOPIC] for primary care physicians.
Produce a detailed outline with:
- 3 measurable learning objectives (use Bloom's taxonomy verbs at the
"apply" or "analyze" level)
- A 6-section content map with estimated minutes per section
- 5 case-based assessment questions with answer rationales
- A 3-item pre/post knowledge check
Tone: collegial, evidence-based, no condescension.
4. Quick Reframe / Tone Pass
ChatGPT is also great as a fast second pass when something you wrote is technically correct but lands wrong:
Below is a paragraph from a patient-facing piece about [TOPIC]. It's
accurate but feels cold and clinical. Rewrite it to be warmer and more
reassuring without losing any factual content and without overpromising
outcomes. American English.
[paste paragraph]
Prompt Engineering Rules That Apply to Both
A few things I’ve learned the hard way:
- Always specify the audience. “Patient” vs. “clinician” vs. “regulator” changes everything: vocabulary, structure, what gets explained, what gets assumed.
- Specify reading level explicitly. “Plain language” is too vague. Say “6th grade Flesch-Kincaid” or “8th grade.” Both models will calibrate.
- Forbid invented citations in writing. Both models will hallucinate DOIs, PMIDs, and page numbers if you don’t shut that down. Use placeholders and add real references yourself.
- Ask for the disclaimer in the same prompt. Don’t bolt it on after; both models write better when the disclaimer is part of the spec.
- Tell it what NOT to do. “Don’t add information not in the source.” “Don’t make treatment recommendations.” Negative constraints carry more weight than you’d expect.
- For high-stakes work, ask the model to flag its own uncertainty. A “Things I was unsure about” section at the end of a draft saves you from the worst kind of error: confident wrongness.
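The “forbid invented citations” rule pairs well with a mechanical scan of the finished draft: pull out everything that looks like a DOI or PMID so you can verify each one by hand. A rough sketch; the regexes are deliberately loose and will over-match, which is the point for a verification pass:

```python
import re

# Rough patterns for the identifiers models tend to fabricate.
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+\b")
PMID_RE = re.compile(r"\bPMID:?\s*\d{6,9}\b", re.IGNORECASE)

def flag_citations(draft: str) -> list[str]:
    """List every DOI/PMID-looking string so each can be verified by hand."""
    return DOI_RE.findall(draft) + PMID_RE.findall(draft)
```

Anything this flags that you didn’t paste in yourself is almost certainly fabricated and needs to be checked against the real record before publication.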
Final Thought
Medical writing with AI isn’t about getting a publishable draft on the first try. It’s about compressing the part of the job that’s mechanical (structure, tone calibration, formatting) so you can spend your attention on the part that actually matters: making sure every claim is true, every disclaimer is honest, and every reader walks away better off than they started.
Claude and ChatGPT both do that well, in different ways. Use the right one for the right job. And give Gemini a few more years.