How to Improve Your AI Skills
Most doctors who use AI assistants, ObGyns included, use them exactly one way. They type a question. They read the answer. Then they close the tab.
I did that for a while too. It seemed like enough. You ask, it answers. What else is there?
It turns out: quite a lot.
Over the past year I have been thinking carefully about how I actually use Claude — not in theory, but in practice.
How I draft manuscripts, review literature, build clinical tools, respond to critics, produce a weekly newsletter.
What I do now looks almost nothing like what I was doing eighteen months ago.
I want to share what changed, because I think most ObGyns using AI, and most doctors for that matter, are leaving an enormous amount on the table.
The first level is the one everyone knows. You open the chat. You type something. You get an answer. This is genuinely useful — better than a literature search, faster than a colleague down the hall, and available at two in the morning when you are finishing a consult note and cannot remember whether the SMFM threshold for magnesium toxicity is 7 or 9 mEq/L. Ask it. It knows. I use this every day for quick clinical questions, drug interactions, dosing in renal impairment, and the occasional guideline I have not memorized because it changed last month. This alone makes it worth using.
But the second level is where things start to shift. The key move is giving the AI context before you ask your question. Not just the question — the situation.
Here is what I mean. Instead of typing: ‘What should I tell a patient who wants a VBAC?’, try this: ‘I have a 34-year-old G2P1 with a prior low transverse cesarean for arrest of active phase. She has a BMI of 28, no prior vaginal deliveries, presenting at 39 weeks, not in labor. She is asking about VBAC. I practice in a hospital with 24-hour in-house anesthesia coverage. What should I cover in the counseling conversation, including the absolute risk numbers she needs to hear?’ That question gets you something you can use in the room. The generic question gets you a brochure.
Once you understand that context drives quality, you start giving it more context every time. Your hospital’s cesarean rate. Your patient population. Your threshold for induction in IUGR. The AI does not know any of this unless you tell it. When you do, the answers stop being textbook and start being applicable.
The third level is repeatability. This is where I think the average ObGyn is most underestimating the tool.
Think about how many things you do the same way every time. Discharge instructions after a postpartum hemorrhage. Counseling frameworks for a new gestational diabetes diagnosis. A summary of a fetal anomaly for a patient who has never heard the word ‘ventriculomegaly’ before. A letter to an insurance company explaining why a patient needs a cerclage. These tasks follow a structure. They have components. They should look and sound a certain way every time.
You can teach the AI that structure once, save it, and use it forever. I have done this for patient handouts on preterm labor, for obstetric ethics consultations, for structured summaries of research papers I need to digest quickly. The first time takes twenty minutes to set up. After that, it takes thirty seconds. You describe what good looks like — the reading level, the order, the tone, what to include and what to leave out — and it follows your specifications every single time without getting tired, without cutting corners on a Friday afternoon.
One concrete example: I needed to explain placenta previa to patients at a sixth-grade reading level, include what symptoms to watch for, when to call, and what delivery planning looks like, without using the word ‘hemorrhage.’ I described exactly that. The AI produced a handout I edited once and have used since. Before that I was writing it from scratch or handing patients a printout from UpToDate that no one reads.
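If you are curious what a specification like that looks like when written down, here is one way to capture it so it can be reused for any topic. This is a hypothetical sketch in Python, not anything from my actual workflow; the function name, fields, and defaults are all illustrative:

```python
# Hypothetical sketch: capture a handout specification once, reuse it forever.
# All names and defaults here are illustrative, not from my actual setup.

def build_handout_prompt(topic, reading_level="sixth grade",
                         avoid_terms=(), must_cover=()):
    """Assemble a prompt that encodes what 'good' looks like for a handout."""
    lines = [
        f"Write a patient handout on {topic}.",
        f"Target a {reading_level} reading level.",
        "Use short sentences and a warm, direct tone.",
        "Order: what it is, symptoms to watch for, when to call, what happens next.",
    ]
    if must_cover:
        lines.append("Be sure to cover: " + "; ".join(must_cover) + ".")
    if avoid_terms:
        lines.append("Never use these words: " + ", ".join(avoid_terms) + ".")
    return "\n".join(lines)

# The placenta previa spec described above, expressed as a template call:
prompt = build_handout_prompt(
    "placenta previa",
    must_cover=("symptoms to watch for", "when to call", "delivery planning"),
    avoid_terms=("hemorrhage",),
)
```

The point is not the code. It is that once the specification exists in one place, changing your mind about the reading level or the forbidden words is a one-line edit, and every future handout inherits it.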
The fourth level — where I am still building — is using the AI to create actual clinical tools. Calculators. Decision aids. Risk stratifiers. I built a Bishop score calculator for my website. A gestational weight gain tracker. A preterm labor risk tool. I am not a software engineer. I am a seventy-six-year-old professor who described what he wanted clearly enough that the AI built it. That is available to any ObGyn who is willing to describe what they need with the same precision they would use to describe a surgical technique.
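To give a sense of how small these tools are once you describe them precisely: the entire scoring logic behind a Bishop score calculator fits in a few lines. This is a simplified sketch of the classic, unmodified scoring table, not the code running on my site, and it assumes the inputs have already been validated:

```python
# Simplified sketch of the classic (unmodified) Bishop score.
# Not the actual code from my site; inputs are assumed pre-validated.

def bishop_score(dilation_cm, effacement_pct, station, consistency, position):
    """Return the classic 0-13 Bishop score from five cervical exam findings."""
    # Dilation (cm): closed=0, 1-2=1, 3-4=2, 5 or more=3
    if dilation_cm <= 0:
        d = 0
    elif dilation_cm <= 2:
        d = 1
    elif dilation_cm <= 4:
        d = 2
    else:
        d = 3
    # Effacement (%): 0-30=0, 40-50=1, 60-70=2, 80 or more=3
    if effacement_pct <= 30:
        e = 0
    elif effacement_pct <= 50:
        e = 1
    elif effacement_pct <= 70:
        e = 2
    else:
        e = 3
    # Station: -3=0, -2=1, -1 or 0=2, +1 or +2=3
    s = {-3: 0, -2: 1, -1: 2, 0: 2, 1: 3, 2: 3}[station]
    # Consistency: firm=0, medium=1, soft=2
    c = {"firm": 0, "medium": 1, "soft": 2}[consistency]
    # Position: posterior=0, mid=1, anterior=2
    p = {"posterior": 0, "mid": 1, "anterior": 2}[position]
    return d + e + s + c + p
```

The hard part was never the programming. It was stating the scoring table, the edge cases, and what the output should say, with the same precision you would use to dictate an operative note. Do that, and the AI handles the rest.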
The pattern across all four levels is the same. The AI is not the variable. You are. Generic input produces generic output. Specific, contextualized, detailed input produces something worth reading. The doctors who will use this well are not the youngest or the most technically inclined. They are the most precise. The ones who can describe what they do and what good looks like.
Start with one thing you do repeatedly. A counseling framework. A patient letter. A way you explain a diagnosis. Describe it to the AI in detail. Tell it the reading level. Tell it what to include. Tell it what you never say and why. Run it. Edit the output. Correct it once. Then use it for the next ten years.

