The Doctors Who Are Afraid of the Future
Physicians publishing fear-based warnings about AI in medicine are doing more damage than the technology itself. Here is what the evidence actually shows.
A colleague forwards me a link.
Another paper, another warning.
A physician, board-certified, published, credentialed, explains at length why artificial intelligence is dangerous, unreliable, and a threat to patients.
The journal is legitimate. The argument sounds reasonable. The evidence is thin.
This has become a genre.
The anti-AI paper follows a recognizable formula. Find a case where a chatbot gave bad advice. Describe a hallucinated citation. Invoke the image of a vulnerable patient following an algorithm off a cliff. Conclude that AI in medicine is dangerous. Publish. Collect the citations.
What these papers rarely do is apply the same standard to the alternative they are defending.
Medical errors are responsible for an estimated 250,000 deaths per year in the United States, a figure that has led researchers to describe them as the third leading cause of death in the country.1
Physicians misdiagnose.
Drug interactions go unnoticed.
Guidelines written for trial populations get applied to patients who were never in any trial.
The benchmark against which AI is being judged is not perfect medicine. It is medicine as we actually practice it, with all its documented failure rates.
The data on AI tell a different story than the warning papers suggest. When ChatGPT sat for the United States Medical Licensing Examination, it performed at or near the passing threshold on all three steps, without any specialized training.2 When researchers compared physician responses to patient questions posted to a public social media forum with responses from a large language model, the AI scored higher on measures of both information quality and empathy.3 AI-assisted diagnostic tools in radiology and pathology have matched, and in some cases exceeded, specialist performance in controlled studies.
None of this means AI is without flaw. Hallucinations are real. Training data bias is real. An AI system trained predominantly on data from academic medical centers may perform less reliably for the patients least well-served by those centers. These limitations deserve serious scientific investigation.
What they do not deserve is misrepresentation dressed up as patient advocacy.
The fear papers have real consequences for real patients. When someone reads that AI medical advice is dangerous, she does not conclude that she will wait for a more careful physician. She gives up a tool that could have answered her question at 2 a.m. when the office was closed, helped her understand a diagnosis delivered in a fifteen-minute appointment, or flagged a drug interaction her doctor did not mention. She returns to a system that fails patients at measurable rates because the alternative has been made to sound worse.
AI does not replace clinical judgment. It extends access to knowledge that was previously available only to people who could afford specialists or happened to ask the right physician.
Restricting that access in the name of safety is not a neutral act. It is a choice with its own costs, and those costs fall on the patients who can least afford them.
My Take
I have been in this field for fifty years. I know what it looks like when a profession defends its authority rather than its patients. The warning papers about AI follow a pattern I recognize: take the worst-case example, generalize it to a categorical claim, and publish it somewhere it will be amplified by people who share the anxiety.
I proposed introducing AI education to two major professional societies in obstetrics and gynecology. Both declined. I was not surprised. Organizations that took decades to accept evidence-based medicine over expert opinion are not going to welcome a technology that makes the limits of expert opinion visible and quantifiable.
An AI system knows millions of papers. A physician, even an excellent one, has read a few thousand at most. That gap is real, and it is not going away. The question is whether we use it honestly to help patients, or spend the next decade publishing papers about why we should not.
The physicians fighting AI are fighting the wrong enemy. The enemy is the gap between what medicine knows and what it delivers.
AI, used honestly, narrows that gap. The warning papers widen it.
If you want analysis of AI in medicine that holds the technology to the same evidentiary standard as the claims made against it, subscribe to ObGyn Intelligence. Independent. Evidence-first. No agenda except the data.
References
1. Makary MA, Daniel M. Medical error — the third leading cause of death in the US. BMJ. 2016;353:i2139.
2. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198.
3. Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589-596.