Sebastian Rushworth MD: How Well do Doctors Understand Probability?
re: Sebastian Rushworth MD: “62% reduction in the relative risk of dying among covid patients treated with Ivermectin”
Something is awry in medicine. Well, lots of things are awry, but you already knew that if you've graduated beyond the lowest levels of awareness of the world.
How well do doctors understand probability?
by Sebastian Rushworth M.D., 23 June 2021
I think that anyone who has even a partial understanding of what doctors do understands that the practice of medicine, although based on scientific knowledge, isn't a science. Rather it is an art form. And as with all art forms, there are those who excel, and those who plod along, occasionally producing something nice or useful. Most people are probably aware that if you go to five different doctors with a problem, there is a significant probability that you will get five different answers....
One of the things that always needs to be estimated in any individual consultation is probability... But how good is the average doctor?
That is what a study recently published in JAMA Internal Medicine sought to find out...
... In the pneumonia scenario, the doctors overestimated the pre-test probability of pneumonia by 78%. In other words they thought the likelihood that the patient had pneumonia was almost double what it actually was. Not good. Unfortunately, that was their best performance. When it came to angina, they overestimated the pre-test probability by 148%. When it came to breast cancer, they overestimated the pre-test probability by 976% (i.e. they thought it was ten times more likely than it actually was). And when it came to the urinary tract infection scenario, they overestimated the pre-test probability by 4,489%! (i.e. they thought it was 45 times more likely than it actually was).
Doh! What are doctors being taught in medical school these days?
What I think is particularly interesting here is that the error was always in the same direction – in each of the four scenarios the doctors thought that the disease was more likely than it is in reality. If this reflects real world outcomes, then that would mean that doctors probably engage in an enormous amount of overtreatment. Obviously, if you think a patient likely has a urinary tract infection, you’re going to prescribe an antibiotic. And if you think a patient likely has angina, you’re going to prescribe a nitrate. You might even refer the patient for some kind of interventional procedure.
...When it comes to how much a test changes estimation of probability, the doctors overestimated the effect of a positive lung x-ray by 92%, of a mammography by 90%, and of a cardiac stress test by 804%! They were relatively on the mark, however, when it came to estimating the impact of a positive urine culture, only overestimating by 10%.
...Doctors have a pretty poor understanding of how the tests they use influence the probability of disease, and they heavily overestimate the likelihood of disease after a positive test.
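The arithmetic the study is testing here is Bayes' theorem, usually taught to clinicians in its odds/likelihood-ratio form: post-test odds = pre-test odds × LR+, where LR+ = sensitivity / (1 − specificity). A minimal sketch (the 10% pre-test probability and the sensitivity/specificity figures below are illustrative assumptions, not numbers from the study):

```python
def post_test_probability(pre_test_prob, sensitivity, specificity):
    """Update a pre-test probability after a positive test result,
    using Bayes' theorem in odds form.

    LR+ = sensitivity / (1 - specificity)
    post-test odds = pre-test odds * LR+
    """
    lr_positive = sensitivity / (1 - specificity)
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr_positive
    return post_odds / (1 + post_odds)

# Hypothetical example: a test with 70% sensitivity and 90% specificity
# (LR+ = 7) applied to a patient with a 10% pre-test probability.
prob = post_test_probability(0.10, 0.70, 0.90)
print(f"{prob:.3f}")  # 0.438
```

The point the study makes is that a positive result multiplies the pre-test *odds*, it does not simply push the probability toward certainty; with a modest pre-test probability, even a decent test leaves substantial doubt.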
...Finally, the survey asked the doctors to consider a hypothetical scenario in which 1 in 1,000 people has a certain disease, and estimate the probability of disease after a positive and negative result for a test with a sensitivity of 100% and a specificity of 95%... The average doctor in the study thought that the probability of a person with a positive test actually having the disease was 95%. In other words, they overestimated the probability by 4,750%! Apart from that, they thought that a person with a negative test still had a 3% probability of disease, even though the sensitivity was listed as 100% (which means that the test never fails to catch anyone with the disease).
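The correct answers to that hypothetical can be worked out directly from the numbers given in the scenario (prevalence 1 in 1,000, sensitivity 100%, specificity 95%):

```python
prevalence = 1 / 1000
sensitivity = 1.00
specificity = 0.95

# P(disease | positive test): true positives vs. false positives
true_pos = prevalence * sensitivity              # 0.001
false_pos = (1 - prevalence) * (1 - specificity)  # ~0.050
ppv = true_pos / (true_pos + false_pos)

# P(disease | negative test): with 100% sensitivity there are
# no false negatives, so this must be exactly zero.
false_neg = prevalence * (1 - sensitivity)
true_neg = (1 - prevalence) * specificity
p_disease_given_neg = false_neg / (false_neg + true_neg)

print(f"P(disease | positive) = {ppv:.3f}")            # 0.020
print(f"P(disease | negative) = {p_disease_given_neg:.3f}")  # 0.000
```

So a positive test means roughly a 2% chance of disease, not 95%: the false positives from the 999 healthy people per thousand swamp the single true positive. And a negative test means a 0% chance, not 3%.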
Doctors suck at estimating the probability of common conditions in scenarios they face on a daily basis, are not able to correctly interpret the tests they use, and don’t understand even very basic diagnostic testing concepts like sensitivity and specificity. It’s kind of like a pilot not being able to read an altitude indicator. Be afraid. Be very afraid.
Medical schools should be thinking long and hard about the implications of this study. What it tells me is that medical education needs a massive overhaul, on par with the one that happened a hundred years ago after the Flexner report. We don't send pilots up into the air without making sure they have a complete understanding of the tools they use. Yet that is clearly what we are doing when it comes to medicine. Admittedly the practice of medicine is much more complex than flying a plane, but I don't think that changes the fundamental point.
WIND: incompetence (no other word for it) seems not just demonstrated but proven here: most doctors don't even understand test sensitivity, which is so basic as to be laughable. How competent can their total thinking be with such a glaring fault? You can't do algebra if you can't get 2+2 correct.
What do you think the odds are of getting a doctor to accurately assess the risks of a COVID vaccine vs. the disease itself for a particular patient, given these probability estimation failings? Add the heavy-handed political pressures* to encourage patients to get it through blind obedience, irrespective of any patient-specific considerations... and you're going to get crap advice from most doctors.
These 'F'-grade results for doctors are why I believe I am MORE qualified than every doctor I've met to estimate the odds for myself, having training in mathematics and statistics and operations research, a way better bullshit meter, and knowledge of the way the body actually operates (via decades of self-study).
Anyone for the “you’re not a medical doctor, so stay in your lane” twaddle so fondly cited in all public forums these days?
* Government, medical licensing boards, insurers, public forums, etc.