One of the first criticisms I got about Saving My Knees went like this:
“This is a lousy book. All he does is complain about how all his doctors and physical therapists are wrong and I can’t tell if he got better anyway.”
(By the way, if anyone else is similarly puzzled on that last point, I can unequivocally state that yes, I got a whole lot better. My knees are fine today.)
Read between the lines, and the reviewer appears to be annoyed that I have the temerity to suggest there’s something wrong with the expert advice I was given on treating knee pain.
Why such a harsh reaction?
I think among certain people there is a reflexive, total deference to the opinions of “experts,” even though what is accepted as truth by one generation of experts may be soundly rejected by the next. (History is full of examples; in Saving My Knees I mention the once widespread medical practice of bloodletting to cure a host of ailments, which has been debunked as nonsense.)
Today I’m going to show you that your doctor is very much human -- and not an infallible expert at all -- with a bit of math. It’s taken from Fooled by Randomness by Nassim Taleb, who in turn borrowed the anecdote from Randomness by Deborah Bennett.
Medical doctors were given this problem to solve:
A test of a disease presents a rate of 5% false positives. The disease strikes 1/1,000 of the population. People are tested at random, regardless of whether they have the disease. A patient’s test is positive. What is the probability of the patient being stricken with the disease?
(If you want to try to figure it out yourself, go ahead. I start to disclose the solution immediately below.)
Most doctors -- more than four out of five -- got this wrong. They answered 95% because they focused solely on the accuracy rate. But the question being asked isn’t, “How accurate is the test?” The question, stated more fully, is “What’s the probability the patient has a somewhat rare disease if a test that’s wrong 5% of the time says he does?” And the answer to that question is very different: less than 2%.
Taleb explains how he arrives at that figure:
Assume no false negatives. Consider that out of 1,000 patients who are administered the test, one will be expected to be afflicted with the disease. Out of a population of the remaining 999 healthy patients, the test will identify about 50 with the disease (it is 95% accurate). The correct answer should be that the probability of being afflicted with the disease for someone selected at random who presented a positive test is the following ratio: number of afflicted persons/number of true and false positives. Here, 1 in 51.
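If you want to check the arithmetic for yourself, here’s a rough sketch in Python (my own illustration -- the script and its variable names aren’t from Taleb or Bennett). It walks the same steps as the passage above: assume no false negatives, count the one expected true positive, count the expected false positives among the healthy, and take the ratio.

```python
# A rough check of the arithmetic in the quoted passage.
# Assumptions, straight from the problem: the disease strikes 1 in 1,000,
# the test has a 5% false-positive rate, and (Taleb's simplification)
# there are no false negatives.

population = 1000
afflicted = population * (1 / 1000)              # 1 person expected to have the disease
healthy = population - afflicted                 # the other 999 people
false_positives = healthy * 0.05                 # about 50 (49.95, to be exact)

p_disease_given_positive = afflicted / (afflicted + false_positives)

print(p_disease_given_positive)                            # ~0.0196, i.e. less than 2%
print(f"roughly 1 in {round(afflicted + false_positives)}")  # roughly 1 in 51
```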
Got that? The difference between the right answer and the most common wrong one is very significant. It’s the difference between “you almost surely have the disease” and “you almost surely don’t have the disease.”
Wow.
There are a few points worth making here. The less interesting one, to me, is that doctors often can be mistaken.
The point that I find more interesting (and empowering) is that you don’t have to be a medical school graduate and a practicing physician to analyze information about medical conditions (claims, studies, empirical evidence) and come to conclusions that, in some cases, may be superior to those held by so-called experts.
What’s more, when it comes to your bad knees, you do know more than your doctor on one very important subject: how your knees behave (what they like and don’t like, what causes pain, etc.).
So if a doctor says, “Ah, your knees will never get better” (which is what I was wrongly told), remember: doctors can be wrong -- very wrong.
After all, four out of five missed the correct answer to a basic statistics problem. :)
Extra credit: Did you notice Taleb's approach to solving the problem? Out of a population of 1,000, he removed the person who has the disease (remember, it strikes 1 out of 1,000 people), then calculated that 5% of the remaining 999 were false positives (49.95). So the chance of having the disease is 1/50.95 or 1.9627%.
Alternatively, you could apply the 5% rate of false positives to the population of 1,000, resulting in 50 people who wrongly test positive for the disease, then add the one person who actually has it. So the chance of having the disease this way is exactly 1/51 or 1.9608% -- a bit different.
So, given the information as laid out in the problem, which answer is correct, and why?
Note: The difference in the results from the two approaches is trivial, so you may think it hardly matters which one is correct. That’s true for this example, but it wouldn’t be for another, say one where 30% of the population has some disease and the test has a 20% rate of false positives.
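For what it’s worth, here’s a small Python sketch (again my own, with made-up function names) that computes both approaches -- first for the numbers in the problem, then for the hypothetical 30%-prevalence, 20%-false-positive case in the note just above.

```python
def approach_a(prevalence, false_positive_rate, population=1000):
    """Taleb's route: set aside the afflicted first, then apply the
    false-positive rate only to the healthy remainder."""
    afflicted = population * prevalence
    false_positives = (population - afflicted) * false_positive_rate
    return afflicted / (afflicted + false_positives)


def approach_b(prevalence, false_positive_rate, population=1000):
    """The alternative route: apply the false-positive rate to the whole
    population, then add in the afflicted."""
    afflicted = population * prevalence
    false_positives = population * false_positive_rate
    return afflicted / (afflicted + false_positives)


for prev, fpr in [(1 / 1000, 0.05), (0.30, 0.20)]:
    print(f"prevalence {prev:.2%}, false-positive rate {fpr:.0%}: "
          f"approach A = {approach_a(prev, fpr):.4%}, "
          f"approach B = {approach_b(prev, fpr):.4%}")
```

For the original problem the two answers differ only out in the decimals (1.9627% versus 1.9608%), but for the 30%/20% case they land noticeably far apart -- which is the point of the note.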