As ordinary people push harder to be allowed to return to work and to get their lives back to normal, health experts have been puzzling over how best to make that happen.
One idea involves widely testing for antibodies, to see who has had the virus (and maybe didn’t even know it). Anyone with antibodies would then get some kind of “free movement” badge, I guess, and could go wherever they want without fear of becoming infected.
But of course it’s not that simple, as we are being told:
... we still know very little about what human immunity to the disease looks like, how long it lasts, whether an immune response prevents reinfection, and whether you might still be contagious even after symptoms have dissipated and you’ve developed antibodies. Immune responses vary greatly between patients, and we still don’t know why.

And so: lots of questions being asked, good questions. But what I found more intriguing was a Twitter thread I stumbled upon recently talking about, let’s say, “antibody math.” It was written in Twitter shorthand, and flung about phrases like “Bayes theorem” and “Pr(are+|test+).”
So yeah: practically unreadable.
But I thought it would be fun to unpack its message about the reliability of antibody tests. After all, if you’re going to get a badge that says, “Hey, I’ve got Covid-19 immunity!” you’ll want to know if it’s worth the paper it’s printed on.
The accuracy of antibody tests, it turns out, is basically measured two ways: by “sensitivity” and “specificity.”
“Sensitivity” is the percent chance that, if you indeed have Covid-19 antibodies, the test will detect them. So if a certain test has a sensitivity of 95%, that means that out of 100 people who actually have the antibodies, it will correctly identify 95 of them (on average).
“Specificity” is the interesting flip side: the percent chance that, if someone doesn’t have the antibodies, the test will get that right as well. So a specificity of 95% means that the test will incorrectly report the presence of antibodies, when they’re not there, only 5% of the time.
Now, it so happens that if you dial up the sensitivity of a test, the specificity drops, and vice versa. So there’s a tradeoff between the two measures.
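If it helps to see those two definitions side by side, here’s a minimal Python sketch. The counts are made up purely to illustrate, not taken from any real test:

```python
# Made-up counts, just to illustrate the two definitions.
true_positives = 95    # have antibodies, test says positive
false_negatives = 5    # have antibodies, test says negative
true_negatives = 95    # no antibodies, test says negative
false_positives = 5    # no antibodies, test says positive

# Sensitivity: of the people who really have antibodies, what share does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of the people who don't have antibodies, what share does the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
# sensitivity = 95.0%, specificity = 95.0%
```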
Okay, now for the “antibody math,” using an actual FDA-approved test by a company called Cellex. It has a “sensitivity” of 93.8% and a “specificity” of 95.6%. That sounds pretty good! Suppose you were issued a Covid-19 antibody badge based on a Cellex test.
What are the chances that you actually have the antibodies, and the test didn’t make a mistake?
This is where things get really, really interesting.
First, we have to make an assumption: how prevalent is Covid-19 in the population being tested? For this example, let’s say it’s fairly rare, and only 1% of the population has had the disease.
Let’s suppose we’re testing 100,000 people (by using this number, the math works out more easily, and we don’t even have to use decimals!). Let’s take that population and divide it into 100 slices of 1,000 each (so each percentage point is represented by 1,000 people).
We expect that 1% of our population, or 1,000 people, have had the virus. The test will detect 938 of these people because it has a “sensitivity” of 93.8%.
Now for the “specificity.” Let’s take the next 1,000 people in line, all of whom have never had Covid-19. The test mostly gets this right, but mistakenly classifies 44 as having the antibodies. (If you’re unclear how I got there: it correctly identifies 95.6%, or 956 of the 1,000 people, as not having the virus. The remaining 44 thus become “false positives.”)
44 may not seem like much. But remember: that’s just one slice of 1,000 people we’ve looked at here. And 99% of the population – 99 slices – hasn’t had Covid-19. So we have to multiply that 44 by 99. Suddenly – yikes! – we’ve got 4,356 false positives out of our population of 100,000.
Now it’s time to wrap this up. The test correctly identifies 938 people. But it messes up on 4,356 more. That means that out of 938 + 4,356 = 5,294 positive results, only 938 are correct.
Do the division, and that gives you 17.7%, which means a positive result implies a less than one in five chance that you actually had Covid-19! It would be a disaster to release people into the workforce based on this scenario.
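If you’d rather see that walkthrough as code than as prose, here’s a quick Python sketch of the exact same arithmetic (nothing new, just the numbers from above):

```python
# The 1%-prevalence walkthrough, with the Cellex numbers from above.
population = 100_000
prevalence = 0.01        # assume only 1% have actually had the virus
sensitivity = 0.938      # Cellex sensitivity
specificity = 0.956      # Cellex specificity

have_antibodies = population * prevalence              # 1,000 people
no_antibodies = population - have_antibodies           # 99,000 people

true_positives = have_antibodies * sensitivity         # 938 caught correctly
false_positives = no_antibodies * (1 - specificity)    # 44 per slice of 1,000, 4,356 in all

# Chance that a positive result is genuine: true positives over all positives.
chance_genuine = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true positives, {false_positives:.0f} false positives")
print(f"chance a positive result is genuine: {chance_genuine:.1%}")   # about 17.7%
```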
The problem is that when a disease is rare, the false positives swamp the true positives and drag the value of a positive result way down. But if we tweak the numbers, and assume 10% of the population has had Covid-19, the math becomes more comforting (I’ll spare you the steps, but they’re the same as above).
In this scenario, if you test positive, there’s a 70.3% chance that you really did have the coronavirus. Maybe still not high enough to get comfortable with, but a step in the right direction.
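Here’s the same math one more time, wrapped up as a little Bayes’ theorem function so you can plug in whatever prevalence you like (epidemiologists call this number the test’s “positive predictive value”):

```python
def positive_predictive_value(prevalence, sensitivity=0.938, specificity=0.956):
    """Bayes' theorem: chance you really have antibodies, given a positive test."""
    true_pos_rate = prevalence * sensitivity
    false_pos_rate = (1 - prevalence) * (1 - specificity)
    return true_pos_rate / (true_pos_rate + false_pos_rate)

print(f"{positive_predictive_value(0.01):.1%}")   # about 17.7% at 1% prevalence
print(f"{positive_predictive_value(0.10):.1%}")   # about 70.3% at 10% prevalence
```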
So what can we learn from all this?
* Antibody tests will be more reliable in areas where Covid-19 infection rates are relatively high, such as New York.
* If we decide to make antibody tests a key part of our “return to normal” plan, it’s important to first do enough random sample testing to get a good handle on what percentage of a given population has actually had Covid-19, because that number is a key input for accuracy.
* We can overcome the inaccuracy problem, to some extent, by testing someone multiple times. In the 10% prevalence scenario above, where a single positive test means a 70.3% chance that the antibodies are real, requiring three positive tests lifts that chance to more than 97% -- which is pretty good. (A quick sketch of that calculation follows this list.)
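Here’s a rough Python sketch of that repeat-testing math, under the simplifying assumption that the three tests err independently of one another (real repeat tests probably share some error sources, so treat this as a best case):

```python
# Repeat testing at 10% prevalence: require three positive results in a row.
prevalence = 0.10
sensitivity = 0.938
specificity = 0.956

# Chance of three positives if you do / don't actually have the antibodies,
# assuming the three tests err independently.
p_three_pos_with_antibodies = sensitivity ** 3
p_three_pos_without = (1 - specificity) ** 3

chance_real = (prevalence * p_three_pos_with_antibodies) / (
    prevalence * p_three_pos_with_antibodies
    + (1 - prevalence) * p_three_pos_without
)
print(f"chance the antibodies are real after three positives: {chance_real:.1%}")
# comfortably above the 97% mentioned above, under this assumption
```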
Okay, I’m tapped out for now. Hope someone out there finds this edifying (and useful).
Cheers, and keep moving those knees!