Antibody tests don’t tell us what we think they tell us…and that is a problem for immunity certificates

Yvette Madrid
Apr 16, 2020

So you had a sore throat and a headache a few weeks back and you wonder if it was Covid-19. Your country has an immunity certificate scheme that lets immune individuals out of confinement, so you believe it is worth taking an antibody test. It comes back positive. You could not be happier. “Now I can get back to living. Family, friends, work, travel, entertainment: it is all back on. What could be better?” A whole lot, actually. You, and the immunity certification scheme, may be contributing to a rapid resurgence of Covid-19.

How can this be? Probably because you believe that the test has been vetted for accuracy, and that a positive test result indicates with high certitude that you had Covid-19. Diagnostic tests are usually characterized by sensitivity and specificity, which measure something related but meaningfully different. A highly sensitive test means that if you did have Covid-19, the test would have a very high probability of saying that you did. A test with high specificity means that if you did not have Covid-19, the test would indicate this with very high probability. So if both of these are high, what is the problem?

The problem is that neither sensitivity nor specificity tells us directly what percentage of all positive tests are true positives (people who actually had Covid-19 and developed the resultant antibodies). This is a crucial piece of information. When we seek to use an individual’s positive antibody test as verification of past Covid-19 infection (and presumed immunity), we are assuming this percentage is very high and that high sensitivity and specificity assure this. But is this true?

It is possible to calculate this crucial percentage from sensitivity and specificity, but an additional piece of information is required: the prevalence of Covid-19. Unfortunately, we currently do not know the prevalence and may continue to have gaps in this regard going forward (although, somewhat ironically, antibody tests are extremely useful for estimating prevalence). Worse still, the lower the prevalence of Covid-19, the lower the chance that your positive test is positive because you actually had the disease.
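To make the arithmetic concrete, here is a minimal sketch in Python of the standard calculation (the function name positive_predictive_value is simply an illustrative label for the chance that a positive result is a true positive):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Chance that a positive test reflects a real past infection (Bayes' theorem)."""
    true_positive_rate = sensitivity * prevalence                # infected and correctly flagged
    false_positive_rate = (1 - specificity) * (1 - prevalence)   # never infected but flagged anyway
    return true_positive_rate / (true_positive_rate + false_positive_rate)
```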

This is admittedly a bit of a conundrum, so it can help to work through a simplified example. Suppose an antibody diagnostic with 90% sensitivity and 80% specificity is used to test everyone in a population of 100 people, 10% of whom had actually had the disease. Of the 10 people who were infected, 9 test positive (true positives); of the 90 who were not, 18 also test positive (false positives). Because the number of false positives is so high, the chance that a positive test is actually a true positive is only 9 out of 27, or one in three. That is shockingly low.
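For readers who want to check the numbers, the same one-in-three figure falls out of simple counting (a rough sketch, assuming everyone in this hypothetical population of 100 is tested exactly once):

```python
# Hypothetical population of 100 people, 10% of whom actually had Covid-19,
# tested with an antibody assay of 90% sensitivity and 80% specificity.
population = 100
infected = 10                                # 10% prevalence
uninfected = population - infected           # 90 people

true_positives = 0.90 * infected             # 9 infected people test positive
false_positives = (1 - 0.80) * uninfected    # 18 uninfected people also test positive

share_real = true_positives / (true_positives + false_positives)
print(share_real)  # 0.333... -> only one positive result in three is a true positive
```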

The good news is that approved antibody tests for Covid-19 should be better than this in terms of sensitivity and specificity. Unfortunately, the percentage of the population that has actually been infected by Covid-19 remains unknown and might be lower than 10%. If we assume a test with 95% sensitivity and 95% specificity, then we would need the prevalence to be over 33% to have good confidence (over 90%) that a positive antibody test is a true positive. A website (https://kennis-research.shinyapps.io/Bayes-App/) lets you explore different scenarios for yourself.
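As a rough sketch of the kind of exploration that site allows (again using the simple Bayes calculation above, not the output of any particular approved test), the 95%/95% scenario looks like this:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    # Chance that a positive result is a true positive, via Bayes' theorem.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with 95% sensitivity and 95% specificity at a range of prevalences.
for prevalence in (0.01, 0.05, 0.10, 0.20, 0.33, 0.50):
    ppv = positive_predictive_value(0.95, 0.95, prevalence)
    print(f"prevalence {prevalence:>4.0%}: chance a positive is real = {ppv:.0%}")
# Prints roughly 16%, 50%, 68%, 83%, 90% and 95%: the 90% confidence level is
# only reached once about a third of the population has actually been infected.
```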

In short, the use of antibody tests to determine individual immunity gives the most unreliable results when population immunity is low, and more reliable results as population immunity increases. This runs counter to the prevailing motivation for immunity certification schemes. In fact, these schemes run a real risk of setting “free” too many unwitting susceptible individuals, thereby continuing to feed transmission rather than restricting it. And implementing such a scheme directly after a period of very effective social distancing (which has kept prevalence low) appears to be a particularly poor idea.
