Some back-of-the-envelope math reveals the risk in relying on even the best antibody tests to tell us who’s had the coronavirus


Medical tests, like the blood tests that look for antibodies to the novel coronavirus, aren’t perfect.

Sometimes, you end up with a false negative: You actually did have the coronavirus and your immune system developed antibodies to the virus, but the test says you don’t have those antibodies. Other times, you end up with a false positive: The test results say you have antibodies to the coronavirus, but you don’t actually have them.

Those false positives can be particularly dangerous if you’re counting on the presence of antibodies to offer some protection from the coronavirus so you can resume normal life. Scientists are still uncertain how much protection against future infection people with antibodies have, or how long that protection lasts.

Still, some countries have floated the idea of an “immunity passport” for people who have already had the coronavirus and presumably have some protection against reinfection, at least temporarily. In other places, such as New York, government officials are using antibody tests to get a sense of how many people have had the coronavirus.

In either case, if there are a lot of false positives, you could end up with misleading results.

While each test has its own performance characteristics, some simple math can help reveal how accurate it is when used in a big population of people.

The positive predictive value of a medical test, like the blood tests for antibodies to the novel coronavirus, tells you the probability of actually having a disease (or antibodies) if you test positive for it.

‘It’s not like something you can take to the bank’

Even with a test that correctly identifies antibodies in coronavirus-positive people more than 90% of the time, and correctly finds no antibodies in coronavirus-negative people just as often, half of the positive results can be false if the actual prevalence of the disease in the population is very low, according to Dr. Andrew Noymer, a public health associate professor at the University of California, Irvine. (With 95% sensitivity and specificity and a prevalence of 5%, for example, exactly half of the positive results would be false.)

“Even when you log on to your healthcare portal, and it says, ‘Congratulations, you are positive for Covid antibodies,’ meaning you ostensibly have some immunity, it’s not like something you can take to the bank,” Noymer said in an interview.

One way to increase the chance of accurate results is to make use of multiple tests or pieces of information. For instance, if an individual who previously tested positive for the coronavirus itself later gets an antibody test, you’d expect that test to show that they have antibodies. If it doesn’t, you’re likely to be suspicious of the results.

But if the antibody tests are used to screen large groups of people, we won’t have other pieces of information to use in many cases.

Calculating how often a test gives you accurate results

To calculate the positive predictive value, we need three pieces of information: the true positive rate, or the probability the test will correctly say that you have antibodies if you actually do (a metric called “sensitivity”); the true negative rate, or the probability that the test will correctly say that you don’t have antibodies if you haven’t been infected (called “specificity”); and the underlying share of people in the population who actually have antibodies (the prevalence).

The positive predictive value is, at its heart, just the share of people who have coronavirus antibodies and test positive, out of everyone who tested positive.

The total number of people who have tested positive can be broken down into people who really have antibodies (true positives), and people who don’t (false positives).

We can figure out the number of true positives and false positives based on our knowledge of the performance of the test, along with the number of people who actually have the disease.

This means that the usefulness of our test in figuring out how many people have antibodies depends on two things: how good the test is, and how many people in the population actually have antibodies.
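That relationship is simple enough to write down in a few lines of code. Here is a minimal Python sketch of the standard positive-predictive-value calculation; the function name and structure are illustrative, not taken from any particular lab or test:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: the share of positive results that are true.

    For each person tested, on average:
      true positives  occur at a rate of sensitivity * prevalence
      false positives occur at a rate of (1 - specificity) * (1 - prevalence)
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)
```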

If the underlying condition—in this case, antibodies—is rare in the population, even a very good test can lead to lots of false positives, as the number of people who don’t have the disease could far outweigh the number of people who do.

“If I have high sensitivity and high specificity, my positive predictive value is still going to be lower when the prevalence for that disease is low in the population,” said Dr. Puneet Souda, managing director of life science tools and diagnostics at SVB Leerink, in an interview. “But if the prevalence is high, then my positive predictive value is going to be higher, as long as my sensitivities and specificities are high.”

Underlying prevalence matters

Suppose that we have 1,000 people in a town, and an 8% prevalence rate. That means, of the 1,000 people, 80 have been infected by the coronavirus and developed antibodies.

Let’s also suppose that our hypothetical test has both a sensitivity and specificity of 95%, which sounds pretty good.

That is, in 95% of patients who have antibodies, it’ll detect them, and in 95% of patients who don’t have antibodies, it’ll correctly say they haven’t been exposed.

Let’s say we test everyone in our town, and calculate the number of true positives and false positives. Our 95% sensitivity means that 95% of the 80 people who actually have antibodies will come back as true positives, giving us 76 positive results from that group.

But our 95% specificity means that we’ll have a 5% false positive rate. In other words, 5% of the people in our town who don’t have antibodies will wrongly be told that they in fact do have them.

Since only 80 people actually do have antibodies in our town, we know that the other 920 all don’t.

But our 5% false positive rate means that 46 people out of that group of 920 will be wrongly told that they have antibodies.

Putting together our 76 true positives and our 46 false positives, we have a total of 122 positive results. That means we end up with a positive predictive value of just 76/122, or 62%.
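Those numbers are easy to double-check. Here is the same town worked through in Python, using the hypothetical figures above (a toy calculation, not a model of any real screening program):

```python
population = 1000
sensitivity = 0.95
specificity = 0.95
prevalence = 0.08

have_antibodies = population * prevalence              # 80 people
lack_antibodies = population - have_antibodies         # 920 people

true_positives = sensitivity * have_antibodies         # 0.95 * 80  = 76
false_positives = (1 - specificity) * lack_antibodies  # 0.05 * 920 = 46

share_true = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true + {false_positives:.0f} false = "
      f"{true_positives + false_positives:.0f} positives; PPV = {share_true:.0%}")
# prints: 76 true + 46 false = 122 positives; PPV = 62%
```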

A bit better than a coin toss

So if you’re a citizen of our town, and the test says that you have antibodies, the chance that you really do is just 62%. 

That’s a bit better than a coin toss, but not by much.

And if you’re relying on a positive test for reassurance that you’re safe from infection and can return to work, it may not give quite the level of security you’re hoping for.

Of course, these numbers are just hypothetical, and one big unanswered question is how many people have actually had the coronavirus.

An early survey of New York City residents in April suggested that around 20% of the population had been infected and developed antibodies, according to Gov. Andrew Cuomo.

One chart that summarizes the accuracy of antibody screening

If we run the same calculation with a hypothetical test that has 95% sensitivity and specificity, but assume a 20% prevalence rate, the positive predictive value jumps to a much better 83%. Still far from perfect, but a big step up from our near coin toss.
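With the ppv sketch from earlier, that jump is a one-argument change:

```python
print(f"{ppv(0.95, 0.95, 0.08):.0%}")  # 62% at an 8% prevalence
print(f"{ppv(0.95, 0.95, 0.20):.0%}")  # 83% at a 20% prevalence
```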

New York is using a test from BioReference with a listed sensitivity of 91.2% and specificity of 97.3% for some of the screenings.

To see the relationship between sensitivity, specificity, and prevalence, the chart below shows the positive predictive value you get at different underlying shares of the population who actually have coronavirus antibodies, for the hypothetical test above and for two tests actually in use.

One is the BioReference test mentioned above, with its listed sensitivity of 91.2% and specificity of 97.3%; the other, from Abbott Laboratories, has a sensitivity of 100% and a specificity of 99%. Even the very low false positive rate of 1% for the latter still means that a positive test isn’t a slam dunk if the underlying prevalence is very low:

[Chart: positive predictive value at different prevalence levels for the hypothetical 95%/95% test, the BioReference test, and the Abbott Laboratories test]
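A rough numerical version of that chart can be generated with the same ppv sketch from earlier, using the figures quoted above (the loop and labels here are just illustrative):

```python
tests = {
    "hypothetical": (0.95, 0.95),    # 95% sensitivity / 95% specificity
    "BioReference": (0.912, 0.973),
    "Abbott": (1.00, 0.99),
}

for prevalence in (0.01, 0.05, 0.10, 0.20, 0.30):
    cells = ", ".join(
        f"{name}: {ppv(sens, spec, prevalence):.0%}"
        for name, (sens, spec) in tests.items()
    )
    print(f"prevalence {prevalence:.0%} -> {cells}")
```

Even with Abbott’s near-perfect numbers, at a 1% prevalence only about half of the positive results would be true.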
