Question: In a place like DC, where the positivity rate has hovered in the 1–3% range, what percentage of the reported positive tests are likely to be false positives? Is there any accounting for that in the reporting? And if I were to get a positive result without having had direct exposure (say I tested after flying just to be sure), what are the odds that the result is a true positive?
Answer: Good questions! On the false positivity front, the answer depends on a few variables:
- Test you’re using — PCR tests have higher sensitivity and specificity (they are more accurate) than antigen tests. (see Q&A of 9/4 #Test Types)
- True incidence within the population being tested — the higher the true incidence among the people being tested, the lower the proportion of positive results that are false positives. (see Q&A of 4/15 #Sensitivity)
- Timing of when you get tested — No test can detect the virus if you get tested very soon after exposure. Antigen tests are less likely to detect the infection if you are tested too late in the course of the disease. And PCR tests can keep detecting viral genetic material long after your body has cleared the infection. (see Q&A of 10/24 #Antigen)
- Your health history — If you have symptoms or have recently had a known exposure to someone infected with COVID, the chance that a positive result is a false positive is much lower.
- Specimen quality and lab quality — Did the health provider correctly conduct the nasal/throat swab? Did the laboratory follow all quality controls?
When it comes to DC, it looks like molecular (PCR) tests are still the main tests being used. The Foundation for Innovative New Diagnostics (FIND) has been independently evaluating the sensitivity/specificity of COVID-19 molecular (PCR) tests, and modelers who recently published in Lancet Infectious Diseases pooled these results to estimate that molecular tests have a sensitivity at symptom onset of 90% (range: 80%–95%) and a specificity of 100%. A specificity of 100% seems overly optimistic, so I’m going to go with 99%, which is well within the 95% confidence interval. Reminder: sensitivity is the ability of the test to correctly identify people who are actually infected; specificity is the ability of the test to correctly identify people who are actually not infected.

Table 1 shows how the false positive rate changes under different levels of incidence; if we estimate that true incidence among people getting tested in DC is 3%, then we’d expect about 27% of positive results to be false positives. However, keep in mind that these sensitivity/specificity estimates are based on testing at symptom onset. Because lots of folks are *not* getting tested at symptom onset, because a substantial proportion of cases remain asymptomatic, and because of the many other variables listed above, it’s really hard to estimate the proportion of reported cases in DC that are actually false positives. And as I said before, you’re very likely to get a negative test if you test too early in the course of infection.
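If you’d like to see where a number like that comes from, here’s a minimal sketch of the arithmetic, assuming the 90% sensitivity and 99% specificity figures above and 1,000 hypothetical people tested (illustrative assumptions, not DC data; the exact figure shifts a bit with rounding):

```python
# Minimal sketch: what share of positive PCR results would be false positives,
# assuming 90% sensitivity, 99% specificity, and 3% true incidence among
# 1,000 hypothetical people tested. (Illustrative assumptions, not DC data.)

def false_positive_share(incidence, sensitivity=0.90, specificity=0.99, n_tested=1000):
    """Fraction of positive results expected to be false positives."""
    infected = n_tested * incidence
    uninfected = n_tested - infected
    true_positives = infected * sensitivity           # 27 at 3% incidence
    false_positives = uninfected * (1 - specificity)  # ~10 at 3% incidence
    return false_positives / (true_positives + false_positives)

print(f"{false_positive_share(0.03):.0%}")  # prints 26%, roughly the ~27% quoted above
```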
In the aggregate, as incidence increases (see the 7% and 10% scenarios below), the numbers of false positives and false negatives begin to balance each other out, and as incidence increases beyond 10%, the number of false negatives outweighs the number of false positives. To my knowledge, no state accounts for false negatives/positives in its reporting. In my opinion, that’s for the best, since so many assumptions are required to make false positive/negative estimates.
Table 1. PCR Testing False Positive Rates Under Different Incidence Assumptions.
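For anyone who wants to poke at these numbers themselves, here’s a minimal sketch that generates Table 1–style figures under the same assumed 90% sensitivity and 99% specificity, and shows where false negatives start to outnumber false positives (the incidence levels and the 1,000-person denominator are illustrative assumptions, not DC data):

```python
# Minimal sketch: false positives vs. false negatives per 1,000 tests at
# different assumed incidence levels, using 90% sensitivity and 99% specificity.

SENSITIVITY = 0.90
SPECIFICITY = 0.99
N_TESTED = 1000  # hypothetical number of people tested

print("incidence | false pos | false neg | % of positives that are false")
for incidence in (0.01, 0.03, 0.07, 0.10, 0.15):
    infected = N_TESTED * incidence
    uninfected = N_TESTED - infected
    true_pos = infected * SENSITIVITY
    false_neg = infected * (1 - SENSITIVITY)
    false_pos = uninfected * (1 - SPECIFICITY)
    false_pos_share = false_pos / (true_pos + false_pos)
    print(f"{incidence:>9.0%} | {false_pos:>9.1f} | {false_neg:>9.1f} | {false_pos_share:>29.0%}")
```

Under these assumptions, false positives dominate at low incidence (about half of all positives are false when incidence is 1%), the two roughly balance somewhere around 9–10% incidence, and false negatives take over above that.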