
Review 1: "The Prevalence of SARS-CoV-2 Infection and Uptake of COVID-19 Antiviral Treatments During the BA.2/BA.2.12.1 Surge, New York City, April-May 2022"

Overall, reviewers note that the manuscript doesn't report details such as response rates or seem to control for potential confounders, which limits the manuscript's believability.

Published on Jul 06, 2022
This Pub is a Review of
The prevalence of SARS-CoV-2 infection and uptake of COVID-19 antiviral treatments during the BA.2/BA.2.12.1 surge, New York City, April-May 2022
Description

Abstract

Importance: Routine case surveillance data for SARS-CoV-2 are incomplete, biased, missing key variables of interest, and may be unreliable both for timely surge detection and for understanding the burden of infection.

Objective: To determine the prevalence of SARS-CoV-2 infection during the Omicron BA.2/BA.2.12.1 surge in relation to official case counts, and to assess the epidemiology of infection and uptake of SARS-CoV-2 antivirals.

Design: Cross-sectional survey of a representative sample of New York City (NYC) adult residents, conducted May 7-8, 2022.

Setting: NYC, April 23-May 8, 2022, during which the official SARS-CoV-2 case count was 49,253 and BA.2.12.1 comprised 20% of reported cases.

Participants: A representative sample of 1,030 NYC adult residents ≥18 years.

Exposure(s): Vulnerability to severe COVID-19, including vaccination/booster status, prior COVID, age, and presence of comorbidities.

Main Outcome(s) and Measure(s): Prevalence of SARS-CoV-2 infection during a 14-day period, weighted to represent the NYC adult population. Respondents self-reported on SARS-CoV-2 testing (including at-home rapid antigen tests), testing outcomes, COVID-like symptoms, and contact with confirmed/probable cases. Individuals with SARS-CoV-2 were asked about awareness/use of antiviral medications.

Results: An estimated 22.1% (95% CI 17.9%-26.2%) of respondents had SARS-CoV-2 infection during the study period, corresponding to ∼1.5 million adults (95% CI 1.3-1.8 million). Prevalence was estimated at 34.9% (95% CI 26.9%-42.8%) among individuals with comorbidities, 14.9% (95% CI 11.0%-18.8%) among those 65+ years, and 18.9% (95% CI 10.2%-27.5%) among unvaccinated persons. Hybrid protection against severe disease (i.e., from both vaccination and prior infection) was 66.2% (95% CI 55.7%-76.7%) among those with COVID and 46.3% (95% CI 40.2%-52.2%) among those without. Among individuals with COVID, 55.9% (95% CI 44.9%-67.0%) were not aware of the antiviral nirmatrelvir/ritonavir (Paxlovid™), and 15.1% (95% CI 7.1%-23.1%) reported receiving it.

Conclusions and Relevance: The true magnitude of NYC’s BA.2/BA.2.12.1 surge was vastly underestimated by routine SARS-CoV-2 surveillance. Until there is more certainty that the impact of future pandemic surges on severe population health outcomes will be diminished, representative surveys are needed for timely surge detection and to estimate the true burden of infection, hybrid protection, and uptake of time-sensitive treatments.

RR:C19 Evidence Scale rating by reviewer:

  • Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.

***************************************

Review:

Summary and strengths

The true burden of COVID-19 infection at any point during the pandemic has always been difficult to estimate. Research that clarifies the true burden has become even more urgent since the advent and popularity of at-home testing in early 2022, as neither the tests nor their results are reported. Public health officials have been left with essentially no method of reliably calculating the true burden of infection, and thus no reliable data upon which to activate public health responses, which carry their own economic, productivity, and political costs. The preprint by Qasmieh and colleagues uses an imperfect but convenient and common sampling method to estimate testing behaviors and period prevalence during the 2022 Omicron BA.2 wave in New York. The authors use survey weighting methods to correct for a limited set of demographic differences between their respondents and the Census Bureau’s American Community Survey (ACS).

The authors calculate a 31-fold difference between actual BA.2 infections and the cases reported through official public health sources over the same period. This is slightly high, but not far off an expected multiplier from a back-of-the-envelope calculation: combining the pre-OTC/rapid antigen test (RAT) under-ascertainment estimate from NYC seroprevalence studies, a multiplier of roughly 6x (https://academic.oup.com/cid/article/73/10/1831/6152134), with an estimated RAT:NAAT ratio of 5:1 (5x) yields roughly a 30x multiplier. There is no authoritative source for the RAT:NAAT ratio, but during Omicron it may have ranged from 3:2 to 5:1 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8979595/); John Brownstein’s group at Harvard is doing some work in this area.
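The back-of-the-envelope calculation above can be made explicit. A minimal sketch, using the reviewer's assumed inputs (a ~6x pre-RAT seroprevalence multiplier and a RAT:NAAT ratio of 5:1; the cited 3:2-5:1 range is also swept):

```python
# Back-of-the-envelope check of the expected under-ascertainment multiplier.
# Both inputs are the reviewer's assumptions, not values from the preprint.
sero_multiplier = 6   # infections per reported case, pre-RAT NYC seroprevalence era
rat_to_naat = 5       # assumed RAT:NAAT ratio of 5:1 (unreported rapid tests per NAAT)

expected = sero_multiplier * rat_to_naat
print(expected)       # 30 -- close to the 31-fold difference the authors report

# Sensitivity to the uncertain RAT:NAAT ratio (cited range: 3:2 to 5:1)
for ratio in (1.5, 3.0, 5.0):
    print(ratio, sero_multiplier * ratio)
```

Even at the low end of the RAT:NAAT range, official counts would understate infections by roughly an order of magnitude, so the qualitative conclusion is not sensitive to the exact ratio.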


Limitations

Selection bias is a commonly encountered challenge with telephone-based sampling, and it has worsened over time, as borne out in erroneous polling projections across the past several U.S. election cycles; this effect may also be stronger for surveys on politically controversial topics, which sadly have come to include COVID-19, with educational attainment as a key confounder (https://www.pewresearch.org/methods/2017/05/15/what-low-response-rates-mean-for-telephone-surveys). The contact-mode (cell/landline) response rates could also introduce selection bias; other studies with similar designs have found that cell phone respondents answering internet surveys are more likely to be younger, higher income, better educated, and white (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4994958/). It would have been much more transparent if the authors had included the response rate to their text messages and robocalls -- overall, by demographics, and by contact mode (cell/landline) -- and had probed the extent to which their main outcomes differed, if at all, by communication modality. It is reasonable to imagine that persons with recent relevant experience with COVID, including infection and testing, would be more interested in participating in a survey about that experience; that said, this bias is at least partially mitigated because the topic of COVID is not actually mentioned until the first question is asked.

The case definition used by the authors is not consistent with official COVID-19 case definitions (https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2020/). For example, the authors appear to include “possible” cases of COVID in their prevalence estimates (this is not always 100% clear throughout), and their definition of “possible” is less stringent than the CDC definition of “probable,” which uses a more specific combination of clinical symptoms. Caution is therefore warranted when making direct comparisons of this study’s rates or percentages to official sources such as the NYC Department of Health data, which do not include such “possible” cases.


It is sometimes unclear in the tables which values are weighted and which are unweighted -- for example, the methods text states that the survey weights were applied to both the sample characteristics and the prevalence estimates, but in Table 1 it appears that only the prevalence estimates were weighted. Better in-table footnotes about weighting would have been appreciated throughout.

There are also no survey weighting factors to correct for key confounding variables such as vaccination status.

Regarding the ACS estimates: while weighting the survey population was an important way to increase the robustness of the results, the ACS estimates available only go up to 2019 and are therefore pre-pandemic. During the pandemic, large migrations of the population occurred between urban, suburban, and rural areas, especially in New York (https://comptroller.nyc.gov/reports/the-pandemics-impact-on-nyc-migration-patterns/), which could have affected the accuracy of the weights used.
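To see why stale population margins matter, consider a hypothetical post-stratification by age. All of the numbers below are invented for illustration (they are not from the study or the ACS); only the total sample size of 1,030 matches the survey:

```python
# Hypothetical illustration of how pre-pandemic ACS margins can shift
# post-stratification weights. All shares and counts are invented.
sample_counts = {"18-44": 400, "45-64": 400, "65+": 230}     # respondents per age group
acs_2019      = {"18-44": 0.50, "45-64": 0.32, "65+": 0.18}  # pre-pandemic population shares
shifted       = {"18-44": 0.46, "45-64": 0.33, "65+": 0.21}  # hypothetical post-migration shares

n = sum(sample_counts.values())  # 1030, matching the survey's sample size
for g in sample_counts:
    w_old = acs_2019[g] * n / sample_counts[g]  # weight using stale 2019 margins
    w_new = shifted[g] * n / sample_counts[g]   # weight if margins had been updated
    print(g, round(w_old, 3), round(w_new, 3))
```

If infection prevalence differs across the groups whose population shares shifted, the stale weights would bias the weighted prevalence estimate in the direction of the over-weighted groups.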

All results described in this study are bivariate associations. The study would have benefited from some kind of adjustment model, such as survey-weighted logistic regression with a positive COVID test as the outcome. This would allow simultaneous adjustment for the various variables shown to be statistically associated with higher prevalence, which would help characterize which explanatory factors may in fact be driving other apparent ones found in the simple bivariate cross-tabulations. Although such a model is not strictly necessary, and might be fit for a future paper, the lack of adjustment makes the types and directionality of the associations reported in this paper confusing and difficult to interpret.
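A minimal sketch of the kind of adjustment model suggested above, fit by Newton-Raphson on a weighted log-likelihood. The data, effect sizes, and weights are all simulated stand-ins (not from the study); in practice the study's post-stratification weights, respondent covariates, and a design-based variance estimator would be used:

```python
import numpy as np

# Sketch: survey-weighted logistic regression via Newton-Raphson.
# Simulated data only -- covariate names and effects are hypothetical.
rng = np.random.default_rng(0)
n = 1000
vaccinated = rng.integers(0, 2, n)
comorbidity = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), vaccinated, comorbidity])  # intercept + covariates
true_beta = np.array([-1.0, -0.4, 0.7])                     # invented log-odds effects
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))       # positive-test outcome
w = rng.uniform(0.5, 2.0, n)                                # stand-in survey weights

beta = np.zeros(3)
for _ in range(25):  # Newton-Raphson on the weighted log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (w * (y - p))                         # weighted score
    hess = -(X * (w * p * (1 - p))[:, None]).T @ X     # weighted Hessian
    beta = beta - np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # adjusted log-odds, each controlling for the others
```

The point of such a model is the last line: each coefficient is an association adjusted for the other covariates simultaneously, rather than a series of unadjusted bivariate cross-tabulations.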


