
Review 1: "Did People Really Drink Bleach to Prevent COVID-19? A Tale of Problematic Respondents and a Guide for Measuring Rare Events in Survey Data"

Published on Apr 14, 2022
Review 1: "Did People Really Drink Bleach to Prevent COVID-19? A Tale of Problematic Respondents and a Guide for Measuring Rare Events in Survey Data"
1 of 2
This Pub is a Review of
Did people really drink bleach to prevent COVID-19? A tale of problematic respondents and a guide for measuring rare events in survey data
Description

Abstract

Society is becoming increasingly dependent on survey research. However, surveys can be impacted by participants who are non-attentive, respond randomly to survey questions, and misrepresent who they are and their true attitudes. The impact that such respondents can have on public health research has rarely been systematically examined. In this study we examine whether Americans began to engage in dangerous cleaning practices to avoid Covid-19 infection. Prior findings reported by the CDC have suggested that people began to engage in highly dangerous cleaning practices during the Covid-19 pandemic, including ingesting household cleansers such as bleach. In a series of studies totaling close to 1400 respondents, we show that 80-90% of reports of household cleanser ingestion are made by problematic respondents. These respondents report impossible claims such as ‘recently having had a fatal heart attack’ and ‘eating concrete for its iron content’ at a similar rate to ingesting household cleaners. Additionally, respondents’ frequent misreading or misinterpreting the intent of questions accounted for the rest of such claims. Once inattentive, mischievous, and careless respondents are taken out of the analytic sample we find no evidence that people ingest cleansers to prevent Covid-19 infection. The relationship between dangerous cleaning practices and health outcomes also becomes non-significant once problematic respondents are taken out of the analytic sample. These results show that reported ingestion of household cleaners and other similar dangerous practices are an artifact of problematic respondent bias. The implications of these findings for public health and medical survey research, as well as best practices for avoiding problematic respondents in surveys are discussed.

RR:C19 Evidence Scale rating by reviewer:

  • Strong. The main study claims are very well-justified by the data and analytic methods used. There is little room for doubt that the study produced has very similar results and conclusions as compared with the hypothetical ideal study. The study’s main claims should be considered conclusive and actionable without reservation.

***************************************

Review:

Although possibly a bit too long, this is an interesting paper that provides useful examples of the dangers of uncritically accepting self-report data for low-incidence events. It goes further, though, by demonstrating useful strategies for evaluating such self-reported events and by offering recommendations. The paper is also well reasoned, well written, and, most importantly, hypothesis driven. A few suggestions for further improving this contribution are listed below.

  • The description of the Chandler et al. (2019) screening method (page 17) is not at all clear. It would be helpful if example items could be presented.

  • The methods employed in Study 2 for demographic validation are also unclear. On page 27, the authors indicate that extreme outliers, defined as values 10+ standard deviations from population means, were used to identify problematic cases. How this would work with nominal-level measures is not clear. It would be helpful to describe these procedures more carefully, including a list of the specific socio-demographic variables examined (a sketch of what such an outlier screen involves, and where it breaks down for nominal measures, appears after this list).

  • Can more information be provided regarding the national samples employed? Were these drawn from survey panels? If so, were the panels probability- or non-probability-based? What were the response rates for the surveys, and could those rates be reported using the standard formulas provided by the American Association for Public Opinion Research (AAPOR) or some similar group? (One such formula is transcribed after this list for reference.)

  • Could it also be clarified which analyses are based on weighted versus unweighted survey data? Could it further be clarified whether the study was reviewed and approved by an IRB?

  • Although the paper’s general reflections are very much appreciated, the authors should be cautioned regarding their recommendation to use “validated” measures, at least without providing a clear definition. The literature is bursting with measures used in single studies, often with unrepresentative samples, in which a measurement scale is reported to have produced an acceptable alpha reliability coefficient. Many investigators erroneously consider such measures “validated” because they were once deemed adequate enough to be published and were shown to have some reliability. That is far from qualifying as validated, and I would urge the authors to think carefully about what they mean by “validated measure” and to make that meaning clear to readers (a sketch after this list makes the reliability-versus-validity distinction concrete).

  • The detective work in this paper demonstrating that there were no credible cases of cleaning fluid ingestion among survey respondents is very convincing. At the same time, there was a large increase in calls to poison centers nationally during the early months of the COVID-19 pandemic, suggesting that there might well have been actual cases in the general population. The authors might want to acknowledge this in their discussion.
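
On the demographic-validation point above, a minimal sketch may clarify what needs spelling out. All column names and population norms below are hypothetical placeholders, not variables or values taken from the paper under review; the sketch also shows where a standard-deviation screen is simply undefined for nominal measures.

```python
# Hypothetical sketch of an "extreme outlier" demographic screen.
# Column names and population norms are illustrative assumptions only.
import pandas as pd

POPULATION_NORMS = {          # column: (population mean, population SD)
    "age":             (47.0, 18.0),
    "household_size":  (2.5, 1.4),
    "years_education": (13.5, 3.0),
}
SD_CUTOFF = 10  # the paper's stated threshold: 10+ SDs from the mean

def flag_demographic_outliers(df: pd.DataFrame) -> pd.Series:
    """Mark respondents whose value on any continuous demographic lies
    10+ SDs from its population mean. This z-score logic is only
    defined for interval-level variables; nominal measures (race,
    region, etc.) have no mean or SD, which is exactly the gap the
    review asks the authors to address."""
    flagged = pd.Series(False, index=df.index)
    for col, (mean, sd) in POPULATION_NORMS.items():
        if col in df.columns:
            flagged |= ((df[col] - mean).abs() / sd) >= SD_CUTOFF
    return flagged
```

In practice, `flag_demographic_outliers(df)` would yield a boolean mask used to exclude or sensitivity-test the flagged cases.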
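On the response-rate point, the AAPOR standard definitions provide several formulas; the most conservative, Response Rate 1, is transcribed below as a reference for what such reporting would involve. The function is a direct transcription of the published formula, with final disposition counts as inputs.

```python
def aapor_rr1(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int) -> float:
    """AAPOR Response Rate 1: complete interviews (I) divided by all
    eligible cases -- interviews (I), partials (P), refusals and
    break-offs (R), non-contacts (NC), other eligible non-interviews
    (O) -- plus all cases of unknown eligibility (UH, UO)."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))
```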
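Finally, on the “validated measures” point: the alpha coefficient that often passes for validation is computable in a few lines, which is part of the problem. The sketch below implements the standard Cronbach's alpha formula (not code from the paper) to underline that it indexes consistency, not validity.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.
    Alpha indexes internal consistency only: a high value says the
    items covary, not that the scale measures what it claims to
    measure -- which is why alpha alone cannot 'validate' a measure."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```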
