
Review 2: "Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online"

Paying attention to the accuracy of information increases sharing discernment on social media, which reduces the spread of misinformation online. Both reviewers found the paper potentially informative, but one reviewer raised concerns about the claims made given the study's methodology.

Published on May 07, 2021
This Pub is a Review of
Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online

Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for whom these prompts are more versus less effective. In a large survey experiment examining participants’ intentions to share true and false headlines about COVID-19, we identify a variety of different accuracy prompts that successfully increase sharing discernment across a wide range of demographic subgroups while maintaining user autonomy.

Research questions

  • There is mounting evidence that inattention to accuracy plays an important role in the spread of misinformation online. Here we examine the utility of a suite of different accuracy prompts aimed at increasing the quality of news shared by social media users.
  • Which approaches to shifting attention towards accuracy are most effective?
  • Does the effectiveness of the accuracy prompts vary based on social media user characteristics? Assessing effectiveness across subgroups is practically important for examining the generalizability of the treatments, and is theoretically important for exploring the underlying mechanism.

Essay summary

  • Using survey experiments with N=9,070 American social media users (quota-matched to the national distribution on age, gender, ethnicity, and geographic region), we compared the effect of different treatments designed to induce people to think about accuracy when deciding what news to share. Participants received one of the treatments (or were assigned to a control condition), and then indicated how likely they would be to share a series of true and false news posts about COVID-19.
  • We identified three lightweight, easily implementable approaches that each increased sharing discernment (the quality of news shared, measured as the difference in sharing probability of true versus false headlines) by roughly 50%, and a slightly more lengthy approach that increased sharing discernment by close to 100%. We also found that another approach that seemed promising ex ante (descriptive norms) was ineffective. Furthermore, gender, race, partisanship, and concern about COVID-19 did not moderate effectiveness, suggesting that the accuracy prompts will be effective for a wide range of demographic subgroups. Finally, helping to illuminate the mechanism behind the effect, the prompts were more effective for participants who were more attentive, reflective, engaged with COVID-related news, concerned about accuracy, college-educated, and middle-aged.
  • From a practical perspective, our results suggest a menu of accuracy prompts that are effective in our experimental setting and that technology companies could consider testing on their own services.

RR:C19 Evidence Scale rating by reviewer:

  • Potentially informative. The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those from the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should consider this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory. Decision-makers should not consider this actionable, unless the weaknesses are clearly understood and there is other theory and evidence to further support it.



1.     Please expand your abstract. The information currently provided in the abstract is vague and unclear. What type of news is the toolkit evaluating? Health-related, pandemic-related, or personal news?

2.     Did the authors validate or pilot test the tool they developed before applying the intervention? Please provide more details on its validation.

3.     How is your approach appropriate? Was there a standard or other established method that you used as a comparison?

4.     It is unclear what counts as more versus less effective. How was the toolkit evaluated, and what cut-off on the scale was used to estimate effectiveness?

5.     The conclusion is justified by future implications rather than by the reported results.

6.     How was participants' consent obtained, and what was the process for randomizing the study sample?

7.     The methodology is not clear; more information is needed concerning sample size, validation of the study, inclusion and exclusion criteria, and accuracy measurement.

8.     Is the study period long enough to test the hypothesis?

9.     The methodology should be divided into sections so that the process can be better understood.
