COVID-19 Lockdown and Online Surveys

Cover: covid_vendors2.png
Short description: Response quality from online survey panels declined in 2020.
Tags: Data Quality, Survey Research
For: American Association for Public Opinion Research
Year: 2022

TL;DR

The COVID-19 pandemic had a profound impact on numerous areas of academic and industry research, including survey research. In this paper, we explore how the U.S. lockdown coincided with a decline in the response quality of survey data collected from online panels. Poor-quality survey data reduce the reliability of our estimates and, ultimately, of our assertions about what we know. To illustrate this, we analyze a multi-wave cross-sectional survey conducted on Amazon Mechanical Turk (MTurk), Lucid, and Prolific from November 2019 to June 2020. Our findings reveal a sharp decline in response quality in the waves fielded after the mid-March 2020 lockdown. We emphasize the importance of using a variety of metrics to track response quality when using online panels.

Study Overview

| Wave | Date | Vendor(s) | Participants | Compensation |
| --- | --- | --- | --- | --- |
| 1 (Pilot) | Nov. 15, 2019 | MTurk, Lucid | 166 | $1.75 |
| 2 | Jan. 22-23, 2020 | MTurk, Lucid, Prolific | 598 | $2.00 |
| 3 | Feb. 4-5, 2020 | MTurk, Lucid, Prolific | 601 | $2.00 |
| 4 | Apr. 16-17, 2020 | MTurk, Lucid, Prolific | 600 | $2.00 |
| 5 | May 20-21, 2020 | MTurk, Lucid, Prolific | 599 | $2.50 |
| 6 | Jun. 15-16, 2020 | MTurk, Lucid, Prolific | 600 | $2.50 |

The 10-minute survey contained questions on politics and current events as well as standard demographic measures. It also featured an experiment in which participants read a vignette and then reported their opinions about it. Although the experimental design was modified slightly between waves, the quality check items remained consistent.

The six quality checks included in the design were as follows (a sketch of the corresponding flagging logic appears after the list):

  1. Attention checks: Two were included: (1) the respondent was asked to select “Strongly disagree” for the third item in an agree-disagree grid; (2) the respondent was asked to respond to the statement “I have never used the internet” with True or False. The respondent received one flag for each attention check they failed.
  2. Inconsistent responses: The respondent was asked their age (in years) at the beginning of the survey and their year of birth at the end of the survey. If the two answers implied ages more than 2 years apart, the respondent received a flag.
  3. Open-ended answers: The open-ended question was: “What do you think is the most important issue facing the U.S. today?” Respondents received a flag for non-response; “don’t know” or similar; an irrelevant answer; gibberish or keyboard mashing; or copy-pasted text.
  4. Speeding: The respondent received a flag if they completed the survey in less than half the median response time.
  5. Straightlining: The respondent received a flag if they selected the same response option for every item in a question grid, despite some grid items being reverse-coded.
  6. Drop-offs: If the respondent exited the survey before the end or clicked through to the final page without responding to any questions, they received a flag for ‘incomplete.’
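
Taken together, these checks map naturally onto a per-respondent flag matrix. The sketch below shows one way to implement that flagging logic in Python with pandas; every column name (`attention_1`, `birth_year`, `duration_sec`, the `grid_` and `q_` prefixes, and so on) is a hypothetical stand-in for the survey export, not the paper's published schema.

```python
import pandas as pd

def flag_respondents(df: pd.DataFrame, survey_year: int = 2020) -> pd.DataFrame:
    """Return one boolean column per quality flag.

    All column names referenced here are hypothetical stand-ins for the
    survey export, not the paper's actual variable names.
    """
    flags = pd.DataFrame(index=df.index)

    # 1. Attention checks: one flag per failed check.
    flags["attention_1"] = df["attention_1"] != "Strongly disagree"
    flags["attention_2"] = df["attention_2"] != "False"  # correct answer to "I have never used the internet"

    # 2. Inconsistent responses: flag if reported age and the age implied
    #    by year of birth differ by more than 2 years.
    implied_age = survey_year - df["birth_year"]
    flags["inconsistent"] = (df["age"] - implied_age).abs() > 2

    # 3. Open-ended answers require manual or heuristic coding; assume the
    #    result of that coding is already stored as a boolean column.
    flags["open_ended"] = df["open_ended_flagged"]

    # 4. Speeding: completed in less than half the median response time.
    flags["speeding"] = df["duration_sec"] < 0.5 * df["duration_sec"].median()

    # 5. Straightlining: the same response option across every grid item,
    #    despite some items being reverse-coded.
    grid = df.filter(like="grid_")
    flags["straightlining"] = grid.nunique(axis=1) == 1

    # 6. Drop-offs: exited early, or reached the final page having answered
    #    none of the substantive ("q_"-prefixed) items.
    answered_nothing = df.filter(like="q_").isna().all(axis=1)
    flags["incomplete"] = ~df["finished"].astype(bool) | answered_nothing

    return flags
```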

Study Findings

For all quality metrics except drop-offs, a larger share of the panel sample was flagged following the U.S. lockdown in mid-March 2020. In May and June 2020, an average of 21-22% of the sample received two or more flags for response quality issues, compared with an average of 5-6% in January and February of that year. Response quality declined most noticeably for the Lucid panel, where more than a third of respondents were flagged in June 2020.
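
Given a flag matrix like the one sketched above, the headline shares (the percentage of respondents with two or more flags, by wave and vendor) reduce to a short aggregation; the `wave` and `vendor` columns are again assumed:

```python
# Share of respondents with two or more quality flags, by wave and vendor.
flags = flag_respondents(df)
two_plus = flags.sum(axis=1) >= 2
share = two_plus.groupby([df["wave"], df["vendor"]]).mean().mul(100).round(1)
print(share)  # percentages per wave x vendor cell
```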

The figures below display the breakdown for each of the six quality metrics by online panel. Failed attention checks and speeding increased most after lockdown, especially in the Lucid sample. Prolific saw the smallest decline in response quality of the three panels, though it was still substantial. Unexpectedly, however, the proportion of drop-offs (respondents exiting the survey early) declined in the waves following lockdown.
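
For readers working from raw panel data rather than the published figures, a per-metric breakdown along these lines could be plotted as follows (again a sketch that reuses the hypothetical flag matrix from above):

```python
import matplotlib.pyplot as plt

# Mean flag rate for each quality metric, split by wave and vendor.
rates = flags.join(df[["wave", "vendor"]]).groupby(["wave", "vendor"]).mean()

metrics = rates.columns
fig, axes = plt.subplots(2, (len(metrics) + 1) // 2, figsize=(14, 6), sharey=True)
for i, (ax, metric) in enumerate(zip(axes.flat, metrics)):
    rates[metric].unstack("vendor").plot(ax=ax, marker="o", legend=(i == 0))
    ax.set_title(metric)
    ax.set_xlabel("Wave")
for ax in axes.flat[len(metrics):]:
    ax.set_visible(False)  # hide any unused panel
fig.suptitle("Flag rate by quality metric, wave, and panel")
fig.tight_layout()
plt.show()
```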