Name  File Type  Size  Last Modified
Codebook_PredictorsOfWillingnessToParticipateInSurveyInterviewsConductedByLiveVideo.pdf  application/pdf  165.1 KB  11/02/2022 06:06 PM
Dataset_PredictorsOfWillingnessToParticipateInSurveyInterviewsConductedByLiveVideo.csv  text/csv  582 KB  11/02/2022 06:05 PM
Table3_and_4.R  text/x-rsrc  3.7 KB  11/02/2022 06:07 PM

Project Citation: 

Schober, Michael F., and Conrad, Frederick G. Predictors of Willingness to Participate in Survey Interviews Conducted by Live Video, August 2021 [United States]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2022-11-02. https://doi.org/10.3886/E181584V1

Project Description

Summary: As people increasingly communicate via live video, especially since the COVID-19 pandemic, how willing will they be to participate via video in the large-scale standardized surveys that inform public policy, many of which have historically been carried out in person? This registered report tests three potential predictors of willingness to participate in a live video interview: how (1) easy, (2) useful, and (3) enjoyable respondents find live video in other contexts. A potential survey-specific moderator of these effects is also tested: the extent to which respondents report that they would be uncomfortable answering a particular question on a sensitive topic via live video relative to other survey modes. In the study, 598 online US respondents rated their willingness to take part in a hypothetical live video survey that might ask about personal information, alongside rating their willingness to take part in four other survey modes: two interviewer-administered (in-person and telephone) and two self-administered (a text-only web survey and a “prerecorded-video” web survey in which respondents play videos of interviewers reading questions and then enter their answers). Findings demonstrate that willingness to participate in a live video interview is significantly predicted by the extent to which respondents perceive live video as useful and enjoyable in other contexts, and by their relative discomfort disclosing via live video versus other modes.
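
For orientation, here is a minimal R sketch of the kind of model the summary describes, using hypothetical placeholder variable names; the deposited Table3_and_4.R script contains the actual analysis, and the codebook PDF documents the real variable names:

    # Sketch only: the outcome and predictor names (willing_video, ease,
    # useful, enjoy, rel_discomfort) are placeholders, not the deposited
    # column names; see the codebook PDF and Table3_and_4.R for those.
    dat <- read.csv("Dataset_PredictorsOfWillingnessToParticipateInSurveyInterviewsConductedByLiveVideo.csv")
    fit <- lm(willing_video ~ ease + useful + enjoy + rel_discomfort, data = dat)
    summary(fit)  # focal tests: coefficients on useful, enjoy, rel_discomfort
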
Funding Sources: National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1825194); National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1825113)

Scope of Project

Subject Terms: survey interview mode; live video; technology acceptance; participation; sensitive questions
Geographic Coverage: United States
Time Period(s): 8/23/2021 – 8/24/2021
Collection Date(s): 8/23/2021 – 8/24/2021
Universe: People 18 and over
Data Type(s): survey data
Collection Notes: 600 online panelists were recruited through CloudResearch Prime Panels to match 2018 US Current Population Survey distributions on age (< 65, >= 65), gender (male, female), race (White, non-White), and education (<= high school, > high school). Recruits were instructed to respond on their own device at a time and place of their choosing. They were first presented with consent language approved by The New School Human Research Protection Program (protocol 2020-124) and proceeded to the study only if they agreed. All recruits were asked an age screening question; to be eligible, a recruit had to be 18 years of age or older. Participants who completed the survey were compensated in the amount they had agreed to with the platform through which Prime Panels recruited them into the study.

Methodology

Response Rate: The final sample size from our non-probability sample source is reported here. The percentages below should not be confused with AAPOR standard response rates; they are more appropriately characterized as a “participation rate” (The American Association for Public Opinion Research 2015) or “completion rate” (Callegaro and DiSogra 2008). We do not know how many people were exposed to the study invitation(s), only how many were issued unique links to the survey instrument after responding to an invitation, indicating interest in participating.
  • 909 participants entered the Prime Panels pre-survey quality-control system (Sentry)
    • 176 failed / 38 dropped
  • 695 participants passed Sentry and entered our survey
  • 598 participants completed the full survey
The corresponding percentages are: 
  • 76.46% started the survey (695 starts / 909 links issued)
  • 86.04% completion rate among those who started (598 completes / 695 starts)
  • 65.79% completion rate among those issued links (598 completes / 909 links issued)
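These percentages can be reproduced directly from the counts above; a quick check in R:

    # Reproduce the reported rates from the counts above
    issued    <- 909  # unique links issued (entered Sentry)
    started   <- 695  # passed Sentry and entered the survey
    completed <- 598  # completed the full survey
    round(100 * started   / issued,  2)  # 76.46: starts per link issued
    round(100 * completed / started, 2)  # 86.04: completes per start
    round(100 * completed / issued,  2)  # 65.79: completes per link issued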
Sampling: A non-probability sample of adults was recruited through CloudResearch Prime Panels (https://www.cloudresearch.com/) to match demographic distributions from the 2018 US Current Population Survey.

The recruited sample was not intended to be representative of the US population.
Data Source: The data for this project were collected in an original survey of adults conducted in August 2021.
Collection Mode(s): web-based survey
Scales: 11 Likert-type items asking people about their perceptions of participating in video-mediated interactions were adapted from Venkatesh and Davis’ (2000) studies testing the Technology Acceptance Model across five different technologies in the workplace. Four items tested perceived ease of use, four tested perceived usefulness, and three tested perceived enjoyment. For these questions, responses were given on a 7-point scale with the labeled categories “Strongly agree,” “Agree,” “Somewhat agree,” “Neither agree nor disagree,” “Somewhat disagree,” “Disagree,” and “Strongly disagree.”

Questions about respondents’ demographic characteristics (gender, Hispanicity, race/ethnicity, and education) were drawn from the 2020 US Decennial Census, using the Census Bureau’s categorical response options and adding a “rather not say” option as well as a nonbinary gender category. Age and ZIP code were asked in an open numeric-response format.
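
For illustration, here is a hedged R sketch of how the labeled agreement categories above might be recoded to 1–7 numeric scores and averaged into the three subscales; the item column names used here are assumptions for the sketch, not the deposited variable names (consult the codebook PDF for those):

    # Recode the 7-point agreement labels to numeric scores (1..7) and form
    # subscale means. Item names ease_1..ease_4, useful_1..useful_4, and
    # enjoy_1..enjoy_3 are hypothetical placeholders.
    dat <- read.csv("Dataset_PredictorsOfWillingnessToParticipateInSurveyInterviewsConductedByLiveVideo.csv")
    likert_levels <- c("Strongly disagree", "Disagree", "Somewhat disagree",
                       "Neither agree nor disagree",
                       "Somewhat agree", "Agree", "Strongly agree")
    to_score <- function(x) as.integer(factor(x, levels = likert_levels))
    item_cols <- grep("^(ease|useful|enjoy)_", names(dat), value = TRUE)
    dat[item_cols] <- lapply(dat[item_cols], to_score)
    dat$ease   <- rowMeans(dat[grep("^ease_",   names(dat))], na.rm = TRUE)
    dat$useful <- rowMeans(dat[grep("^useful_", names(dat))], na.rm = TRUE)
    dat$enjoy  <- rowMeans(dat[grep("^enjoy_",  names(dat))], na.rm = TRUE)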

Weights: No weights were calculated, as our primary goal was to test predictors of mode preferences, not to generalize to a population beyond our convenience sample. In addition, there was no evidence that nonresponse affected the composition of the respondent sample, so no nonresponse adjustment was applied.
Unit(s) of Observation: People 18 and older
Geographic Unit: United States


This material is distributed exactly as it arrived from the data depositor. ICPSR has not checked or processed this material. Users should consult the investigator(s) if further information is desired.