Name File Type Size Last Modified
Precision and Disclosure in Text and Voice Interviews on Smartphones Codebook application/pdf 862.4 KB 05/03/2015 11:53 AM
Precision_and_Disclosure_in_Text_and_Voice_Interviews_on_Smartphones_Dataset text/csv 610.7 KB 05/03/2015 11:59 AM
Precision_and_Disclosure_in_Text_and_Voice_Interviews_on_Smartphones_Dataset text/plain 610.7 KB 05/03/2015 11:59 AM

Project Citation: 

Conrad, Frederick G., and Schober, Michael F. Precision and Disclosure in Text and Voice Interviews on Smartphones: 2012 [United States]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2015-05-03. https://doi.org/10.3886/E31912V1

Project Description

Summary: As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. This dataset contains 1,282 cases: 634 cases that completed an interview and 648 cases that were invited to participate but did not start or did not complete an interview on their iPhone. Participants were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched.

The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data than voice interviews, with both human and automated interviewers: fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information. Text respondents also reported a strong preference for future interviews by text.

The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
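The two answer-quality measures described above can be illustrated with a minimal sketch. This is not the authors' actual coding (the codebook defines the study's own variables and rules); it assumes a simple heaping heuristic (multiples of 5 count as "rounded") and a distinct-values share as a differentiation index.

```python
def rounding_rate(answers):
    """Fraction of numerical answers that look rounded (multiples of 5).

    A common heaping heuristic; the study's own coding may differ.
    """
    numeric = [a for a in answers if a is not None]
    if not numeric:
        return 0.0
    rounded = sum(1 for a in numeric if a % 5 == 0)
    return rounded / len(numeric)


def differentiation(battery):
    """Share of distinct values used across a battery of same-scale items.

    1.0 means every answer differs; low values suggest straightlining.
    """
    if not battery:
        return 0.0
    return len(set(battery)) / len(battery)


# Hypothetical examples: heaped numerical answers and a straightlined battery.
print(rounding_rate([30, 42, 55, 100]))  # 3 of 4 are multiples of 5 -> 0.75
print(differentiation([3, 3, 3, 3]))     # one distinct value on 4 items -> 0.25
```

Lower rounding rates and higher differentiation are the directions in which text interviews outperformed voice interviews in this study.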
Funding Sources: National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1026225); National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1025645)

Scope of Project

Subject Terms: survey interviewing; rounding; automated interviewing; response rates; straightlining; completion; nondifferentiation; satisficing; iPhone; breakoff; sensitive questions; data quality; text message interviewing; survey methodology; IVR; precision; text message; speech IVR; heaping; nonresponse; interview satisfaction; multitasking; mobile devices; disclosure; smartphone; SMS; mode comparison
Geographic Coverage: United States
Time Period(s): 3/28/2012 – 5/3/2012 (March – May 2012)



This material is distributed exactly as it arrived from the data depositor. ICPSR has not checked or processed this material. Users should consult the investigator(s) if further information is desired.