The Relationship Between Response Propensity and Data Quality in the Current Population Survey and the American Time Use Survey
Abstract
An important theoretical question in survey research over the past fifty years has been: How does bringing in late or reluctant respondents affect total survey error? Do the effort and expense of obtaining interviews from difficult-to-contact or reluctant respondents significantly decrease the nonresponse error of survey estimates? Or do these late respondents introduce enough measurement error to offset any reductions in nonresponse bias? This dissertation attempted to address these questions by examining nonresponse and data quality in two national household surveys--the Current Population Survey (CPS) and the American Time Use Survey (ATUS). Response propensity models were first developed for each survey, and busyness and social capital explanations of nonresponse were evaluated in light of the results. Using respondents' predicted probabilities of response, simulations were carried out to examine whether nonresponse bias was linked to response rates. Next, data quality in each survey was assessed by a variety of indirect indicators of response error--e.g., item missing-data rates, round-value reports, and interview-reinterview response inconsistencies--and the causal roles of various household, respondent, and survey design attributes in the level of reporting error were explored. The principal analyses investigated the relationship between response propensity and the data quality indicators in each survey, and examined the effects of potential common causal factors when there was evidence of covariation. The implications of the findings from this study for survey practitioners and for nonresponse and measurement error studies are discussed.
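To illustrate the kind of propensity-based simulation described above, the sketch below shows one common way such an analysis can be set up; it is not the dissertation's actual code, and the data frame, column names ("responded", the outcome column), and covariate list are hypothetical placeholders. The idea is to fit a response-propensity model, then mimic lower response rates by retaining only the highest-propensity respondents and observing how a survey estimate drifts from the full-respondent benchmark.

```python
# Minimal sketch of a response-propensity simulation (assumed setup, not the
# author's method): fit a logistic response-propensity model, then drop the
# lowest-propensity respondents to simulate lower response rates and measure
# how an outcome estimate shifts relative to the full respondent pool.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_simulation(df, covariates, response_col, outcome_col,
                          rates=(0.9, 0.8, 0.7, 0.6, 0.5)):
    """Return the simulated estimate and bias at each hypothetical response rate."""
    # 1. Response-propensity model fit on the full sample
    #    (respondents and nonrespondents).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[response_col])
    df = df.assign(propensity=model.predict_proba(df[covariates])[:, 1])

    # 2. Benchmark: outcome estimate from all actual respondents.
    respondents = df[df[response_col] == 1].sort_values("propensity",
                                                        ascending=False)
    full_estimate = respondents[outcome_col].mean()

    # 3. Simulate lower response rates by keeping only the respondents with
    #    the highest predicted propensity (the "easy" cases).
    rows = []
    for rate in rates:
        kept = respondents.head(int(rate * len(respondents)))
        rows.append({"simulated_rate": rate,
                     "estimate": kept[outcome_col].mean(),
                     "bias_vs_full": kept[outcome_col].mean() - full_estimate})
    return pd.DataFrame(rows)
```

Under this setup, a flat bias column across falling simulated rates would suggest that chasing reluctant respondents buys little reduction in nonresponse bias, while a growing bias column would suggest the opposite.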