Joint Program in Survey Methodology

Permanent URI for this community: http://hdl.handle.net/1903/2251

Search Results

Now showing 1 - 3 of 3
  • Understanding the Mechanism of Panel Attrition
    (2009) Lemay, Michael; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Nonresponse is of particular concern in longitudinal surveys (panels) for several reasons. Cumulative nonresponse over several waves can substantially reduce the proportion of the original sample that remains in the panel. Reduced sample size increases the variance of the estimates and reduces the possibility for subgroup analysis. Also, the higher the attrition, the greater the concern that error (bias) will arise in the survey estimates. The fundamental purpose of most panel surveys is to allow analysts to estimate dynamic behavior. However, current research on attrition in panel surveys focuses on the characteristics of respondents at wave 1 to explain attrition in later waves, essentially ignoring the role of life events as determinants of panel attrition. If the dynamic behaviors that panel surveys are designed to examine are also prompting attrition, estimates of those behaviors and correlates of those behaviors may be biased. Also, current research on panel attrition generally does not differentiate between attrition through non-contacts and attrition through refusals. As these two source of nonresponse have been shown to have different determinants, they can also be expected to have different impacts on data quality. The goal of this research is to examine these issues. Data for this research comes from the Panel Survey of Income Dynamics (PSID) conducted by the University of Michigan. The PSID is an ongoing longitudinal survey that began in 1968 and with a focus on the core topics of income, employment, and health.
  • The Relationship Between Response Propensity and Data Quality in the Current Population Survey and the American Time Use Survey
    (2007-04-26) Fricker, Scott; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    An important theoretical question in survey research over the past fifty years has been: How does bringing in late or reluctant respondents affect total survey error? Does the effort and expense of obtaining interviews from difficult-to-contact or reluctant respondents significantly decrease the nonresponse error of survey estimates? Or do these late respondents introduce enough measurement error to offset any reductions in nonresponse bias? This dissertation attempted to address these questions by examining nonresponse and data quality in two national household surveys--the Current Population Survey (CPS) and the American Time Use Survey (ATUS). Response propensity models were first developed for each survey, and busyness and social capital explanations of nonresponse were evaluated in light of the results. Using respondents' predicted probability of response, simulations were carried out to examine whether nonresponse bias was linked to response rates. Next, data quality in each survey was assessed by a variety of indirect indicators of response error--e.g., item missing-data rates, round-value reports, and interview-reinterview response inconsistencies--and the causal roles of various household, respondent, and survey-design attributes in the level of reporting error were explored. The principal analyses investigated the relationship between response propensity and the data quality indicators in each survey, and examined the effects of potential common causal factors when there was evidence of covariation. The implications of the findings from this study for survey practitioners and for nonresponse and measurement error studies are discussed.
  • STATISTICAL ESTIMATION METHODS IN VOLUNTEER PANEL WEB SURVEYS
    (2004-11-17) Lee, Sunghee; Valliant, Richard; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Data collected through Web surveys generally do not come from traditional probability-based sample designs. Therefore, the inferential techniques used for probability samples are not guaranteed to be correct for Web surveys without adjustment, and estimates from these surveys are likely to be biased. However, research on the statistical aspects of Web surveys is lacking relative to research on other aspects. Propensity score adjustment (PSA) has been suggested as an alternative for statistically surmounting inherent problems, namely nonrandomized sample selection, in volunteer Web surveys. However, there is minimal evidence of its applicability and performance, and the implications are not conclusive. Moreover, PSA does not take into account problems arising from uncertain coverage of sampling frames in volunteer panel Web surveys. This study attempted to develop alternative statistical estimation methods for volunteer Web surveys and to evaluate their effectiveness in adjusting for biases arising from nonrandomized selection and unequal coverage. Specifically, the proposed adjustment used a two-step approach: first, PSA was used to correct for nonrandomized sample selection; second, calibration adjustment was used to correct for uncertain coverage of the sampling frames. The investigation found that the proposed estimation methods showed potential for reducing selection and coverage bias in estimates from volunteer panel Web surveys. The combined two-step adjustment reduced not only bias but also mean square error, to a greater degree than either adjustment alone. While the findings from this study may shed some light on Web survey data utilization, there are additional areas to be considered and explored. First, the proposed adjustment decreased bias but did not completely remove it. The adjusted estimates also showed larger variability than the unadjusted ones. The adjusted estimator is no longer linear, and an appropriate variance estimator has not yet been developed. Finally, naively applying the variance estimator for linear statistics greatly overestimated the variance, understating the efficiency of the survey estimates.
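    A minimal sketch of the two-step idea the abstract outlines, under assumed inputs: propensity score adjustment against a reference probability sample, followed by calibration (here, simple raking) to known population margins. The function name two_step_weights, the data frames web and ref, and the margins structure are illustrative assumptions, not the dissertation's estimators.

```python
# Illustrative sketch of PSA followed by raking; inputs and names are assumed,
# not the dissertation's data or estimators.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def two_step_weights(web, ref, covars, margins):
    """web, ref: DataFrames of web-panel and reference-sample respondents.
    covars: numeric columns used in the propensity model.
    margins: dict {column: {category: population total}} for raking."""
    # Step 1: propensity score adjustment for nonrandomized selection.
    combined = pd.concat([web[covars], ref[covars]], ignore_index=True)
    in_web = np.r_[np.ones(len(web)), np.zeros(len(ref))]
    prop = LogisticRegression(max_iter=1000).fit(combined, in_web)
    p = prop.predict_proba(web[covars])[:, 1]
    w = (1 - p) / p                      # inverse-odds pseudo-weights
    # Step 2: calibration via iterative proportional fitting (raking)
    # to known population margins, addressing uncertain frame coverage.
    for _ in range(50):
        for col, totals in margins.items():
            for cat, total in totals.items():
                mask = web[col].values == cat
                s = w[mask].sum()
                if s > 0:
                    w[mask] *= total / s
    return w
```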