Joint Program in Survey Methodology

Permanent URI for this community: http://hdl.handle.net/1903/2251


Search Results

Now showing 1 - 4 of 4
  • Item
    A Comparison of Ex-Ante, Laboratory, and Field Methods for Evaluating Survey Questions
    (2014) Maitland, Aaron; Presser, Stanley; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    A diverse range of evaluation methods is available for detecting measurement error in survey questions. Ex-ante question evaluation methods are relatively inexpensive, because they do not require data collection from survey respondents. Other methods require data collection from respondents, either in the laboratory or in the field. Research has explored how effective some of these methods are, relative to one another, at identifying problems. However, a weakness of most of these studies is that they do not compare the full range of question evaluation methods currently available to researchers. The purpose of this dissertation is to understand how the methods researchers use to evaluate survey questions influence the conclusions they draw about the questions. In addition, the dissertation seeks to identify more effective ways to use the methods together. It consists of three studies. The first study examines the extent of agreement between ex-ante and laboratory methods in identifying problems and compares how well the methods predict differences between questions whose validity has been estimated in record-check studies. The second study evaluates the extent to which ex-ante and laboratory methods predict the performance of questions in the field, as measured by indirect assessments of data quality such as behavior coding, response latency, and item nonresponse. The third study evaluates the extent to which ex-ante, laboratory, and field methods predict the reliability of answers to survey questions, as measured by stability over time. The findings suggest (1) that a multiple-method approach to question evaluation is the best strategy, given differences between the methods in their ability to detect different types of problems, and (2) how to combine methods more effectively in the future.
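    The first study's comparisons hinge on agreement between the problem flags that different evaluation methods assign to the same questions. As a minimal sketch of such an agreement analysis (the 0/1 flags and method labels below are invented for illustration, not taken from the dissertation), percent agreement and a chance-corrected kappa could be computed as follows:
    ```python
    # Hypothetical sketch: agreement between two question-evaluation methods.
    # Each 0/1 flag marks whether a method identified a problem with a question.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    ex_ante = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # e.g., expert-review flags
    lab     = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])  # e.g., cognitive-interview flags

    agreement = (ex_ante == lab).mean()      # raw percent agreement
    kappa = cohen_kappa_score(ex_ante, lab)  # agreement corrected for chance

    print(f"percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
    ```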
  • Item
    Classifying Mouse Movements and Providing Help in Web Surveys
    (2013) Horwitz, Rachel; Conrad, Frederick G; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Survey administrators go to great lengths to make sure survey questions are easy to understand for a broad range of respondents. Despite these efforts, respondents do not always understand what the questions ask of them. In interviewer-administered surveys, interviewers can pick up on cues suggesting that a respondent does not understand or know how to answer a question and can provide assistance as their training allows. However, due to the high costs of interviewer administration, many surveys are moving towards other survey modes (at least for some respondents) that do not include costly interviewers, and with that, a valuable source of clarification is lost. In Web surveys, researchers have experimented with providing real-time assistance to respondents who take a long time to answer a question. Help provided in this fashion has resulted in increased accuracy, but some respondents do not like the imposition of unsolicited help. There may be alternative ways to provide help that refine or overcome the limitations of using response times. This dissertation is organized into three separate studies, each using independently collected data, that identify a set of indicators survey administrators can use to determine when a respondent is having difficulty answering a question, and it proposes alternative ways of providing real-time assistance that increase accuracy as well as user satisfaction. The first study identifies nine movements that respondents make with the mouse cursor while answering survey questions and hypothesizes, using exploratory analyses, which movements are related to difficulty. The second study confirms the use of these movements and uses hierarchical modeling to identify the four movements that are most predictive. The third study tests three different modes of providing unsolicited help to respondents: text box, audio recording, and chat. Accuracy and respondent satisfaction are evaluated for each mode. There were no differences in accuracy across the three modes, but participants reported a preference for receiving help in a standard text box. These findings allow survey designers to identify difficult questions on a larger scale than previously possible and to increase accuracy by providing real-time assistance while maintaining respondent satisfaction.
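    The classification step can be pictured as extracting per-question features from the cursor trace and fitting a model that predicts difficulty. The sketch below is a simplified illustration of that idea, with fabricated traces, invented features, and a flat logistic regression in place of the dissertation's nine movement types and hierarchical models:
    ```python
    # Illustrative sketch: predicting question difficulty from mouse-cursor traces.
    # Features and simulated data are hypothetical, not the dissertation's coding scheme.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def trace_features(xs, ys, ts):
        """Crude per-question features from cursor coordinates and timestamps."""
        dx = np.diff(xs)
        reversals = np.sum(np.diff(np.sign(dx[dx != 0])) != 0)  # horizontal direction changes
        dist = np.hypot(np.diff(xs), np.diff(ys))
        hovers = np.sum((dist < 2.0) & (np.diff(ts) > 0.5))     # near-stationary pauses
        return [reversals, hovers, ts[-1] - ts[0]]              # plus total answer time

    rng = np.random.default_rng(0)
    X, y = [], []
    for label in (0, 1):            # 0 = easy question, 1 = difficult question
        for _ in range(40):
            n = 50
            ts = np.cumsum(rng.uniform(0.05, 0.3 + 0.4 * label, n))  # slower when difficult
            xs = np.cumsum(rng.normal(3 * (1 - label), 5, n))        # less purposeful when difficult
            ys = np.cumsum(rng.normal(0, 5, n))
            X.append(trace_features(xs, ys, ts))
            y.append(label)

    model = LogisticRegression().fit(np.array(X), np.array(y))
    print("difficulty odds ratios per feature:", np.exp(model.coef_).round(2))
    ```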
  • Item
    Adjustments for Nonresponse, Sample Quality Indicators, and Nonresponse Error in a Total Survey Error Context
    (2012) Ye, Cong; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The decline in response rates in surveys of the general population is regarded by many researchers as one of the greatest threats to contemporary surveys. Much research has focused on the consequences of nonresponse. However, because the true values for the non-respondents are rarely known, it is difficult to estimate the magnitude of nonresponse bias or to develop effective methods for predicting and adjusting for it. This research uses two datasets that have records on each person in the frame to evaluate the effectiveness of adjustment methods aimed at correcting nonresponse bias, to study indicators of sample quality, and to examine the relative magnitude of nonresponse bias under different modes. The results suggest that neither response propensity weighting nor GREG weighting is effective in reducing the nonresponse bias present in the study data. There are some reductions in error, but they are limited. The comparison between response propensity weighting and GREG weighting shows that, with the same set of auxiliary variables, the choice between the two makes little difference. The evaluation of the R-indicators and the penalized R-indicators, using the study datasets and a simulation study, suggests that the penalized R-indicators perform better than the R-indicators in assessing sample quality: the penalized R-indicator tracks the pattern of the estimated biases more closely than the R-indicator does. Finally, the comparison of nonresponse bias to other types of error finds that nonresponse bias in these two datasets may be larger than sampling error and coverage bias, but measurement bias can, in turn, be larger than nonresponse bias, at least for sensitive questions. Moreover, post-survey adjustments do not substantially reduce total survey error. We conclude that (1) efforts put into dealing with nonresponse bias are warranted; (2) the effectiveness of weighting adjustments for nonresponse depends on the availability and quality of the auxiliary variables; and (3) the penalized R-indicator may be more helpful than the R-indicator in monitoring survey quality.
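    To make the two ideas concrete: response propensities can be estimated from frame variables with a logistic regression, respondents weighted by the inverse of their estimated propensities, and the (unpenalized) R-indicator computed as R = 1 - 2*S(rho), where S is the standard deviation of the estimated propensities. The sketch below uses fabricated frame data; the variables and model specification are illustrative assumptions, not the study's:
    ```python
    # Sketch of response-propensity weighting and the R-indicator on fabricated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5_000
    age = rng.uniform(18, 90, n)      # auxiliary variables known for the full frame
    urban = rng.integers(0, 2, n)
    # Response depends on the auxiliaries, so weighting has something to correct.
    true_p = 1 / (1 + np.exp(-(-1.0 + 0.02 * age - 0.5 * urban)))
    responded = rng.random(n) < true_p

    X = np.column_stack([age, urban])
    rho = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
    weights = 1.0 / rho[responded]    # inverse-propensity weights for respondents

    # R-indicator: 1 = perfectly representative response; values near 0 = very selective.
    R = 1 - 2 * rho.std(ddof=1)

    y_var = 2.0 + 0.05 * age + rng.normal(0, 1, n)   # a survey variable tied to age
    print(f"R-indicator: {R:.3f}")
    print(f"frame mean {y_var.mean():.3f}, respondent mean {y_var[responded].mean():.3f}, "
          f"weighted {np.average(y_var[responded], weights=weights):.3f}")
    ```
    If the weighting works, the weighted respondent mean should move back toward the full-frame mean; in the study data, as the abstract notes, such reductions were limited.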
  • Item
    Neighborhood Characteristics and Participation in Household Surveys
    (2010) Casas-Cordero Valencia, Carolina; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Declining response rates in household surveys continue to demand not only a better understanding of the mechanisms underlying nonresponse, but also the identification of auxiliary variables that can help assess, reduce, and hopefully correct for this source of error in survey estimates. Using data from the L.A. Family and Neighborhood Study (L.A.FANS), this dissertation shows that observable characteristics of the sampled neighborhoods have the potential to advance both of these survey research topics. Paper 1 advances our understanding of the role that local neighborhood processes play in survey participation. Measures of the social and physical environments are shown to be significant predictors of household cooperation in the L.A.FANS, even after controlling for the socio-economic composition of households and neighborhoods. A useful feature of the physical-environment indicators is that they can be observed without conducting an interview; thus they are available for both respondents and nonrespondents. However, survey interviewers charged with this task might make errors that limit the usability of these observations. Paper 2 uses a multilevel framework to examine 25 neighborhood items rated by survey interviewers. The results show that errors vary by type of item and that interviewer perceptions are largely driven by characteristics of the sampled areas, not by characteristics of the interviewers themselves. If predictive of survey participation, neighborhood characteristics can be useful for fieldwork decisions aimed at increasing response rates. If neighborhood characteristics are also related to survey outcome variables, they can furthermore inform strategies aimed at reducing nonresponse bias. Paper 3 compares the effectiveness of several different neighborhood characteristics in nonresponse adjustments for the L.A.FANS and shows that interviewer observations perform similarly to Census variables when used to weight key L.A.FANS estimates. The results of this dissertation are relevant for those who want to increase response rates by tailoring efforts to neighborhood characteristics. The most important contribution of this dissertation, however, lies in re-discovering intersections between survey methodology and urban sociology.
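    The multilevel analysis in Paper 2 amounts to partitioning the variance in interviewer ratings between sampled areas and interviewers. A minimal sketch of that decomposition, with crossed random effects on fabricated data rather than the 25 L.A.FANS items, might look like this:
    ```python
    # Sketch: variance decomposition of interviewer-rated neighborhood items.
    # Data, effect sizes, and variable names are fabricated for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_areas, n_iwers = 60, 15
    area_effect = rng.normal(0, 1.0, n_areas)   # real between-area differences
    iwer_effect = rng.normal(0, 0.3, n_iwers)   # interviewer "style" (smaller)

    rows = []
    for a in range(n_areas):
        for _ in range(4):                      # four ratings per area
            i = rng.integers(n_iwers)
            rows.append({"area": a, "iwer": i,
                         "rating": 3 + area_effect[a] + iwer_effect[i] + rng.normal(0, 0.8)})
    df = pd.DataFrame(rows)
    df["one"] = 1                               # single group, for crossed random effects

    model = smf.mixedlm("rating ~ 1", df, groups="one", re_formula="0",
                        vc_formula={"area": "0 + C(area)", "iwer": "0 + C(iwer)"})
    fit = model.fit()
    print(fit.summary())                        # compare the area and iwer variance components
    ```
    A much larger variance component for areas than for interviewers would mirror the paper's finding that ratings reflect the sampled areas rather than the interviewers themselves.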