Joint Program in Survey Methodology

Permanent URI for this community: http://hdl.handle.net/1903/2251

Search Results

Now showing 1 - 3 of 3
  • Beyond Response Rates: The Effect of Prepaid Incentives on Measurement Error
    (2012) Medway, Rebecca; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    As response rates continue to decline, survey researchers increasingly offer incentives to motivate sample members to take part in their surveys. Extensive prior research demonstrates that prepaid incentives are an effective tool for doing so. If prepaid incentives influence behavior at the stage of deciding whether to participate, they may also alter the way that respondents behave while completing surveys. Nevertheless, most research has focused narrowly on the effect that incentives have on response rates. Survey researchers need a better empirical basis for assessing the potential tradeoffs associated with the higher response rates yielded by prepaid incentives. This dissertation describes the results of three studies aimed at expanding our understanding of the impact of prepaid incentives on measurement error. The first study explored the effect of a $5 prepaid cash incentive on twelve indicators of respondent effort in a national telephone survey. The incentive led to significant reductions in item nonresponse and interview length; however, it had little effect on the other indicators, such as response order effects and responses to open-ended items. The second study evaluated the effect of a $5 prepaid cash incentive on responses to sensitive questions in a mail survey of registered voters. The incentive resulted in a significant increase in the proportion of highly undesirable attitudes and behaviors to which respondents admitted and had no effect on responses to less sensitive items. While the incentive led to a general pattern of reduced nonresponse bias and increased measurement bias for the three voting items for which administrative data were available for the full sample, these effects generally were not significant. The third study tested for measurement invariance between incentive and control group responses to four multi-item scales from three recent surveys that included prepaid incentive experiments. There was no evidence of differential item functioning; however, full metric invariance could not be established for one of the scales. Overall, these results suggest that prepaid incentives had minimal impact on measurement error. These findings should be reassuring for survey researchers considering the use of prepaid incentives to increase response rates.
  • Neighborhood Characteristics and Participation in Household Surveys
    (2010) Casas-Cordero Valencia, Carolina; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Declining response rates in household surveys continue to demand not only a better understanding of the mechanisms underlying nonresponse, but also the identification of auxiliary variables that can help assess, reduce, and hopefully correct for this source of error in survey estimates. Using data from the L.A. Family and Neighborhood Study (L.A. FANS), this dissertation shows that observable characteristics of the sampled neighborhoods have the potential to advance both survey research topics. Paper 1 advances our understanding of the role that local neighborhood processes play in survey participation. Measures of the social and physical environments are shown to be significant predictors of household cooperation in the L.A. FANS, even after controlling for the socio-economic composition of households and neighborhoods. A useful feature of the indicators of the physical environment is that they can be observed without conducting the actual interview; thus, they are available for both respondents and nonrespondents. However, survey interviewers charged with this task may make errors that limit the usability of these observations. Paper 2 uses a multilevel framework to examine 25 neighborhood items rated by survey interviewers. The results show that errors vary by type of item and that interviewer perceptions are driven largely by characteristics of the sampled areas -- not by characteristics of the interviewers themselves. If predictive of survey participation, neighborhood characteristics can inform survey fieldwork decisions aimed at increasing response rates. If neighborhood characteristics are also related to survey outcome variables, they can furthermore be used to inform strategies aimed at reducing nonresponse bias. Paper 3 compares the effectiveness of several different neighborhood characteristics in nonresponse adjustments for the L.A. FANS, and shows that interviewer observations perform similarly to Census variables when used to weight key L.A. FANS estimates. The results of this dissertation are relevant for those who want to increase response rates by tailoring fieldwork efforts according to neighborhood characteristics. The most important contribution of this dissertation, however, lies in rediscovering intersections between survey methodology and urban sociology.
  • Gricean Effects in Self-Administered Surveys
    (2005-10-31) Yan, Ting; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Despite the best efforts of questionnaire designers, survey respondents don't always interpret questions as the question writers intended. Researchers have used Grice's conversational maxims to explain some of these discrepancies. This dissertation extends that work by reviewing studies on survey respondents' use of Grice's maxims and by describing six new experiments that looked for direct evidence that respondents apply them. The strongest evidence for respondents' use of the maxims came from an experiment that varied the numerical labels on a rating scale; the shift in mean responses toward the right side of the rating scale induced by negative numerical labels was robust across items and fonts. Process measures indicated that respondents applied the maxim of relation in interpreting the questions. Other evidence supported use of the maxim of quantity -- as predicted, correlations between two highly similar items were lower when they were asked together, and reversing the wording of one of the items didn't prevent respondents from applying the maxim. Evidence was weaker for the application of Grice's maxim of manner; respondents still seemed to use definitions (as was apparent from the reduced variation in their answers), even though the definitions were designed to be uninformative. The finding that direct questions without filters induced significantly more responses at the upper end of the scale -- presumably because of the presuppositions such questions carry -- supported respondents' application of the maxim of quality. There was little support for respondents' use of the maxim of relation in an experiment on the physical layout of survey questions; the three different layouts didn't influence how respondents perceived the relations among items. These results provided some evidence that both survey "satisficers" and survey "optimizers" may draw automatic inferences based on Gricean maxims, but that only "optimizers" will carry out the more controlled processes requiring extra effort. Practical implications include the need for continued attention to secondary features of survey questions, in addition to traditional questionnaire development issues. Additional experiments incorporating other techniques, such as eye tracking or cognitive interviews, may help uncover other subtle mechanisms affecting survey responses.