Beyond Response Rates: The Effect of Prepaid Incentives on Measurement Error

dc.contributor.advisor: Tourangeau, Roger
dc.contributor.author: Medway, Rebecca
dc.contributor.department: Survey Methodology
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2013-02-07T07:15:53Z
dc.date.available: 2013-02-07T07:15:53Z
dc.date.issued: 2012
dc.description.abstract: As response rates continue to decline, survey researchers increasingly offer incentives as a way to motivate sample members to take part in their surveys. Extensive prior research demonstrates that prepaid incentives are an effective tool for doing so. If prepaid incentives influence behavior at the stage of deciding whether or not to participate, they may also alter the way that respondents behave while completing surveys. Nevertheless, most research has focused narrowly on the effect that incentives have on response rates. Survey researchers should have a better empirical basis for assessing the potential tradeoffs associated with the higher response rates yielded by prepaid incentives. This dissertation describes the results of three studies aimed at expanding our understanding of the impact of prepaid incentives on measurement error. The first study explored the effect that a $5 prepaid cash incentive had on twelve indicators of respondent effort in a national telephone survey. The incentive led to significant reductions in item nonresponse and interview length. However, it had little effect on the other indicators, such as response order effects and responses to open-ended items. The second study evaluated the effect that a $5 prepaid cash incentive had on responses to sensitive questions in a mail survey of registered voters. The incentive resulted in a significant increase in the proportion of highly undesirable attitudes and behaviors to which respondents admitted, and it had no effect on responses to less sensitive items. While the incentive led to a general pattern of reduced nonresponse bias and increased measurement bias for the three voting items for which administrative data were available for the full sample, these effects generally were not significant. The third study tested for measurement invariance in incentive and control group responses to four multi-item scales from three recent surveys that included prepaid incentive experiments. There was no evidence of differential item functioning; however, full metric invariance could not be established for one of the scales. Generally, these results suggest that prepaid incentives had minimal impact on measurement error. Thus, these findings should be reassuring for survey researchers considering the use of prepaid incentives to increase response rates.
dc.identifier.uri: http://hdl.handle.net/1903/13646
dc.subject.pqcontrolled: Psychology
dc.subject.pquncontrolled: incentives
dc.subject.pquncontrolled: measurement error
dc.subject.pquncontrolled: measurement invariance
dc.subject.pquncontrolled: satisficing
dc.subject.pquncontrolled: sensitive questions
dc.subject.pquncontrolled: survey
dc.title: Beyond Response Rates: The Effect of Prepaid Incentives on Measurement Error
dc.type: Dissertation

Files

Original bundle
Name: Medway_umd_0117E_13833.pdf
Size: 1.77 MB
Format: Adobe Portable Document Format