Clarifying Survey Questions

dc.contributor.advisor: Tourangeau, Roger
dc.contributor.author: Redline, Cleo D.
dc.contributor.department: Survey Methodology
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2011-07-07T05:35:19Z
dc.date.available: 2011-07-07T05:35:19Z
dc.date.issued: 2011
dc.description.abstract: Although comprehension is critical to the survey response process, much about it remains unknown. Research has shown that concepts can be clarified through the use of definitions, instructions, or examples, but respondents do not necessarily attend to these clarifications. This dissertation presents the results of three experiments designed to investigate where and how to present clarifying information most effectively. In the first experiment, eight study questions, modeled after questions in major federal surveys, were administered as part of a Web survey. The results suggest that clarification improves comprehension of the questions. There is some evidence from that initial experiment that respondents anticipate the end of a question and are more likely to ignore clarification that comes after the question than before it. However, there is considerable evidence to suggest that clarifications are most effective when they are incorporated into a series of questions. A second experiment was conducted in both a Web and an Interactive Voice Response (IVR) survey. IVR was chosen because it controlled for the effects of interviewers. The results of this experiment suggest that readers appear no more capable of comprehending complex clarification than listeners. In both channels, instructions were least likely to be followed when they were presented after the question, more likely to be followed when they were placed before the question, and most likely to be followed when they were incorporated into a series of questions. Finally, in a third experiment, five variables were varied to examine the use of examples in survey questions. Broad categories elicited higher reports than narrow categories, and frequently consumed examples elicited higher reports than infrequently consumed examples. The implication of this final study is that the choice of categories and examples requires careful consideration, as this choice will influence respondents' answers; however, where and how a short list of examples is presented does not seem to matter.
dc.identifier.uri: http://hdl.handle.net/1903/11645
dc.subject.pqcontrolled: Cognitive Psychology
dc.subject.pquncontrolled: mixed mode surveys
dc.subject.pquncontrolled: response process
dc.subject.pquncontrolled: survey comprehension
dc.subject.pquncontrolled: survey methodology
dc.title: Clarifying Survey Questions
dc.type: Dissertation

Files

Original bundle (1 file)

Name: Redline_umd_0117E_12022.pdf
Size: 2.59 MB
Format: Adobe Portable Document Format