Joint Program in Survey Methodology
Permanent URI for this community: http://hdl.handle.net/1903/2251
Item: Classifying Mouse Movements and Providing Help in Web Surveys (2013)
Horwitz, Rachel; Conrad, Frederick G.; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Survey administrators go to great lengths to make sure survey questions are easy to understand for a broad range of respondents. Despite these efforts, respondents do not always understand what the questions ask of them. In interviewer-administered surveys, interviewers can pick up on cues suggesting that a respondent does not understand or know how to answer a question and can provide assistance as their training allows. However, because of the high costs of interviewer administration, many surveys are moving toward other survey modes (at least for some respondents) that do not include costly interviewers, and with that a valuable source of clarification is gone. In Web surveys, researchers have experimented with providing real-time assistance to respondents who take a long time to answer a question. Help provided in this fashion has increased accuracy, but some respondents dislike the imposition of unsolicited help. There may be alternative ways to provide help that refine or overcome the limitations of relying on response times. This dissertation is organized into three studies, each using an independently collected data set, that identify a set of indicators survey administrators can use to determine when a respondent is having difficulty answering a question and that propose alternative ways of providing real-time assistance that increase accuracy as well as user satisfaction. The first study identifies nine movements that respondents make with the mouse cursor while answering survey questions and hypothesizes, using exploratory analyses, which movements are related to difficulty. The second study confirms use of these movements and uses hierarchical modeling to identify the four movements that are most predictive (a minimal sketch of such a model appears after this item). The third study tests three different modes of providing unsolicited help to respondents: text box, audio recording, and chat. Accuracy and respondent satisfaction are evaluated for each mode. There were no differences in accuracy across the three modes, but participants reported a preference for receiving help in a standard text box. These findings allow survey designers to identify difficult questions on a larger scale than previously possible and to increase accuracy by providing real-time assistance while maintaining respondent satisfaction.
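The hierarchical modeling described in the second study can be illustrated with a minimal sketch: a mixed-effects logistic regression that relates counts of candidate mouse movements to an indicator of respondent difficulty, with a random intercept for each question. This is an assumed specification for illustration only; the file name, column names (difficulty, hover_count, regressive_count, marker_count, question_id), and the choice of statsmodels' Bayesian mixed GLM are hypothetical, not the dissertation's actual model.

```python
# Hypothetical sketch of a hierarchical (mixed-effects) logistic regression:
# which mouse movements predict difficulty, allowing for question-level variation?
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Assumed layout: one row per respondent-question pair (all names hypothetical).
df = pd.read_csv("mouse_movements.csv")

model = BinomialBayesMixedGLM.from_formula(
    "difficulty ~ hover_count + regressive_count + marker_count",  # fixed effects
    vc_formulas={"question": "0 + C(question_id)"},                # random intercept per question
    data=df,
)
result = model.fit_vb()   # variational Bayes estimation
print(result.summary())   # larger fixed-effect coefficients point to more predictive movements
```

Under this kind of specification, movements whose coefficients remain clearly positive after accounting for question-level variation would be the natural candidates for triggering real-time help.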
Item: Design and Effectiveness of Multimodal Definitions in Online Surveys (2020)
Spiegelman, Maura; Conrad, Frederick G.; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

If survey respondents do not interpret a question as it was intended, they may, in effect, answer the wrong question, increasing the chances of inaccurate data. Researchers can bring respondents' interpretations into alignment with what is intended by defining the terms that respondents might misunderstand. This dissertation explores strategies to increase response alignment with definitions in online surveys. In particular, I compare the impact of unimodal (either spoken or textual) definitions to multimodal (both spoken and textual) definitions on question interpretation and, indirectly, response quality. These definitions can be further categorized as conventional or optimized for the mode in which they are presented (for textual definitions, fewer words than in conventional definitions, with key information made visually salient and easier for respondents to grasp; for spoken definitions, a shorter, more colloquial style of speaking). The effectiveness of conventional and optimized definitions is compared, as is the effectiveness of unimodal and multimodal definitions. Amazon MTurk workers were randomly assigned to one of six definition conditions in a 2x3 design: conventional or optimized definitions, presented in a spoken, textual, or multimodal (both spoken and textual) format. While responses for unimodal optimized and conventional definitions were similar, multimodal definitions, and particularly multimodal optimized definitions, produced responses more closely aligned with the definitions. Although complementary information presented in different modes can increase comprehension and lead to better data quality, redundant or otherwise untailored multimodal information may not have the same positive effects. Even though not all respondents complied with instructions to read and/or listen to the definitions, compliance rates and the effectiveness of multimodal presentation were sufficiently high to show improvements in data quality, and the effectiveness of multimodal definitions increased when only compliant observations were considered. Multimodal communication in a typically visual medium (such as web surveys) may increase the amount of time needed to complete a questionnaire, but respondents did not consider the definitions burdensome or otherwise unsatisfactory. While further techniques could be used to help increase respondent compliance with instructions, this study suggests that multimodal definitions, when thoughtfully designed, can improve data quality without negatively impacting respondents.

Item: Effects of Acoustic Perception of Gender on Nonsampling Errors in Telephone Surveys (2012)
Kenney McCulloch, Susan; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Many telephone surveys require interviewers to observe and record respondents' gender based solely on the respondent's voice. Researchers may rely on these observations to: (1) screen for study eligibility; (2) determine skip patterns; (3) foster interviewer tailoring strategies; (4) contribute to nonresponse assessment and adjustments; (5) inform post-stratification weighting; and (6) design experiments. Gender is also an important covariate for understanding attitudes and behavior in many disciplines. Yet, despite this fundamental role in research, survey documentation suggests there is significant variation in how gender is measured and collected across organizations. Approaches to collecting respondent gender include: (1) asking the respondent; (2) interviewer observation only; (3) a combination of observation aided by asking when needed; or (4) another method. But what is the efficacy of these approaches? Are there predictors of observational errors? What are the consequences of interviewers misclassifying respondent gender for survey outcomes? Measurement error in interviewers' observations of respondent gender has never been examined by survey methodologists. This dissertation explores the accuracy and utility of interviewer judgments specifically with regard to gender observations.
Using recent paradata work and the linguistics literature as a foundation for exploring acoustic gender determination, my dissertation aims to identify the implications for survey research of relying on interviewers' observations collected in a telephone interviewing setting. The dissertation is organized into three journal-style papers. Through a survey of survey organizations, the first paper finds that more than two-thirds of firms collect respondent gender through some form of interviewer observation. Placement of the observation, the rationale for the chosen collection methods, and uses of these paradata are documented. In paper two, using existing recordings of survey interviews, experimental research finds that the accuracy of interviewer observations improves with increased exposure. The noisy environment of a centralized phone room does not appear to threaten the quality of gender observations. Interviewer- and respondent-level covariates of misclassification are also discussed. Analyzing secondary data, the third paper finds that there are some consequences of incorrect interviewer observations of respondents' gender for survey estimates. Findings from this dissertation will contribute to the paradata literature and provide survey practitioners with guidance in the use and collection of interviewer observations, specifically gender, to reduce sources of nonsampling error.

Item: Nonparticipation Issues Related to Passive Data Collection (2024)
Breslin, Alexandra Marie Brown; Presser, Stanley; Antoun, Chris; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

New passive data collection techniques on smartphones allow for the direct observation of a participant's behavior and environment in place of self-reported information. However, such studies do not appeal to all people, especially those with greater security concerns. The current work explores the mechanisms that affect a sample member's decision to participate in a passive data collection, using three different online panels. The first study explores nonparticipation bias in a financial tracking study and finds evidence of bias in the self-reported measures of financial behaviors, and that prior experience with the research organization positively affects a sample member's decision to participate. Studies two and three employ deception designs (i.e., the passive data collections were presented as real rather than hypothetical, but no data were passively collected) in which respondents received experimentally varied invitations to participate in a smartphone-based passive data collection. The second study varies the type of data requested and the study topic to better understand how these study components interact. The findings suggest that the type of data requested affects participation while the study topic does not. The second study also used video messages presented to all sample members who chose not to participate. These videos asked the sample member to reconsider, varying whether or not they reiterated the data and security measures of the study from the initial invitation. The results suggest that offering a follow-up video increased participation. Finally, the third study experimentally varied the level of control the sample member would have over what data are shared with researchers during a passive data collection. The findings suggest that an offer of control may not increase participation in app-based passive data collection.
The three studies suggest that sample members are more likely to participate when they have prior experience with such a request and may be converted to participate with a video message, but that the type of data requested greatly affects the decision to participate. Future work should include replicating these studies with different requested data types and shifting to samples not drawn from online panels.

Item: Rapport and Its Impact on the Disclosure of Sensitive Information in Standardized Interviews (2014)
Sun, Hanyu; Conrad, Frederick G.; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Although there is no universally accepted way to define and operationalize rapport, the general consensus is that it can have an impact on survey responses, potentially affecting their quality. Moderately sensitive information is often collected in interviewer-administered modes of data collection. Although rapport-related verbal behaviors have been found to increase the disclosure of moderately sensitive information in face-to-face interactions, it is unknown whether rapport can be established to the same extent in video-mediated interviews, leading to similar levels of disclosure. Highly sensitive information is usually collected via self-administered modes of data collection. For some time, audio computer-assisted self-interviewing (ACASI) has been seen as one of the best methods for collecting sensitive information. Typically, the respondent first answers questions about nonsensitive topics in computer-assisted personal interviewing (CAPI) and is then switched to ACASI for sensitive questions. None of the existing research has investigated the possibility that the interviewer-respondent interaction prior to the ACASI questions may affect disclosures in ACASI. This dissertation used a laboratory experiment made up of two related studies aimed at answering these questions. The first study compares video-mediated interviews with CAPI to investigate whether rapport can be established to a similar extent in video-mediated interviews, leading to similar levels of disclosure. There was no significant difference in rapport ratings between video-mediated and CAPI interviews, providing no evidence that rapport is any better established in CAPI than in video-mediated interviews. Compared with CAPI, higher disclosure of moderately sensitive information was found in video-mediated interviews, though the effects were only marginally significant. The second study examines whether the interviewer-respondent interaction, prior to the ACASI questions, may affect disclosure in ACASI. There was no significant difference in disclosure between the same-voice and different-voice conditions. However, there were marginally significant carryover effects of rapport in the preceding module on disclosure in the subsequent ACASI module: respondents who experienced high rapport in the preceding module disclosed more in the subsequent ACASI module.
Furthermore, compared with ACASI, the percentage of reported sensitive behaviors was higher in video-mediated interviews for some of the highly sensitive questions.

Item: The Use of Email in Establishment Surveys (2019)
Langeland, Joshua Lee; Abraham, Katharine; Wagner, James; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This dissertation evaluates the effectiveness of using Email for survey solicitation, nonresponse follow-up, and notifications of upcoming scheduled interviews in an establishment survey setting. Reasons for interest in the use of Email include the possibility that it could reduce printing and postage expenses, speed responses, and encourage online reporting. To date, however, there has been limited research on the extent to which these benefits can in fact be realized in an establishment survey context. In order to send an Email for survey purposes, those administering a survey must have Email addresses for the units in the sample. One method for collecting Email addresses is to send a prenotification letter to sampled businesses prior to the initial survey invitation, informing respondents about the upcoming survey and requesting that they provide contact information for someone within the organization who will have knowledge of the survey topic. Relatively little is known, however, about what makes a prenotification letter more or less effective. The first experiment on which this dissertation reports varies the content of prenotification letters sent to establishments selected for participation in a business survey in order to identify how different features affect the probability of obtaining a respondent's Email address. In this experiment, neither survey sponsorship, appeal type, nor a message about saving taxpayer dollars had a significant impact on response. The second experiment is a pilot study designed to compare the results of sending an initial Email invitation to participate in an establishment survey with the results of sending a standard postal invitation. Sampled businesses that provided an Email address were randomized into two groups. Half of the units in the experiment received the initial survey invitation by Email and the other half received the standard survey materials through postal mail; all units received the same nonresponse follow-up treatments. The analysis of this experiment focuses on response rates, timeliness of response, mode of response, and cost per response. In this production environment, Email invitations achieved an equivalent response rate at reduced cost per response (a minimal sketch of this kind of response-rate comparison appears after this item). Units receiving the Email invitation were more likely to report online, but it took them longer on average to respond. The third experiment built on the second and investigated nonresponse follow-up procedures. In the second experiment, there was a large increase in response at the point when the cohort that received the initial survey invitation by Email received its first nonresponse follow-up. The third experiment tests whether this large increase in response can be achieved by sending a follow-up Email instead of a postal reminder. Sampled units that provided an Email address were randomized into three groups. All units received the initial survey invitation by Email, and all units also received nonresponse follow-ups by Email. The treatments varied in the point in the nonresponse follow-up period at which the Emails were augmented with a postal mailing.
The analysis focuses on how this timing affects response rates and mode of response. The sequence that introduced postal mail early in nonresponse follow-up achieved the highest final response rate. All mode sequences were successful in encouraging online data reporting. The fourth and final experiment studies the use of Email in a monthly business panel survey conducted through Computer Assisted Telephone Interviewing (CATI). After the first month in which an interviewer in this survey collects data from a business, she schedules a date to call and collect data the following month. The current procedure is to send a postcard to the business a few days prior to the scheduled appointment to serve as a reminder of the upcoming interview. The fourth experiment investigates the effects of replacing this reminder postcard with an Email. Businesses in a sample that included both businesses for which the survey organization had an Email address and businesses for which no Email address was available were randomized into three groups. The first group acted as the control and received the standard postcard; the second group was designated to receive an Email reminder instead of the postcard, provided an Email address was available; and the third group received an Email reminder with an iCalendar attachment instead of the postcard, again provided an Email address was available. Results focus on response rates, call length, the percentage of units reporting on time, and the number of calls to respondents. The experiment found that using Email as a reminder for a scheduled interview significantly increased response rates and decreased the effort required to collect data.
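The response-rate comparison in the second Email experiment can be illustrated with a minimal sketch: a two-proportion z-test on the share of responding units in the Email-invitation and postal-invitation groups. The counts below are placeholders, not results from the dissertation, and the use of statsmodels' proportions_ztest is simply one reasonable choice for this kind of comparison.

```python
# Hypothetical sketch: testing whether response rates differ between an
# Email-invitation group and a postal-invitation group in a randomized experiment.
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts (not the study's data): responding units and group sizes.
responded = [412, 398]   # Email group, postal group
sampled = [1000, 1000]

stat, pvalue = proportions_ztest(count=responded, nobs=sampled)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
# A nonsignificant difference would be consistent with the finding that Email
# invitations achieve an equivalent response rate at a lower cost per response.
```

Cost per response could then be compared directly by dividing each group's printing, postage, and processing costs by its number of responding units.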