Joint Program in Survey Methodology
Permanent URI for this community: http://hdl.handle.net/1903/2251
33 results
Search Results
Item: The Use of Email in Establishment Surveys (2019). Langeland, Joshua Lee; Abraham, Katharine; Wagner, James; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation evaluates the effectiveness of using Email for survey solicitation, nonresponse follow-up, and notifications of upcoming scheduled interviews in an establishment survey setting. Reasons for interest in the use of Email include the possibility that it could reduce printing and postage expenses, speed responses, and encourage online reporting. To date, however, there has been limited research on the extent to which these benefits can in fact be realized in an establishment survey context. To send an Email for survey purposes, those administering a survey must have Email addresses for the units in the sample. One method for collecting Email addresses is to send a prenotification letter to sampled businesses prior to the initial survey invitation, informing respondents about the upcoming survey and requesting that they provide contact information for someone within the organization who will have knowledge of the survey topic. Relatively little is known, however, about what makes a prenotification letter more or less effective. The first experiment reported in this dissertation varies the content of prenotification letters sent to establishments selected for participation in a business survey in order to identify how different features affect the probability of obtaining a respondent's Email address. In this experiment, neither survey sponsorship, appeal type, nor a message about saving taxpayer dollars had a significant impact on response. The second experiment is a pilot study designed to compare the results of sending an initial Email invitation to participate in an establishment survey with the results of sending a standard postal invitation.
Sampled businesses that provided an Email address were randomized into two groups. Half of the units in the experiment received the initial survey invitation by Email and the other half received the standard survey materials through postal mail; all units received the same nonresponse follow-up treatments. The analysis of this experiment focuses on response rates, timeliness of response, mode of response and cost per response. In this production environment, Email invitations achieved an equivalent response rate at reduced cost per response. Units receiving the Email invitation were more likely to report online, but it took them longer on average to respond. The third experiment built on the second and was an investigation into nonresponse follow-up procedures. In the second experiment, at the point when the cohort that received the initial survey invitation by Email received their first nonresponse follow-up, there was a large increase in response. The third experiment tests whether this large increase in response can be achieved by sending a follow-up Email instead of a postal reminder. Sampled units that provided an Email address were randomized into three groups. All units received the initial survey invitation by Email and all units also received nonresponse follow-ups by Email. The treatments varied in the point in the nonresponse follow-up period at which the Emails were augmented with a postal mailing. The analysis focuses on how this timing affects response rates and mode of response. The sequence that introduced postal mail early in nonresponse follow-up achieved the highest final response rate. All mode sequences were successful in encouraging online data reporting. The fourth and final experiment studies the use of Email in a monthly business panel survey conducted through Computer Assisted Telephone Interviewing (CATI). 
After the first month in which an interviewer in this survey collects data from a business, she schedules a date to call and collect data the following month. The current procedure is to send a postcard to the business a few days before the scheduled appointment as a reminder of the upcoming interview. The fourth experiment investigates the effects of replacing this reminder postcard with an Email. The sample included both businesses for which the survey organization had an Email address and businesses for which none was available; these were randomized into three groups. The first group acted as the control and received the standard postcard; the second group was designated to receive an Email reminder instead of the postcard, provided an Email address was available; and the third group received an Email reminder with an iCalendar attachment instead of the postcard, again provided an Email address was available. Results focus on response rates, call length, percent of units reporting on time, and number of calls to respondents. The experiment found that using Email as a reminder for a scheduled interview significantly increased response rates and decreased the effort required to collect data.

Item: A Unifying Parametric Framework for Estimating Finite Population Totals from Complex Samples (2019). Flores Cervantes, Ismael; Brick, J. Michael; Kreuter, Frauke; Survey Methodology.
We propose a unifying framework for improving the efficiency of design-based estimators of finite population characteristics in the presence of full response. We call the framework a Parametric (PA) approach. The PA framework, an extension of model-assisted theory, uses an algorithmic approach driven by the observed data.
The algorithm identifies the relevant subset of auxiliary variables related to the outcome, and the known population totals of these variables are used to compute the PA estimator. We apply the PA framework to three important estimation problems: identifying the functional form of a design-based estimator based on the observed data; identifying the working or assisting model; and developing methodology for creating new design-based estimators. The PA estimators are theoretically justified and evaluated by simulation. This dissertation is limited to single-stage sample designs with full response, but the framework can be extended to other sample designs and to estimation with nonresponse.

Item: Selection Bias in Nonprobability Surveys: A Causal Inference Approach (2018). Mercer, Andrew William; Kreuter, Frauke; Survey Methodology.
Many in the survey research community have expressed concern at the growing popularity of nonprobability surveys. The absence of random selection prompts justified concerns about self-selection producing biased results and means that traditional, design-based estimation is inappropriate. The Total Survey Error (TSE) paradigm's designations of selection bias as attributable to undercoverage or nonresponse are not especially helpful for nonprobability surveys, as they rest on an implicit assumption that selection and inference rely on randomization. This dissertation proposes an alternative classification of the sources of selection bias in nonprobability surveys, based on principles borrowed from the field of causal inference. The proposed typology describes selection bias in terms of the three conditions required for a statistical model to correct or explain systematic differences between a realized sample and the target population: exchangeability, positivity, and composition.
We describe the parallels between causal and survey inference and explain how these three sources of bias operate in both probability and nonprobability survey samples. We then provide a critical review of current practices in nonprobability data collection and estimation, viewed through the lens of the causal bias framework. Next, we show how net selection bias can be decomposed into separate additive components associated with exchangeability, positivity, and composition. Using 10 parallel nonprobability surveys from different sources, we estimate these components for six measures of civic engagement, using the 2013 Current Population Survey Civic Engagement Supplement as a reference sample. We find that a large majority of the bias can be attributed to a lack of exchangeability. Finally, using the same six measures of civic engagement, we compare the performance of four approaches to nonprobability estimation based on Bayesian additive regression trees: propensity weighting (PW), outcome regression (OR), and two types of doubly robust estimators, outcome regression with a residual bias correction (OR-RBC) and outcome regression with a propensity score covariate (OR-PSC). We find that OR-RBC tends to have the lowest bias, variance, and RMSE, with PW only slightly worse on all three measures.

Item: Model-Assisted Estimators for Time-to-Event Data (2017). Reist, Benjamin Martin; Valliant, Richard; Survey Methodology.
In this dissertation, I develop model-assisted estimators for estimating the proportion of a population that experienced some event by time t. I provide theoretical justification for the new estimators using time-to-event models as the underlying framework.
Using simulation, I compared these estimators to traditional methods, and then applied the estimators to a study of nurses' health, estimating the proportion of the population that had died after a certain period of time. The new estimators performed as well as, if not better than, existing methods. Finally, as this work assumes that all units are censored at the same point in time, I propose an extension that allows units' censoring times to vary.

Item: INVESTIGATION OF ALTERNATIVE CALIBRATION ESTIMATORS IN THE PRESENCE OF NONRESPONSE (2017). Han, Daifeng; Valliant, Richard; Survey Methodology.
Calibration weighting is widely used to decrease variance, reduce nonresponse bias, and improve the face validity of survey estimates. In the purely sampling context, Deville and Särndal (1992) demonstrate that many alternative forms of calibration weighting are asymptotically equivalent, so for variance estimation purposes the generalized regression (GREG) estimator can be used to approximate some general calibration estimators with no closed-form solution, such as raking. It is unclear whether this conclusion holds when nonresponse exists and single-step calibration weighting is used to reduce nonresponse bias (i.e., calibration is applied directly to the basic sampling weights without a separate nonresponse adjustment step). In this dissertation, we first examine whether alternative calibration estimators may perform differently in the presence of nonresponse. More specifically, we evaluate the properties of three widely used calibration estimators: the GREG with only main-effect covariates (GREG_Main), poststratification, and raking. In practice, the choice between poststratification and raking is often based on sample sizes and the availability of external data. Also, the raking variance is often approximated by a linear substitute containing residuals from a GREG_Main model.
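Raking, mentioned above as a calibration estimator with no closed-form solution, is computed by iterative proportional fitting. The following is a minimal sketch of that iteration, using entirely hypothetical margins and data rather than anything from the dissertation:

```python
import numpy as np

def rake(weights, groups_a, groups_b, totals_a, totals_b, iters=50, tol=1e-8):
    """Iteratively scale weights so the weighted counts match two sets of
    known marginal totals (iterative proportional fitting)."""
    w = weights.astype(float)
    for _ in range(iters):
        # Pass over margin A: scale each category's weights to its control total.
        for g, total in totals_a.items():
            mask = groups_a == g
            w[mask] *= total / w[mask].sum()
        # Pass over margin B.
        for g, total in totals_b.items():
            mask = groups_b == g
            w[mask] *= total / w[mask].sum()
        # Margin B holds exactly after its pass; stop once margin A also holds.
        if all(abs(w[groups_a == g].sum() - t) < tol for g, t in totals_a.items()):
            break
    return w

# Hypothetical respondents with base weights of 1 and two raking dimensions.
sex = np.array(["m", "m", "f", "f", "f", "m"])
age = np.array(["young", "old", "young", "old", "young", "young"])
w = rake(np.ones(6), sex, age, {"m": 30, "f": 70}, {"young": 60, "old": 40})
```

After convergence the weighted counts reproduce both sets of control totals simultaneously, which is the property the abstract's single-step calibration relies on.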
Our theoretical development and simulation work demonstrate that with nonresponse, poststratification, GREG_Main, and raking may perform differently, and survey practitioners should examine both the outcome model and the response pattern when choosing among these estimators. We then propose a distance measure that can be estimated for raking or GREG_Main from a given sample. Our analytical work shows that the distance measure follows a chi-square distribution when raking or GREG_Main is unbiased. A large distance measure is a warning sign of potential bias and poor confidence interval coverage for some variables in a survey due to omitting a significant interaction term in the calibration process. Finally, we examine several alternative variance estimators for raking with nonresponse. Our simulation results show that when raking is model-biased, none of the linearization variance estimators under evaluation is unbiased. In contrast, the jackknife replication method performs well for variance estimation, although the confidence interval may still be centered in the wrong place if the point estimate is inaccurate.

Item: Enhancing the Understanding of the Relationship between Social Integration and Nonresponse in Household Surveys (2015). Amaya, Ashley Elaine; Presser, Stanley; Survey Methodology.
Nonresponse and nonresponse bias remain fundamental concerns for survey researchers, as understanding them is critical to producing accurate statistics. This dissertation tests the relationship between social integration, nonresponse, and nonresponse bias. Using the rich frame information available in the American Time Use Survey (ATUS) and the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave II, structural equation models were employed to create latent indicators of social integration.
The resulting variables were used to predict nonresponse and its components (e.g., noncontact). In both surveys, social integration was significantly predictive of nonresponse (regardless of the type of nonresponse), with integrated individuals more likely to respond. However, the relationship was driven by different components of integration across the two surveys. Full-sample estimates were compared to respondent estimates on a series of 40 dichotomous and categorical variables to test the hypothesis that variables measuring social activities and roles would suffer from nonresponse bias. The impact of nonresponse on multivariate models predicting social outcomes was also evaluated. Nearly all of the 40 assessed variables suffered from significant nonresponse bias, resulting in the overestimation of social activity and role participation. In general, civic and political variables suffered from higher levels of bias, but the differences were not significant. Multivariate models were not exempt: beta coefficients were frequently biased, although the direction was inconsistent and the bias often small. Finally, an indicator of social integration was added to the weighting methodology with the goal of eliminating the observed nonresponse bias. While the addition significantly reduced the bias in most instances compared to both the base-weighted and traditionally weighted estimates, the improvements were small and did little to eliminate the bias.

Item: Rapport and Its Impact on the Disclosure of Sensitive Information in Standardized Interviews (2014). Sun, Hanyu; Conrad, Frederick G.; Kreuter, Frauke; Survey Methodology.
Although there is no universally accepted way to define and operationalize rapport, the general consensus is that it can have an impact on survey responses, potentially affecting their quality.
Moderately sensitive information is often collected in interviewer-administered modes of data collection. Although rapport-related verbal behaviors have been found to increase the disclosure of moderately sensitive information in face-to-face interactions, it is unknown whether rapport can be established to the same extent in video-mediated interviews, leading to similar levels of disclosure. Highly sensitive information is usually collected via self-administered modes of data collection. For some time, audio computer-assisted self-interviewing (ACASI) has been seen as one of the best methods for collecting sensitive information. Typically, the respondent first answers questions about nonsensitive topics in computer-assisted personal interviewing (CAPI) and is then switched to ACASI for sensitive questions. None of the existing research has investigated the possibility that the interviewer-respondent interaction prior to the ACASI questions may affect disclosures in ACASI. This dissertation used a laboratory experiment made up of two related studies aimed at answering these questions. The first study compares video-mediated interviews with CAPI to investigate whether rapport can be similarly established in video-mediated interviews, leading to similar levels of disclosure. There was no significant difference in rapport ratings between video-mediated and CAPI interviews, suggesting no evidence that rapport is any better established in CAPI than in video-mediated interviews. Compared with CAPI, higher disclosure of moderately sensitive information was found in video-mediated interviews, though the effects were only marginally significant. The second study examines whether the interviewer-respondent interaction prior to the ACASI questions may affect disclosure in ACASI. There was no significant difference in disclosure between the same-voice and different-voice conditions.
However, there were marginally significant carryover effects of rapport in the preceding module on disclosure in the subsequent ACASI module: respondents who experienced high rapport in the preceding module disclosed more in the subsequent ACASI module. Furthermore, compared with ACASI, the percentage of reported sensitive behaviors was higher in video-mediated interviews for some of the highly sensitive questions.

Item: Testing for Phase Capacity in Surveys with Multiple Waves of Nonrespondent Follow-Up (2014). Lewis, Taylor Hudson; Lahiri, Partha; Kreuter, Frauke; Survey Methodology.
To mitigate the potentially harmful effects of nonresponse, many surveys repeatedly follow up with nonrespondents, often targeting a particular response rate or a predetermined number of completes. Each additional recruitment attempt generally brings in a new wave of data, but returns gradually diminish over the course of a fixed data collection protocol: each subsequent wave tends to contain fewer and fewer new responses, resulting in smaller and smaller changes in (nonresponse-adjusted) point estimates. Consequently, these estimates begin to stabilize. This is the notion of phase capacity, which suggests some form of design change is in order, such as switching modes, increasing the incentive, or, as is considered exclusively in this research, discontinuing the nonrespondent follow-up campaign altogether. This dissertation consists of three methodological studies proposing and assessing techniques survey practitioners can use to formally test for phase capacity. One of the earliest known phase capacity testing methods proposed in the literature calls for multiply imputing nonrespondents' missing data to assess, retrospectively, whether the most recent wave of data significantly altered a key estimate.
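The retrospective idea can be illustrated schematically. The sketch below is a deliberately simplified, unweighted stand-in for the imputation-based test described above, using hypothetical data: compare a key estimate computed without and with the most recent wave and judge the change against a rough standard error.

```python
import math

def wave_change(prior_responses, new_wave):
    """Return the change in an estimated mean after adding the latest wave,
    together with a rough standard error of the combined-sample mean."""
    before = sum(prior_responses) / len(prior_responses)
    combined = prior_responses + new_wave
    after = sum(combined) / len(combined)
    n = len(combined)
    var = sum((y - after) ** 2 for y in combined) / (n - 1)
    se = math.sqrt(var / n)
    return after - before, se

# Hypothetical responses: a change that is small relative to the SE
# is the kind of stability that signals phase capacity.
change, se = wave_change([5.1, 4.8, 5.3, 5.0, 4.9], [5.0, 5.2])
```

A formal test, as in the dissertation, would account for the survey design, nonresponse adjustments, and imputation variability rather than this naive comparison.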
The first study introduces an adaptation of this test amenable to surveys that instead reweight the observed data to compensate for nonresponse. A general limitation of the methods discussed in the first study is that they apply to a single point estimate. The second study evaluates two extensions, each aiming to produce a universal, yes-or-no phase capacity determination for a battery of point estimates. The third study builds on ideas from a prospective phase capacity test recently proposed in the literature, attempting to address whether an imminent wave of data will significantly alter a key estimate. All three studies include a simulation study and an application using data from the 2011 Federal Employee Viewpoint Survey.

Item: A COMPARISON OF EX-ANTE, LABORATORY, AND FIELD METHODS FOR EVALUATING SURVEY QUESTIONS (2014). Maitland, Aaron; Presser, Stanley; Survey Methodology.
A diverse range of evaluation methods is available for detecting measurement error in survey questions. Ex-ante question evaluation methods are relatively inexpensive because they do not require data collection from survey respondents. Other methods require data collection from respondents either in the laboratory or in the field. Research has explored how effective some of these methods are at identifying problems relative to one another. However, a weakness of most of these studies is that they do not compare the full range of question evaluation methods currently available to researchers. The purpose of this dissertation is to understand how the methods researchers use to evaluate survey questions influence the conclusions they draw about the questions. In addition, the dissertation seeks to identify more effective ways to use the methods together. It consists of three studies.
The first study examines the extent of agreement between ex-ante and laboratory methods in identifying problems and compares how well the methods predict differences between questions whose validity has been estimated in record-check studies. The second study evaluates the extent to which ex-ante and laboratory methods predict the performance of questions in the field, as measured by indirect assessments of data quality such as behavior coding, response latency, and item nonresponse. The third study evaluates the extent to which ex-ante, laboratory, and field methods predict the reliability of answers to survey questions as measured by stability over time. The findings suggest (1) that a multiple-method approach to question evaluation is the best strategy, given differences between the methods in their ability to detect different types of problems, and (2) how to combine methods more effectively in the future.

Item: TOPICS IN MODEL-ASSISTED POINT AND VARIANCE ESTIMATION IN CLUSTERED SAMPLES (2013). Kennel, Timothy; Valliant, Richard; Survey Methodology.
This dissertation comprises three distinct research papers. Although each research topic is different and little binds the chapters together, all three deal with innovations to model-assisted estimators, and all three explore different aspects of estimating totals, means, and rates from clustered samples. New estimators are presented, their theoretical properties are explored, and simulations are used to study their design-based properties in realistic situations. After an introductory chapter, we show how leverage adjustments can be made to sandwich variance estimators to improve variance estimates of generalized regression estimators in two-stage samples. In the third chapter, we explore multinomial logistic-assisted estimators of finite population totals in clustered samples.
In the final chapter, we use generalized linear models to assist in estimating finite population totals in clustered samples.
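Several of the abstracts above (Kennel, Reist, Flores Cervantes, Han) build on the model-assisted GREG estimator. As a rough illustration of the general idea, and not code from any of these dissertations, a GREG-style estimate of a total fits a working linear model on the sample, predicts at the population level using known auxiliary totals, and adds a design-weighted residual correction:

```python
import numpy as np

def greg_total(y_s, x_s, weights, x_pop_total, n_pop):
    """GREG-style estimate of a population total with one auxiliary variable:
    weighted least squares fit on the sample, population-level prediction
    from known totals, plus a design-weighted sum of sample residuals."""
    X = np.column_stack([np.ones_like(x_s), x_s])       # intercept + covariate
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_s)  # weighted least squares
    pop_pred = np.array([n_pop, x_pop_total]) @ beta    # model predictions summed over population
    residual_corr = weights @ (y_s - X @ beta)          # design-weighted residual sum
    return pop_pred + residual_corr

# Hypothetical sample of 4 units, each with design weight 25 (population of 100),
# and a known population total of 120 for the auxiliary variable x.
y = np.array([10.0, 12.0, 9.0, 11.0])
x = np.array([1.0, 2.0, 0.5, 1.5])
est = greg_total(y, x, np.full(4, 25.0), x_pop_total=120.0, n_pop=100)
```

When the working model predicts well, the residual correction is small and the estimator gains efficiency over the pure design-based total; when it predicts poorly, the correction preserves approximate design unbiasedness.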