Joint Program in Survey Methodology
Permanent URI for this community: http://hdl.handle.net/1903/2251
Item: Optimizing stratified sampling allocations to account for heteroscedasticity and nonresponse (2023)
Mendelson, Jonathan; Elliott, Michael R.; Lahiri, Partha

Neyman's seminal 1934 paper and the developments of the next two decades transformed the practice of survey sampling and continue to provide the underpinnings of today's probability samples, including at the design stage. Although hugely useful, the assumptions underlying classic theory on optimal allocation, such as complete response and exact knowledge of stratum variances, are not always met, nor is the design-based approach the only way to identify good sample allocations. This thesis develops new ways to allocate samples for stratified random sampling (STSRS) designs. In Papers 1 and 2, I provide a Bayesian approach for optimal STSRS allocation for estimating the finite population mean via a univariate regression model with heteroscedastic errors. I use Bayesian decision theory on optimal experimental design, which accommodates uncertainty in design parameters. By allowing for heteroscedasticity, I aim for improved realism in some establishment contexts compared with some earlier Bayesian sample design work. Paper 1 assumes that the level of heteroscedasticity is known, which facilitates analytical results. Paper 2 relaxes this assumption, which results in an analytically intractable problem; I therefore develop a computational approach that uses Monte Carlo sampling to estimate the loss for a given allocation, in conjunction with a stochastic optimization algorithm that accommodates noisy loss functions. In simulation, the proposed approaches performed as well as or better than the design-based and model-assisted strategies considered, while having clearer theoretical justification. Paper 3 shifts focus to how to account for nonresponse when designing samples. Existing theory on optimal STSRS allocation generally assumes complete response. A common practice is to allocate the sample under complete response and then inflate the stratum sample sizes by the inverse of the anticipated response rates. I show that this practice overcorrects for nonresponse, leading to excessive costs per effective interview. I extend the existing design-based framework for STSRS allocation to accommodate scenarios with incomplete response, and I provide theoretical comparisons between my allocation and common alternatives, which illustrate how response rates, population characteristics, and cost structure can affect the methods' relative efficiency. In an application to a self-administered survey of military personnel, the proposed allocation resulted in a 25% increase in effective sample size compared with common alternatives.
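As a rough illustration of the common practice this thesis critiques (not of its proposed optimal allocation), the sketch below computes a Neyman allocation and then inflates each stratum by the inverse of its anticipated response rate. All stratum sizes, standard deviations, and response rates are invented for the example.

```python
import numpy as np

# Illustrative stratum population sizes, anticipated standard deviations,
# and anticipated response rates (all hypothetical).
N_h = np.array([5000, 3000, 2000])   # stratum population sizes
S_h = np.array([1.0, 2.5, 4.0])      # anticipated stratum std. deviations
r_h = np.array([0.6, 0.4, 0.3])      # anticipated response rates
n_total = 1000                       # total sample size before inflation

# Neyman allocation under complete response: n_h proportional to N_h * S_h.
shares = N_h * S_h
n_h = n_total * shares / shares.sum()

# Common practice: inflate invited cases by 1 / response rate so the
# expected respondent count matches the complete-response allocation.
n_h_inflated = n_h / r_h

print("Neyman allocation:        ", np.round(n_h))
print("Inflated for nonresponse: ", np.round(n_h_inflated))
print("Expected respondents:     ", np.round(n_h_inflated * r_h))
```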
Item: Bayesian Methods for Prediction of Survey Data Collection Parameters in Adaptive and Responsive Designs (2020)
Coffey, Stephanie Michelle; Elliott, Michael R.

Adaptive and responsive survey designs rely on estimates of survey data collection parameters (SDCPs), such as response propensity, to make intervention decisions during data collection. These interventions are made with some data collection goal in mind, such as maximizing data quality for a fixed cost or minimizing costs for a fixed measure of data quality. Data quality may be defined by response rate, sample representativeness, or error in survey estimates; accurate predictions of SDCPs are therefore extremely important. Predictions within a data collection period are most commonly generated using fixed information about sample cases together with accumulating paradata and survey response data. Interventions occur during the data collection period, however, meaning they are applied based on predictions from incomplete accumulating data. There is evidence that incomplete accumulating data can lead to biased and unstable predictions, particularly early in data collection. This dissertation explores the use of Bayesian methods to improve predictions of SDCPs during data collection by providing a mathematical framework for combining priors, based on external data about covariates in the prediction models, with the current accumulating data to generate posterior predictions of SDCPs for use in intervention decisions.

This dissertation includes three self-contained papers, each focused on the use of Bayesian methods to improve predictions of SDCPs for use in adaptive and responsive survey designs. The first paper predicts time to first contact, with priors generated from historical survey data. The second paper implements expert elicitation, a method for prior construction when historical data are not available. The last paper describes a data collection experiment conducted using a Bayesian framework, which attempts to minimize data collection costs without reducing the quality of a key survey estimate. In all three papers, the use of Bayesian methods introduces modest improvements in the predictions of SDCPs, especially early in data collection, when interventions would have the largest effect on survey outcomes. Additionally, the experiment in the last paper resulted in significant data collection cost savings without a significant effect on a key survey estimate. This work suggests that Bayesian methods can improve predictions of SDCPs that are critical for adaptive and responsive data collection interventions.
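A minimal sketch of the general idea, not of the dissertation's actual models: a conjugate Beta-Binomial update that combines a prior built from historical data with sparse accumulating response data to stabilize an early-data propensity prediction. The prior rate, prior strength, and counts are all hypothetical.

```python
# Prior built from historical survey data (values assumed for illustration):
# a Beta(alpha0, beta0) prior centered at the historical response rate.
prior_rate, prior_strength = 0.45, 50
alpha0 = prior_rate * prior_strength
beta0 = (1 - prior_rate) * prior_strength

# Sparse accumulating data early in the field period (invented counts).
contacts, responses = 20, 5

# The posterior mean shrinks the noisy early estimate (5/20 = 0.25)
# toward the historical rate, stabilizing the prediction.
posterior_mean = (alpha0 + responses) / (alpha0 + beta0 + contacts)
mle = responses / contacts

print(f"Early-data estimate:  {mle:.3f}")
print(f"Posterior prediction: {posterior_mean:.3f}")
```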
Item: Improving External Validity of Epidemiologic Analyses by Incorporating Data from Population-Based Surveys (2020)
Wang, Lingxiao; Li, Yan

Many epidemiologic studies forgo probability sampling and turn to volunteer-based samples because of cost, confidentiality, response burden, and the invasiveness of biological samples. However, the volunteers may not represent the underlying target population, mainly due to self-selection bias, so standard epidemiologic analyses may not generalize to the target population, a problem known as lack of "external validity." In survey research, propensity score (PS)-based approaches have been developed to improve the representativeness of nonprobability samples by using population-based surveys as references. These approaches create a set of "pseudo-weights" to weight the nonprobability sample up to the target population. There are two main types of PS-based approaches: (1) PS-based weighting methods, which use PSs to estimate participation rates of the nonprobability sample, such as inverse-of-PS weighting (IPSW); and (2) PS-based matching methods, which use PSs to measure similarity between the units in the nonprobability sample and the reference survey sample, such as PS adjustment by subclassification (PSAS). Although the PS-based weighting methods reduce bias, they are sensitive to propensity model misspecification and can be inefficient. The PS-based matching methods are more robust to propensity model misspecification and can avoid extreme weights, but matching methods such as PSAS are less effective at bias reduction. This dissertation proposes a novel PS-based matching method, the kernel weighting (KW) approach, to improve the external validity of epidemiologic analyses while achieving a better bias–variance tradeoff. A unifying framework is established for PS-based methods, providing three advances. First, the KW method is proved to provide consistent estimates, yet generally has smaller mean squared error than IPSW. Second, the framework reveals a fundamental strong exchangeability assumption (SEA) underlying the existing PS-based matching methods that had previously gone unrecognized; the SEA is relaxed to a weak exchangeability assumption that is more realistic for data analysis. Third, survey weights are scaled in propensity estimation to reduce the variance of the estimated PSs and improve the efficiency of all PS-based methods under the framework. The performance of the proposed PS-based methods is evaluated for estimating the prevalence of diseases and associations between risk factors and disease in the finite population.
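For concreteness, here is one common variant of the IPSW idea on simulated data: stack the nonprobability sample on a weighted reference survey sample, fit a weighted logistic model for membership in the nonprobability sample, and use the inverse of the estimated propensity as a pseudo-weight. Formulations vary across the literature (e.g., whether 1/PS or the odds is used, and how reference weights enter), and the KW method itself is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated data: a self-selected nonprobability sample whose covariate
# distribution differs from the target population, plus a reference
# probability sample with survey weights (all values invented).
x_np = rng.normal(1.0, 1.0, size=(500, 1))    # nonprobability sample
x_ref = rng.normal(0.0, 1.0, size=(800, 1))   # reference survey sample
w_ref = np.full(800, 100.0)                   # reference survey weights

X = np.vstack([x_np, x_ref])
z = np.concatenate([np.ones(500), np.zeros(800)])   # 1 = nonprob member
w = np.concatenate([np.ones(500), w_ref])           # weights in estimation

# Weighted logistic propensity model; pseudo-weight = 1 / estimated PS.
ps = LogisticRegression().fit(X, z, sample_weight=w).predict_proba(X)[:, 1]
pseudo_w = 1.0 / ps[:500]

# Outcome depends on the covariate, so the naive mean is biased.
y_np = 2.0 + 0.5 * x_np[:, 0] + rng.normal(0, 1, 500)
print("Naive mean:", y_np.mean())
print("IPSW mean: ", np.average(y_np, weights=pseudo_w))
```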
Item: A Unifying Parametric Framework for Estimating Finite Population Totals from Complex Samples (2019)
Flores Cervantes, Ismael; Brick, J. Michael; Kreuter, Frauke

We propose a unifying framework for improving the efficiency of design-based estimators of finite population characteristics in the presence of full response. We call the framework a parametric (PA) approach. The PA framework, an extension of model-assisted theory, uses an algorithmic approach driven by the observed data. The algorithm identifies the relevant subset of auxiliary variables related to the outcome, and the known population totals of these variables are used to compute the PA estimator. We apply the PA framework to three important estimation problems: identifying the functional form of a design-based estimator from the observed data; identifying the working or assisting model; and developing methodology for creating new design-based estimators. The PA estimators are theoretically justified and evaluated by simulation. This dissertation is limited to single-stage sample designs with full response, but the framework can be extended to other sample designs and to estimation with nonresponse.

Item: Selection Bias in Nonprobability Surveys: A Causal Inference Approach (2018)
Mercer, Andrew William; Kreuter, Frauke

Many in the survey research community have expressed concern at the growing popularity of nonprobability surveys. The absence of random selection prompts justified concerns about self-selection producing biased results and means that traditional, design-based estimation is inappropriate. The Total Survey Error (TSE) paradigm's designation of selection bias as attributable to undercoverage or nonresponse is not especially helpful for nonprobability surveys, as it rests on an implicit assumption that selection and inference rely on randomization. This dissertation proposes an alternative classification of the sources of selection bias in nonprobability surveys based on principles borrowed from the field of causal inference. The proposed typology describes selection bias in terms of the three conditions required for a statistical model to correct or explain systematic differences between a realized sample and the target population: exchangeability, positivity, and composition. We describe the parallels between causal and survey inference and explain how these three sources of bias operate in both probability and nonprobability survey samples. We then provide a critical review of current practices in nonprobability data collection and estimation, viewed through the lens of the causal bias framework. Next, we show how net selection bias can be decomposed into separate additive components associated with exchangeability, positivity, and composition. Using 10 parallel nonprobability surveys from different sources, we estimate these components for six measures of civic engagement, with the 2013 Current Population Survey Civic Engagement Supplement as a reference sample. We find that a large majority of the bias can be attributed to a lack of exchangeability. Finally, using the same six measures of civic engagement, we compare the performance of four approaches to nonprobability estimation based on Bayesian additive regression trees: propensity weighting (PW), outcome regression (OR), and two types of doubly robust estimators, outcome regression with a residual bias correction (OR-RBC) and outcome regression with a propensity score covariate (OR-PSC). We find that OR-RBC tends to have the lowest bias, variance, and RMSE, with PW only slightly worse on all three measures.
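A toy version of the OR-RBC form named above, with a linear model standing in for the Bayesian additive regression trees the dissertation actually uses; the data, weights, and model are all invented stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Simulated nonprobability sample and weighted reference sample.
x_np = rng.normal(1.0, 1.0, 600)
y_np = 1.0 + 0.8 * x_np + rng.normal(0, 1, 600)
x_ref = rng.normal(0.0, 1.0, 900)
w_ref = np.full(900, 50.0)     # reference survey weights (assumed)
w_np = np.ones(600)            # nonprob weights, e.g. from a PW step (assumed)

# Outcome regression fit on the nonprobability sample.
model = LinearRegression().fit(x_np.reshape(-1, 1), y_np)

# Project the model onto the weighted reference sample...
or_part = np.average(model.predict(x_ref.reshape(-1, 1)), weights=w_ref)
# ...then add a residual bias correction from the nonprob sample.
resid_part = np.average(y_np - model.predict(x_np.reshape(-1, 1)),
                        weights=w_np)

print("OR-RBC estimate:", or_part + resid_part)
```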
Item: Model-Assisted Estimators for Time-to-Event Data (2017)
Reist, Benjamin Martin; Valliant, Richard

In this dissertation, I develop model-assisted estimators of the proportion of a population that has experienced some event by time t. I provide theoretical justification for the new estimators using time-to-event models as the underlying framework. Using simulation, I compared these estimators to traditional methods, and I then applied them to a study of nurses' health, estimating the proportion of the population that had died after a certain period of time. The new estimators performed as well as, if not better than, existing methods. Finally, because this work assumes that all units are censored at the same point in time, I propose an extension that allows units' censoring times to vary.

Item: Investigation of Alternative Calibration Estimators in the Presence of Nonresponse (2017)
Han, Daifeng; Valliant, Richard

Calibration weighting is widely used to decrease variance, reduce nonresponse bias, and improve the face validity of survey estimates. In the purely sampling context, Deville and Särndal (1992) demonstrate that many alternative forms of calibration weighting are asymptotically equivalent, so for variance estimation purposes the generalized regression (GREG) estimator can be used to approximate some general calibration estimators with no closed-form solution, such as raking. It is unclear whether this conclusion holds when nonresponse exists and single-step calibration weighting is used to reduce nonresponse bias (i.e., calibration is applied to the basic sampling weights directly, without a separate nonresponse adjustment step). In this dissertation, we first examine whether alternative calibration estimators may perform differently in the presence of nonresponse. More specifically, we evaluate the properties of three widely used calibration estimators: the GREG with only main-effect covariates (GREG_Main), poststratification, and raking. In practice, the choice between poststratification and raking is often based on sample sizes and the availability of external data, and the raking variance is often approximated by a linear substitute containing residuals from a GREG_Main model. Our theoretical development and simulation work demonstrate that with nonresponse, poststratification, GREG_Main, and raking may perform differently, and survey practitioners should examine both the outcome model and the response pattern when choosing between these estimators. We then propose a distance measure that can be estimated for raking or GREG_Main from a given sample. Our analytical work shows that the distance measure follows a chi-square distribution when raking or GREG_Main is unbiased. A large distance measure is a warning sign of potential bias and poor confidence interval coverage for some variables in a survey, due to omission of a significant interaction term in the calibration process. Finally, we examine several alternative variance estimators for raking with nonresponse. Our simulation results show that when raking is model-biased, none of the linearization variance estimators under evaluation is unbiased. In contrast, the jackknife replication method performs well in variance estimation, although the confidence interval may still be centered in the wrong place if the point estimate is inaccurate.
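Raking, referenced throughout this abstract, can be written in a few lines as iterative proportional fitting. The sketch below calibrates invented base weights to two known margins without using the interaction cell counts, which is exactly why an omitted interaction can go undetected.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
age = rng.integers(0, 2, n)         # 0 = young, 1 = old (invented data)
sex = rng.integers(0, 2, n)         # 0 = male, 1 = female
w = np.ones(n)                      # base (design) weights

pop_age = np.array([600.0, 400.0])  # known population margins (assumed)
pop_sex = np.array([480.0, 520.0])

# Iterative proportional fitting: rescale weights to match each margin
# in turn, repeating until both margins hold simultaneously.
for _ in range(50):
    for var, totals in ((age, pop_age), (sex, pop_sex)):
        for c in (0, 1):
            mask = var == c
            w[mask] *= totals[c] / w[mask].sum()

print("Weighted age margin:", [round(w[age == c].sum(), 1) for c in (0, 1)])
print("Weighted sex margin:", [round(w[sex == c].sum(), 1) for c in (0, 1)])
```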
Item: Rapport and Its Impact on the Disclosure of Sensitive Information in Standardized Interviews (2014)
Sun, Hanyu; Conrad, Frederick G.; Kreuter, Frauke

Although there is no universally accepted way to define and operationalize rapport, the general consensus is that it can have an impact on survey responses, potentially affecting their quality. Moderately sensitive information is often collected in interviewer-administered modes of data collection. Although rapport-related verbal behaviors have been found to increase the disclosure of moderately sensitive information in face-to-face interactions, it is unknown whether rapport can be established to the same extent in video-mediated interviews, leading to similar levels of disclosure. Highly sensitive information is usually collected via self-administered modes of data collection. For some time, audio computer-assisted self-interviewing (ACASI) has been seen as one of the best methods for collecting sensitive information. Typically, the respondent first answers questions about nonsensitive topics in computer-assisted personal interviewing (CAPI) and is then switched to ACASI for sensitive questions. None of the existing research has investigated the possibility that the interviewer-respondent interaction prior to the ACASI questions may affect disclosure in ACASI. This dissertation used a laboratory experiment made up of two related studies aimed at answering these questions. The first study compares video-mediated interviews with CAPI to investigate whether rapport can be similarly established in video-mediated interviews, leading to similar levels of disclosure. There was no significant difference in rapport ratings between video-mediated and CAPI interviews, suggesting no evidence that rapport is any better established in CAPI than in video-mediated interviews. Compared with CAPI, higher disclosure of moderately sensitive information was found in video-mediated interviews, though the effects were only marginally significant. The second study examines whether the interviewer-respondent interaction prior to the ACASI questions may affect disclosure in ACASI. There was no significant difference in disclosure between the same-voice and different-voice conditions. However, there were marginally significant carryover effects of rapport in the preceding module on disclosure in the subsequent ACASI module: respondents who experienced high rapport in the preceding module disclosed more in the subsequent ACASI module. Furthermore, compared with ACASI, the percentage of reported sensitive behaviors was higher in video-mediated interviews for some of the highly sensitive questions.

Item: Testing for Phase Capacity in Surveys with Multiple Waves of Nonrespondent Follow-Up (2014)
Lewis, Taylor Hudson; Lahiri, Partha; Kreuter, Frauke

To mitigate the potentially harmful effects of nonresponse, many surveys repeatedly follow up with nonrespondents, often targeting a particular response rate or a predetermined number of completes. Each additional recruitment attempt generally brings in a new wave of data, but returns gradually diminish over the course of a fixed data collection protocol: each subsequent wave tends to contain fewer and fewer new responses, resulting in smaller and smaller changes in (nonresponse-adjusted) point estimates. Consequently, these estimates begin to stabilize. This is the notion of phase capacity, which suggests that some form of design change is in order, such as switching modes, increasing the incentive, or, as is considered exclusively in this research, discontinuing the nonrespondent follow-up campaign altogether. This dissertation consists of three methodological studies proposing and assessing techniques survey practitioners can use to formally test for phase capacity. One of the earliest known phase capacity testing methods proposed in the literature calls for multiply imputing nonrespondents' missing data to assess, retrospectively, whether the most recent wave of data significantly altered a key estimate. The first study introduces an adaptation of this test amenable to surveys that instead reweight the observed data to compensate for nonresponse. A general limitation of the methods discussed in the first study is that they apply only to a single point estimate; the second study evaluates two extensions, each aiming to produce a universal, yes-or-no phase capacity determination for a battery of point estimates. The third study builds on a prospective phase capacity test recently proposed in the literature that attempts to address whether an imminent wave of data will significantly alter a key estimate. All three studies include a simulation study and an application using data from the 2011 Federal Employee Viewpoint Survey.
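The stopping logic behind a retrospective phase capacity check can be caricatured as follows. The dissertation's actual tests are imputation- and replication-based significance tests; this sketch only flags when the latest wave moves a cumulative estimate by less than an arbitrary tolerance, with all numbers invented.

```python
# Cumulative (nonresponse-adjusted) estimates recomputed after each wave
# of follow-up (hypothetical values showing the typical stabilization).
estimates_by_wave = [52.0, 49.1, 48.2, 48.0, 47.95]
tolerance = 0.25   # substantive threshold for a "negligible" change (assumed)

# Flag phase capacity the first time a new wave barely moves the estimate.
for wave in range(1, len(estimates_by_wave)):
    change = abs(estimates_by_wave[wave] - estimates_by_wave[wave - 1])
    if change < tolerance:
        print(f"Phase capacity suggested after wave {wave + 1} "
              f"(change = {change:.2f})")
        break
```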
Item: A Comparison of Ex-Ante, Laboratory, and Field Methods for Evaluating Survey Questions (2014)
Maitland, Aaron; Presser, Stanley

A diverse range of evaluation methods is available for detecting measurement error in survey questions. Ex-ante question evaluation methods are relatively inexpensive because they do not require data collection from survey respondents; other methods require data collection from respondents in either the laboratory or the field. Research has explored how effective some of these methods are at identifying problems relative to one another. However, a weakness of most of these studies is that they do not compare the full range of question evaluation methods currently available to researchers. The purpose of this dissertation is to understand how the methods researchers use to evaluate survey questions influence the conclusions they draw about the questions, and to identify more effective ways to use the methods together. It consists of three studies. The first study examines the extent of agreement between ex-ante and laboratory methods in identifying problems and compares how well the methods predict differences between questions whose validity has been estimated in record-check studies. The second study evaluates the extent to which ex-ante and laboratory methods predict the performance of questions in the field, as measured by indirect assessments of data quality such as behavior coding, response latency, and item nonresponse. The third study evaluates the extent to which ex-ante, laboratory, and field methods predict the reliability of answers to survey questions, as measured by stability over time. The findings suggest (1) that a multiple-method approach to question evaluation is the best strategy, given differences between the methods in their ability to detect different types of problems, and (2) how to combine methods more effectively in the future.