Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results
4 results
Item
Beyond Response Rates: The Effect of Prepaid Incentives on Measurement Error (2012)
Medway, Rebecca; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

As response rates continue to decline, survey researchers increasingly offer incentives as a way to motivate sample members to take part in their surveys. Extensive prior research demonstrates that prepaid incentives are an effective tool for doing so. If prepaid incentives influence behavior at the stage of deciding whether or not to participate, they may also alter the way that respondents behave while completing surveys. Nevertheless, most research has focused narrowly on the effect that incentives have on response rates. Survey researchers should have a better empirical basis for assessing the potential tradeoffs associated with the higher response rates yielded by prepaid incentives. This dissertation describes the results of three studies aimed at expanding our understanding of the impact of prepaid incentives on measurement error. The first study explored the effect that a $5 prepaid cash incentive had on twelve indicators of respondent effort in a national telephone survey. The incentive led to significant reductions in item nonresponse and interview length. However, it had little effect on the other indicators, such as response order effects and responses to open-ended items. The second study evaluated the effect that a $5 prepaid cash incentive had on responses to sensitive questions in a mail survey of registered voters. The incentive resulted in a significant increase in the proportion of highly undesirable attitudes and behaviors to which respondents admitted and had no effect on responses to less sensitive items. While the incentive led to a general pattern of reduced nonresponse bias and increased measurement bias for the three voting items where administrative data were available for the full sample, these effects generally were not significant. The third study tested for measurement invariance in incentive and control group responses to four multi-item scales from three recent surveys that included prepaid incentive experiments. There was no evidence of differential item functioning; however, full metric invariance could not be established for one of the scales. Overall, these results suggest that prepaid incentives had minimal impact on measurement error. These findings should be reassuring for survey researchers considering the use of prepaid incentives to increase response rates.

Item
Neighborhood Characteristics and Participation in Household Surveys (2010)
Casas-Cordero Valencia, Carolina; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Declining response rates in household surveys continue to demand not only a better understanding of the mechanisms underlying nonresponse, but also the identification of auxiliary variables that can help assess, reduce, and hopefully correct for this source of error in survey estimates. Using data from the L.A. Family and Neighborhood Study (L.A.FANS), this dissertation shows that observable characteristics of the sampled neighborhoods have the potential to advance both survey research topics. Paper 1 of this dissertation advances our understanding of the role that local neighborhood processes play in survey participation. The measures of the social and physical environments are shown to be significant predictors of household cooperation in the L.A.FANS, even after controlling for the socio-economic composition of households and neighborhoods. A useful feature of the indicators of the physical environment is that they can be observed without conducting the actual interview, so they are available for both respondents and nonrespondents. However, survey interviewers charged with this task may make errors that limit the usability of these observations. Paper 2 uses a multilevel framework to examine 25 neighborhood items rated by survey interviewers. The results show that errors vary by type of item and that interviewer perceptions are largely driven by characteristics of the sampled areas, not by characteristics of the interviewers themselves. If predictive of survey participation, neighborhood characteristics can be useful for survey fieldwork decisions aimed at increasing response rates. If neighborhood characteristics are also related to survey outcome variables, they can furthermore be used to inform strategies aimed at reducing nonresponse bias. Paper 3 compares the effectiveness of several different neighborhood characteristics in nonresponse adjustments for the L.A.FANS, and shows that interviewer observations perform similarly to Census variables when used for weighting key L.A.FANS estimates. Results of this dissertation are relevant for those who want to increase response rates by tailoring fieldwork efforts to neighborhood characteristics. The most important contribution of this dissertation, however, lies in re-discovering intersections between survey methodology and urban sociology.
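Because these neighborhood observations exist for respondents and nonrespondents alike, they can feed a standard weighting-class nonresponse adjustment of the kind Paper 3 evaluates. Below is a minimal sketch in Python, not L.A.FANS code: the data, the "disorder_level" rating, and the column names are invented for illustration.

```python
import pandas as pd

# Hypothetical sample: one row per sampled household. The interviewer-rated
# neighborhood observation ('disorder_level') is recorded for respondents
# and nonrespondents alike; 'responded' flags a completed interview.
sample = pd.DataFrame({
    "disorder_level": ["low", "low", "low", "medium", "medium", "high", "high", "high"],
    "responded":      [1,     1,     1,     1,        0,        1,      0,      0],
})

# Weighting-class adjustment: estimate the response rate within each class
# of the observed characteristic, then weight respondents by its inverse so
# that classes with more nonresponse count proportionally more.
response_rates = sample.groupby("disorder_level")["responded"].mean()
sample["nr_weight"] = sample["disorder_level"].map(1.0 / response_rates)

respondents = sample[sample["responded"] == 1]
print(respondents[["disorder_level", "nr_weight"]])
# low: rate 1.00 -> weight 1.0; medium: 0.50 -> 2.0; high: 0.33 -> 3.0
```

In practice these adjustment factors would be multiplied into the base sampling weights, and the same logic extends to response-propensity models that use several neighborhood observations at once.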
Item
A General Method for Estimating the Classification Reliability of Complex Decisions Based on Configural Combinations of Multiple Assessment Scores (2007-01-24)
Douglas, Karen; Mislevy, Robert; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This study presents a general method for estimating the classification reliability of complex decisions based on multiple scores from a single test administration. The proposed method consists of four steps that can be applied to a variety of measurement models and configural rules for combining test scores (a minimal sketch of the four steps follows the list):
Step 1: Fit a measurement model to the observed data.
Step 2: Simulate replicate distributions of plausible observed scores based on the measurement model.
Step 3: Construct a contingency table that shows the congruence between true and replicate scores for decision accuracy, and between two replicate scores for decision consistency.
Step 4: Calculate measures to characterize agreement in the contingency tables.
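The sketch below illustrates the four steps end to end in Python under a classical test theory model with a single pass/fail cutoff. It is not the dissertation's code: the score distribution, reliability, and cutoff are invented values, and the configural rules studied in the dissertation would replace the single decision rule used here.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Assumed inputs (hypothetical values, not from the dissertation) ---
n = 100_000                 # simulated examinees
mu_t, sd_t = 500.0, 100.0   # true-score distribution from the fitted model
reliability = 0.90          # test reliability under the CTT model
cutoff = 450.0              # passing score for the decision rule

# Step 1: "fit" the measurement model. Here the CTT parameters are taken as
# given, and the error SD is derived from the reliability coefficient.
sd_e = sd_t * np.sqrt((1 - reliability) / reliability)

# Step 2: simulate plausible observed scores -- two independent replicates
# per examinee, each equal to the true score plus measurement error.
true = rng.normal(mu_t, sd_t, n)
rep1 = true + rng.normal(0, sd_e, n)
rep2 = true + rng.normal(0, sd_e, n)

# Single-cutoff decision rule (a configural rule would combine several scores).
pass_true, pass_1, pass_2 = true >= cutoff, rep1 >= cutoff, rep2 >= cutoff

def agreement(a, b):
    # Steps 3 and 4: build the 2x2 contingency table of pass/fail decisions
    # and summarize it with proportion agreement and Cohen's kappa.
    table = np.array([[np.mean(a & b),  np.mean(a & ~b)],
                      [np.mean(~a & b), np.mean(~a & ~b)]])
    p_agree = table[0, 0] + table[1, 1]
    p_chance = (table[0].sum() * table[:, 0].sum()
                + table[1].sum() * table[:, 1].sum())
    return p_agree, (p_agree - p_chance) / (1 - p_chance)

acc, acc_kappa = agreement(pass_true, pass_1)   # decision accuracy
con, con_kappa = agreement(pass_1, pass_2)      # decision consistency
print(f"accuracy={acc:.3f} (kappa={acc_kappa:.3f}), "
      f"consistency={con:.3f} (kappa={con_kappa:.3f})")
```

Decision accuracy compares classifications from true and replicate scores (observable only in simulation), while decision consistency compares two replicates, mirroring what an actual retest would show.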
Using a classical test theory model, a simulation study explores the effect of increasing the number of tests, the strength of the relationship among tests, and the number of opportunities to pass on classification accuracy and consistency. Next, the method is applied to actual data from the GED Testing Service to illustrate its utility for informing practical decisions. Simulation results support the validity of the method for estimating classification reliability, and the method provides credible estimates of classification reliability for the GED Tests. Application of configural rules yields complex findings that sometimes show different results for classification accuracy and consistency. Unexpected findings support the value of using the method to explore classification reliability as a means of improving decision rules. Highlighted findings: 1) the compensatory rule (in which test scores are added) performs consistently well across almost all conditions; 2) conjunctive and complementary rules frequently show opposite results; 3) including more tests in the decision rule influences classification reliability differently depending on the rule; 4) combining scores from highly related tests increases classification reliability; and 5) providing multiple opportunities to pass yields mixed results. Future studies are suggested to explore the use of other measurement models, varying levels of test reliability, modeling multiple attempts in which learning occurs between testings, and in-depth study of incorrectly classified examinees.

Item
Gricean Effects in Self-Administered Surveys (2005-10-31)
Yan, Ting; Tourangeau, Roger; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Despite the best efforts of questionnaire designers, survey respondents don't always interpret questions as the question writers intended. Researchers have used Grice's conversational maxims to explain some of these discrepancies. This dissertation extends this work by reviewing studies on the use of Grice's maxims by survey respondents and describing six new experiments that looked for direct evidence that respondents apply Grice's maxims. The strongest evidence for respondents' use of the maxims came from an experiment that varied the numerical labels on a rating scale; the mean shift of responses toward the right side of the rating scale induced by negative numerical labels was robust across items and fonts. Process measures indicated that respondents applied the maxim of relation in interpreting the questions. Other evidence supported use of the maxim of quantity: as predicted, correlations between two highly similar items were lower when they were asked together, and reversing the wording of one of the items didn't prevent respondents from applying the maxim of quantity. Evidence was weaker for the application of Grice's maxim of manner; respondents still seemed to use definitions (as was apparent from the reduced variation in their answers), even though the definitions were designed to be uninformative. That direct questions without filters induced significantly more responses on the upper end of the scale -- presumably because of the presuppositions direct questions carried -- supported respondents' application of the maxim of quality. There was little support for respondents' use of the maxim of relation from an experiment on the physical layout of survey questions; the three different layouts didn't influence how respondents perceived the relations among items. These results provide some evidence that both survey "satisficers" and survey "optimizers" may draw automatic inferences based on Gricean maxims, but that only "optimizers" will carry out the more controlled processes requiring extra effort. Practical implications for survey practice include the need for continued attention to secondary features of survey questions in addition to traditional questionnaire development issues. Additional experiments that incorporate other techniques, such as eye tracking or cognitive interviews, may help to uncover other subtle mechanisms affecting survey responses.