College of Education

Permanent URI for this community: http://hdl.handle.net/1903/1647

The collections in this community comprise faculty research works as well as graduate theses and dissertations.

Search Results

Now showing 1 - 2 of 2
  • Item
    Performance of Propensity Score Methods in the Presence of Heterogeneous Treatment Effects
    (2016) Stepien, Kathleen Maria; Stapleton, Laura M; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Estimating an average treatment effect assumes that individuals and groups are homogeneous in their responses to a treatment or intervention. In practice, however, treatment effects are often heterogeneous. Selecting the most effective treatment, generalizing causal effect estimates to a population, and identifying subgroups for which a treatment is effective or harmful all motivate the study of heterogeneous treatment effects. In observational studies, treatment effects are often estimated using propensity score methods. This dissertation adds to the literature on the analysis of heterogeneous treatment effects using propensity score methods. Three propensity score methods were compared using Monte Carlo simulation: a single propensity score with exact matching on subgroup, matching using group-specific propensity scores, and multinomial propensity scores (MNPS) using generalized boosted modeling (GBM). The methods were evaluated under various group distributions, sample sizes, effect sizes, and selection models. An empirical analysis using data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K) is included to demonstrate the methods studied. Simulation results showed that estimating group-specific propensity scores provided the smallest mean squared error (MSE), that MNPS performance was comparable to GBM, and that including the group indicator in the propensity score model improved treatment effect estimates regardless of whether group membership influenced selection. In addition, subclassification performed poorly when one group was more prevalent in the extremes of the propensity score distribution.
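    The "group propensity scores" approach described above — fitting the propensity score model separately within each subgroup — can be sketched as follows. This is a minimal illustration with simulated data and a plain gradient-ascent logistic fit; the variable names and selection model are hypothetical stand-ins, not the dissertation's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, t, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression (a hypothetical helper,
    standing in for any standard propensity score estimator)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (t - p) / len(t)
    return w

def propensity(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Simulated data: covariate x, subgroup indicator g, treatment t.
# Selection into treatment depends on x and differs by group.
n = 2000
g = rng.integers(0, 2, n)
x = rng.normal(size=n)
true_ps = 1.0 / (1.0 + np.exp(-(0.5 * x + 0.8 * g - 0.4)))
t = (rng.random(n) < true_ps).astype(float)

# Group-specific propensity scores: fit the model separately per subgroup,
# so each group gets its own selection model.
ps = np.empty(n)
for grp in (0, 1):
    mask = g == grp
    w = fit_logistic(x[mask, None], t[mask])
    ps[mask] = propensity(x[mask, None], w)
```

    The estimated scores `ps` would then feed into matching or subclassification within each subgroup; the point of the per-group fit is that the selection model is allowed to differ entirely across groups.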
  • Item
    MODELING CLUSTERED DATA WITH FEW CLUSTERS: A CROSS-DISCIPLINE COMPARISON OF SMALL SAMPLE METHODS
    (2015) McNeish, Daniel; Hancock, Gregory R.; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Small sample inference with clustered data has received increased attention recently in the methodological literature, with several simulation studies examining the small sample behavior of various methods. There are several classes of methods that can account for clustering, and disciplinary allegiances are quite rigid: recent reviews have found that 94% of psychology studies use multilevel models, whereas only 3% of economics studies do. In economics, fixed effects models are far more popular, and in biostatistics there is a tendency to employ generalized estimating equations. As a result of these strong disciplinary preferences, methodological studies tend to focus on only a single class of methods (e.g., multilevel models in psychology) while largely ignoring other possible methods. The performance of small sample methods has therefore been investigated within classes of methods, but studies have not expanded investigations across disciplinary boundaries to compare more broadly the small sample methods that exist in the various classes of methods for clustered data. Motivated by an applied educational psychology study with a few clusters, this dissertation introduces the various methods for accommodating clustered data and their small sample extensions. A wide-ranging simulation study is then conducted to compare 12 methods for modeling clustered data with a small number of clusters. Many small sample studies generate data from fairly unrealistic models that feature only a single predictor at each level, so this study generates data from a more complex model with 8 predictors that is more reminiscent of data researchers might have in an applied study. Few studies have investigated extremely small numbers of clusters (fewer than 10), which are quite common in research areas where clusters contain many observations and are therefore expensive to recruit (e.g., schools, hospitals); this simulation study lowers the number of clusters well into the single digits. Results show that some methods, such as fixed effects models and Bayes estimation, clearly perform better than others, and that researchers may benefit from considering methods outside those typically employed in their specific discipline.
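    As a loose illustration of why fixed effects models can perform well when clusters are few and cluster effects are correlated with a predictor, here is a minimal simulation. The design (6 clusters, one predictor, specific effect sizes) is a hypothetical sketch, not the dissertation's actual 8-predictor simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate clustered data: 6 clusters (well into the "single digits"),
# each with its own intercept that is correlated with the predictor.
n_clusters, n_per = 6, 50
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(scale=2.0, size=n_clusters)                   # cluster effects
x = rng.normal(size=n_clusters * n_per) + 0.8 * u[cluster]   # x correlated with u
beta = 0.5                                                    # true slope
y = beta * x + u[cluster] + rng.normal(size=len(x))

# Pooled OLS ignores the cluster structure; with x correlated with the
# cluster effects, its slope estimate is biased.
X_pool = np.column_stack([np.ones_like(x), x])
b_pool = np.linalg.lstsq(X_pool, y, rcond=None)[0]

# Fixed effects: one dummy per cluster absorbs the cluster means,
# removing the cluster-level confounding entirely.
D = (cluster[:, None] == np.arange(n_clusters)).astype(float)
X_fe = np.column_stack([D, x])
b_fe = np.linalg.lstsq(X_fe, y, rcond=None)[0]
```

    In this setup the fixed effects slope `b_fe[-1]` lands near the true 0.5 while the pooled slope `b_pool[1]` is pulled toward the cluster effects, which is one mechanism behind the cross-discipline performance differences the abstract describes.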