Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so a given thesis or dissertation may take up to four months to appear in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 2 of 2
  • Item
    IRT vs. Factor Analysis Approaches in Analyzing Multigroup Multidimensional Binary Data: The Effect of Structural Orthogonality, and the Equivalence in Test Structure, Item Difficulty, & Examinee Groups
    (2008-05-30) Lin, Peng; Lissitz, Robert W; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The purpose of this study was to investigate the performance of different approaches in analyzing multigroup multidimensional binary data under different conditions. Two multidimensional Item Response Theory (MIRT) methods (concurrent MIRT calibration and separate MIRT calibration with linking) and one factor analysis method (concurrent factor analysis calibration) were examined. The performance of the unidimensional IRT method compared to its multidimensional counterparts was also investigated. The study was based on simulated data. A common-item nonequivalent groups design was employed with the manipulation of four factors: the structural orthogonality, the equivalence of test structure, the equivalence of item difficulty, and the equivalence of examinee groups. The performance of the methods was evaluated based on the recovery of the item parameters and the estimation of the true score of the examinees. The results indicated that, in general, the concurrent factor analysis method performed as well as, and sometimes even better than, the two MIRT methods in recovering the item parameters. However, in estimating the true score of examinees, the concurrent MIRT method usually performed better than the concurrent factor analysis method. The results also indicated that the unidimensional IRT method was quite robust to violations of the unidimensionality assumption. (A minimal data-generation sketch for this kind of design appears after this list.)
  • Item
    Effect of Categorization on Type I Error and Power in Ordinal Indicator Latent Means Models for Between-Subjects Designs
    (2006-07-28) Choi, Jaehwa; Hancock, Gregory R; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Due to the superiority of latent means models (LMM) over the modeling of means on a single measured variable (ANOVA) or on a composite (MANOVA) in terms of power and effect size estimation, LMM is increasingly recognized as a powerful modeling technique. By testing group differences (e.g., a treatment effect) at the latent level, LMM makes it possible to account for the consequences of measurement error in the measured variable(s). LMM has been developed for both interval indicators (IILMM; Jöreskog & Goldberger, 1975; Muthén, 1989; Sörbom, 1974) and ordinal indicators (OILMM; Jöreskog, 2002). Recently, effect size estimates, post hoc power estimates, and a priori sample size determination for LMM have been developed for interval indicators (Hancock, 2001). Considering the frequent analysis of ordinal data in the social and behavioral sciences, it seems most appropriate that these measures and methods be extended to LMM involving such data, the OILMM. However, unlike the IILMM, OILMM power analysis involves additional issues regarding the ordinal indicators. This research starts by illustrating various aspects of the OILMM: options for handling the metric level of ordinal variables, options for estimating the OILMM, and the nature of ordinal data (e.g., number of categories, categorization rules). This research also proposes a test statistic for OILMM power analysis parallel to the IILMM results of Hancock (2001). The main purpose of this research is to examine the effect of categorization (focusing on the options for handling ordinal indicators and the number of ordinal categories) on Type I error and power in the OILMM, based on the proposed measures and OILMM test statistic. A simulation study is conducted for the two-population between-subjects design case. A numerical study is also provided, using potentially useful statistics and indices, to help in understanding the consequences of categorization, especially when ordinal data are treated as if they had metric properties. (A minimal categorization sketch appears after this list.)
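
The first abstract above describes generating multigroup multidimensional binary item responses under manipulated conditions. The following is a minimal sketch of that kind of data-generating step, assuming a compensatory two-dimensional 2PL (M2PL) model; the parameter ranges, group means, and dimension correlation are illustrative assumptions, not the dissertation's actual simulation conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_m2pl(theta, a, d, rng):
    """Binary responses under a compensatory M2PL: P(X=1) = logistic(theta @ a.T + d)."""
    logits = theta @ a.T + d              # shape: (n_examinees, n_items)
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(p.shape) < p).astype(int)

n_items, n_per_group = 30, 1000

# Item parameters: two discriminations and one intercept (difficulty-related) per item.
a = rng.uniform(0.5, 2.0, size=(n_items, 2))
d = rng.normal(0.0, 1.0, size=n_items)

# "Structural orthogonality": correlation between the two latent dimensions.
rho = 0.0                                  # 0.0 -> orthogonal structure
cov = np.array([[1.0, rho], [rho, 1.0]])

# "Equivalence of examinee groups": give group 2 a shifted latent mean.
theta_g1 = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
theta_g2 = rng.multivariate_normal([0.3, 0.3], cov, size=n_per_group)

x_g1 = simulate_m2pl(theta_g1, a, d, rng)
x_g2 = simulate_m2pl(theta_g2, a, d, rng)
print(x_g1.shape, round(x_g1.mean(), 3), round(x_g2.mean(), 3))
```

In the study's common-item nonequivalent groups design, the two groups would take only partially overlapping item sets; here both groups answer all items for brevity.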
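
The second abstract studies the effect of categorizing ordinal indicators in latent means models. The sketch below, under a simple one-factor model with an illustrative loading, quantile-based thresholds, and a hypothetical latent mean difference (none taken from the dissertation), shows the basic categorization step: a continuous indicator is cut into k ordered categories, which can then be treated as if it had metric properties.

```python
import numpy as np

rng = np.random.default_rng(7)
n_per_group, loading = 5000, 0.7

# Latent factor scores for two between-subjects groups with a latent mean difference.
eta = np.concatenate([rng.normal(0.0, 1.0, n_per_group),
                      rng.normal(0.5, 1.0, n_per_group)])
group = np.repeat([0, 1], n_per_group)

# Continuous (interval) indicator under a one-factor model: loading * eta + unique error.
y_cont = loading * eta + rng.normal(0.0, np.sqrt(1.0 - loading**2), 2 * n_per_group)

def categorize(y, k):
    """Cut a continuous indicator into k ordered categories via quantile thresholds."""
    thresholds = np.quantile(y, np.linspace(0.0, 1.0, k + 1)[1:-1])
    return np.digitize(y, thresholds)      # integer codes 0 .. k-1

y_ord = categorize(y_cont, k=4)

# Treating the ordinal codes as if they were interval scores: compare observed
# group mean differences on the continuous vs. the categorized indicator.
d_cont = y_cont[group == 1].mean() - y_cont[group == 0].mean()
d_ord = y_ord[group == 1].mean() - y_ord[group == 0].mean()
print(f"continuous indicator mean difference: {d_cont:.3f}")
print(f"4-category ordinal indicator mean difference: {d_ord:.3f}")
```

This covers only the data side of the comparison; the dissertation's OILMM power analysis fits latent means models to the categorized indicators (e.g., via estimators appropriate for ordinal data), which is beyond this sketch.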