Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so a given thesis or dissertation may take up to four months to appear in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results
2 results
Item
Exploring Unidimensional Proficiency Classification Accuracy From Multidimensional Data in a Vertical Scaling Context (2010)
Kroopnick, Marc Howard; Mislevy, Robert J; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
When Item Response Theory (IRT) is operationally applied for large-scale assessments, unidimensionality is typically assumed. This assumption requires that the test measure a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality requires that the battery of tests across grades measure the same trait, just at different levels of difficulty. Many researchers have shown that this assumption may not hold for certain test batteries, and therefore the results from applying a unidimensional model to multidimensional data may be called into question. This research investigated the impact on classification accuracy when multidimensional vertical scaling data are estimated with a unidimensional model. The multidimensional compensatory two-parameter logistic (MC2PL) model was the data-generating model for two levels of a test administered to simulees of correspondingly different abilities. Simulated data from the MC2PL model were estimated with a unidimensional two-parameter logistic (2PL) model, and classification decisions were made from a simulated bookmark standard-setting procedure based on the unidimensional estimation results. Those unidimensional classification decisions were compared to the "true" unidimensional classification (proficient or not proficient) of simulees in multidimensional space, obtained by projecting a simulee's generating two-dimensional theta vector onto a unidimensional scale via a number-correct transformation on the entire test battery (i.e., across both grades). Specifically, conditional classification accuracy measures were considered: the proportion of truly not proficient simulees classified correctly and the proportion of truly proficient simulees classified correctly were the criterion variables. Manipulated factors in this simulation study included the confound of item difficulty with dimensionality, the difference in mean abilities on both dimensions of the simulees taking each test in the battery, the choice of common items used to link the exams, and the correlation of the two abilities. Results suggested that the correlation of the two abilities and the confound of item difficulty with dimensionality both had an effect on the conditional classification accuracy measures. There was little or no evidence that the choice of common items or the differences in mean abilities of the simulees taking each test had an effect.
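For readers unfamiliar with the models named in this abstract, the following is a minimal Python sketch of the kind of design it describes: responses generated from a compensatory MC2PL model, and a "true" unidimensional proficiency classification obtained by projecting the generating two-dimensional theta onto an expected number-correct score for the whole battery. All sizes, parameter distributions, and the cut score are illustrative assumptions, not values from the dissertation.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the dissertation's design).
n_items, n_simulees = 40, 1000

# MC2PL item parameters: a two-column discrimination matrix and an intercept per item.
a = rng.uniform(0.5, 2.0, size=(n_items, 2))
d = rng.normal(0.0, 1.0, size=n_items)

# Correlated two-dimensional abilities; the correlation was a manipulated factor.
rho = 0.8
theta = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_simulees)

# Compensatory MC2PL: P(X = 1 | theta) = 1 / (1 + exp(-(a1*theta1 + a2*theta2 + d))).
p = 1.0 / (1.0 + np.exp(-(theta @ a.T + d)))
responses = (rng.uniform(size=p.shape) < p).astype(int)

# One reading of the "true" classification: project the generating theta vector
# onto a unidimensional scale via the expected number-correct score on the
# entire battery, then apply a cut score (an arbitrary illustrative value here).
expected_number_correct = p.sum(axis=1)
cut_score = 24.0
truly_proficient = expected_number_correct >= cut_score

# A unidimensional 2PL would then be fit to `responses` (e.g., with an IRT
# package), and its bookmark-based classifications compared against
# `truly_proficient` to obtain the conditional classification accuracy measures.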
Item
IRT vs. Factor Analysis Approaches in Analyzing Multigroup Multidimensional Binary Data: The Effect of Structural Orthogonality, and the Equivalence in Test Structure, Item Difficulty, & Examinee Groups (2008-05-30)
Lin, Peng; Lissitz, Robert W; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The purpose of this study was to investigate the performance of different approaches in analyzing multigroup multidimensional binary data under different conditions. Two multidimensional Item Response Theory (MIRT) methods (concurrent MIRT calibration and separate MIRT calibration with linking) and one factor analysis method (concurrent factor analysis calibration) were examined. The performance of the unidimensional IRT method relative to its multidimensional counterparts was also investigated. The study was based on simulated data. A common-item nonequivalent groups design was employed with the manipulation of four factors: the structural orthogonality, the equivalence of test structure, the equivalence of item difficulty, and the equivalence of examinee groups. The performance of the methods was evaluated based on the recovery of the item parameters and the estimation of the true scores of the examinees. The results indicated that, in general, the concurrent factor analysis method performed as well as, and sometimes better than, the two MIRT methods in recovering the item parameters. However, in estimating the true scores of examinees, the concurrent MIRT method usually performed better than the concurrent factor analysis method. The results also indicated that the unidimensional IRT method was quite robust to violations of the unidimensionality assumption.
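As a rough illustration of the common-item nonequivalent groups design mentioned in this abstract, the Python sketch below builds the sparse response matrix in which each group answers its own unique items plus a shared anchor block. The sizes, ability means, and the simplified unidimensional 2PL generating model are assumptions made only to show the data layout, not the dissertation's actual multidimensional, multigroup setup.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative layout: each group answers its own unique items plus a shared
# anchor; the other group's unique items are missing by design.
n_per_group = 500
n_unique, n_anchor = 30, 10
n_items = 2 * n_unique + n_anchor          # columns: [form A | form B | anchor]

def responses_2pl(theta, a, b):
    """Binary responses from a generic unidimensional 2PL model (illustration only)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(float)

a = rng.uniform(0.8, 1.8, size=n_items)
b = rng.normal(0.0, 1.0, size=n_items)

theta_g1 = rng.normal(0.0, 1.0, size=n_per_group)   # reference group
theta_g2 = rng.normal(0.5, 1.0, size=n_per_group)   # nonequivalent (higher-ability) group

data = np.full((2 * n_per_group, n_items), np.nan)
cols_a = np.r_[0:n_unique, 2 * n_unique:n_items]             # form A items + anchor
cols_b = np.r_[n_unique:2 * n_unique, 2 * n_unique:n_items]  # form B items + anchor
data[:n_per_group, cols_a] = responses_2pl(theta_g1, a[cols_a], b[cols_a])
data[n_per_group:, cols_b] = responses_2pl(theta_g2, a[cols_b], b[cols_b])

# Concurrent calibration fits all items on this sparse matrix at once;
# separate calibration fits each group's block and links through the anchor items.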