Human Development & Quantitative Methodology

Permanent URI for this community: http://hdl.handle.net/1903/2248

The departments within the College of Education were reorganized and renamed as of July 1, 2011. This department incorporates the former Department of Measurement, Statistics & Evaluation; the former Department of Human Development; and the Institute for Child Study.

Search Results

  • Item
    CROSS-CLASSIFIED MODELING OF DUAL LOCAL ITEM DEPENDENCE
    (2014) Xie, Chao; Jiao, Hong; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Previous studies have mainly focused on a single source of local item dependence (LID). In some cases, however, such as scenario-based science assessments, LID may arise from two sources simultaneously; in this study, LID caused by two factors at once is termed dual local item dependence (DLID). This study proposed a cross-classified model to account for DLID. Two simulation studies were conducted, with the primary purpose of evaluating the performance of the proposed cross-classified model; data sets with DLID were simulated with both testlet effects and content clustering effects. The second purpose was to investigate the factors affecting the need to use the more complex cross-classified modeling of DLID rather than a simplified multilevel modeling of LID that ignores the cross-classification structure. In both simulation studies, five factors were manipulated: sample size, number of testlets, testlet length, magnitude of the testlet effects (represented by their standard deviations, SDs), and magnitude of the content clustering effects (also represented by SDs). The two studies differed in that simulation study 1 constrained the SDs of the testlet effects and of the content clustering effects to be equal across testlets and content areas, respectively, whereas simulation study 2 relaxed this constraint by allowing mixed SDs for both the testlet effects and the content clustering effects. Results of both simulation studies indicated that the proposed cross-classified model recovered item difficulty, person ability, and random-effect SD parameters more accurately, with smaller estimation errors, than the two multilevel models and the Rasch model, which ignored one or both item clustering effects. Two manipulated variables, the magnitude of the testlet effects and the magnitude of the content clustering effects, determined the necessity of using the more complex cross-classified model over the simplified multilevel models and the Rasch model: the larger these magnitudes, the more necessary the proposed cross-classified model becomes. Limitations are discussed and suggestions for future research are presented. (See the model sketch after this list.)
  • Item
    IMPACTS OF LOCAL ITEM DEPENDENCE OF TESTLET ITEMS WITH THE MULTISTAGE TESTS FOR PASS-FAIL DECISIONS
    (2010) Lu, Ru; Jiao, Hong; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The primary purpose of this study is to investigate the impact of the local item dependence (LID) of testlet items on the performance of multistage tests (MSTs) that make pass/fail decisions. LID is simulated in testlet items, that is, items that physically share the same stimulus. In the MST design, the proportion of testlet items is a manipulated factor; other studied factors include testlet item position, LID magnitude, and test length. The second purpose of this study is to use a testlet response model to account for LID in the context of MSTs, and the possible gains of using a testlet model over a standard IRT model are evaluated. The results indicate that, under the simulated conditions, testlet item position has a minimal effect on the precision of ability estimation and on decision accuracy, while the item pool structure (the proportion of testlet items), the LID magnitude, and the test length have fairly substantial effects. Ignoring the LID effects and fitting a unidimensional 3PL model results in a loss of ability estimation precision and decision accuracy. Ability estimation is adversely affected by larger proportions of testlet items, moderate and high LID levels, and short test lengths. As the LID condition worsens (larger LID magnitude or a larger proportion of testlet items), the decision accuracy rates decrease. Fitting a 3PL testlet response model does not reach the same level of ability estimation precision under all simulation conditions; in fact, ignoring LID and fitting the standard 3PL model yields inflated estimates of ability estimation precision and decision accuracy. (See the model sketch after this list.)