Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results
2 results
Item: A Multilevel Testlet Joint Model of Responses and Response Time (2020)
Olson, Evan; Jiao, Hong; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Approaches to joint modeling of responses and response times (RTs) assume conditional independence between the responses and the RTs. Further, IRT modeling of the responses assumes local independence among items and among persons. In practice, violations of local item independence result from the bundling of items into testlets, while violations of local person independence arise in complex examinee sampling situations. This study proposes and evaluates a multilevel testlet joint model of responses and RTs that accounts for the dual local item and person dependence due to testlets and complex sampling. A simulation study investigates parameter recovery for the proposed model and compares it with models that do not account for the dual local dependencies. In addition to the simulation study, an empirical-data study is conducted to evaluate relative model fit indices. Overall, results from statistical analyses and inspection of graphs of descriptive statistics supported the need to model local item dependency and local person dependency. Parameter recovery measures in the simulation study showed interactions between the simulation factors and the model factor when the comparison models were included. When the deviance model fit criterion was applied, the proposed model was selected as the best-fitting model; under the Bayesian model fit index DIC, however, the proposed model was not selected as best-fitting in either the simulation or the empirical data analyses. Limitations of the study and opportunities to refine joint response and RT modeling of this dual dependency are discussed.

Item: INVESTIGATING ITEM PARAMETER DRIFT (IPD) AMPLIFICATION AND CANCELLATION AT THE TESTLET-LEVEL ON MODEL PARAMETER ESTIMATION IN A MULTISTAGE TEST (2018)
Bryant, Rosalyn; Jiao, Hong; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

A test fairness goal is to design testing systems that measure performance with acceptable accuracy across a wide range of test taker ability levels and across subgroups of the student population. Research has shown that adaptive tests achieve a high degree of test score accuracy. Standard item response theory (IRT) models are the predominant measurement models for computerized multistage adaptive tests (MSTs) in educational testing, and item sets, or testlets, are widely used item types in MSTs. Fitting standard IRT models to response data carries item parameter invariance and local independence assumptions. In practice, unexpected shifts in parameter values across test administrations, or item parameter drift (IPD), have been reported. Moreover, testlet items are known to exhibit local item dependence (LID) due to interactions between the test taker and the common testlet stimulus. When IPD and/or LID are present, standard IRT assumptions are likely violated, threatening ability estimate accuracy and test score validity. A conjecture in this study is that individually insignificant IPD may become significant at the testlet level through amplification or remain insignificant at the testlet level through cancellation.
To date, no studies have investigated the combined impact of IPD amplification or cancellation at the testlet level with LID on ability estimation accuracy in an MST system. In this study, MST ability estimates generated under the two-parameter logistic (2PL) IRT model and the 2PL testlet response theory (TRT) model are compared to determine whether there are significant differences when testlet-level IPD amplification or cancellation and LID are exhibited and when LID is not exhibited. The study further examines the combined impact of testlet-level IPD amplification or cancellation and/or LID on MST routing performance. The study finds that ability estimation, routing, and decision accuracy are not significantly impacted by the combined amplification, cancellation, and/or LID effects. However, routing accuracy is impacted by module differences, routing error stage, and testlet effects. Finally, moderate-ability test takers are found to be more likely to be misclassified than low- or high-ability test takers.
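To make the model family in the first item concrete, the following is a minimal sketch of a joint response and RT model with a testlet effect, in the spirit of the hierarchical joint-modeling framework that abstract builds on; the notation and exact functional forms are illustrative assumptions, not the dissertation's specification:

\[
P(X_{ij} = 1 \mid \theta_j) = \frac{\exp\{a_i(\theta_j - b_i + \gamma_{j d(i)})\}}{1 + \exp\{a_i(\theta_j - b_i + \gamma_{j d(i)})\}},
\qquad
\log T_{ij} = \beta_i - \tau_j + \varepsilon_{ij}, \quad \varepsilon_{ij} \sim N(0, \sigma_i^2)
\]

Here \(X_{ij}\) and \(T_{ij}\) are person \(j\)'s response and RT on item \(i\); \(\gamma_{j d(i)}\) is a person-specific random effect for the testlet \(d(i)\) containing item \(i\), capturing local item dependence; and the lognormal RT component depends on item time intensity \(\beta_i\) and person speed \(\tau_j\). Linking \(\theta_j\) and \(\tau_j\) through a joint person-level distribution (e.g., bivariate normal) is what makes the model joint rather than two separate models, and local person dependence from complex sampling can be handled multilevel-style by decomposing the traits, e.g. \(\theta_j = \theta_{g(j)} + \theta_{j \mid g}\) for examinee \(j\) nested in group \(g\).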
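For the second item, the contrast between the 2PL IRT and 2PL TRT response functions, and a testlet-level reading of IPD amplification and cancellation, can be sketched as follows; again, the notation is an illustrative assumption rather than the dissertation's exact formulation:

\[
\text{2PL IRT: } P(X_{ij} = 1) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_i)\}},
\qquad
\text{2PL TRT: } P(X_{ij} = 1) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_i - \gamma_{j d(i)})\}}
\]

If item \(i\)'s difficulty drifts by \(\delta_i\) between administrations, so that \(b_i' = b_i + \delta_i\), the aggregate drift over a testlet \(t\) is \(\sum_{i \in t} \delta_i\). Same-signed drifts can accumulate into a practically significant shift at the testlet level (amplification) even when each \(\delta_i\) is individually negligible, while mixed-sign drifts can sum toward zero (cancellation). This is the conjecture the study evaluates against ability estimation, routing, and classification accuracy in the MST.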