Mixed-format Test Equating: Effects of Test Dimensionality and Common-item Sets

dc.contributor.advisor  Lissitz, Robert  en_US
dc.contributor.author  Cao, Yi  en_US
dc.date.accessioned  2009-01-24T07:13:28Z
dc.date.available  2009-01-24T07:13:28Z
dc.date.issued  2008-11-18  en_US
dc.identifier.uri  http://hdl.handle.net/1903/8843
dc.description.abstract  The main purposes of this study were to investigate systematically the impact of representative and non-representative common-item sets, in terms of statistical, content, and format specifications, in mixed-format tests equated by concurrent calibration with unidimensional IRT models, and to examine the robustness of this approach to various multidimensional test structures. To fulfill these purposes, a simulation study was conducted in which five factors were manipulated: test dimensionality structure, group ability distributions, and statistical, content, and format representativeness. The examinees' true and estimated expected total scores were computed, and BIAS, RMSE, and classification consistency indices over 100 replications were compared. The major findings are summarized as follows. First, across all simulation conditions, the most notable and significant effects on the equating results were those due to group ability distributions: the equivalent-groups condition always outperformed the nonequivalent-groups condition on the various evaluation indices. Second, regardless of group ability differences, there were no statistically or practically significant interaction effects among the statistical, content, and format representativeness factors. Third, under the unidimensional test structure, the content and format representativeness factors had little significant impact on the equating results, whereas the statistical representativeness factor significantly affected the performance of the concurrent calibration. Fourth, across the various levels of multidimensional test structure, the statistical representativeness factor showed more significant and systematic effects on the performance of the concurrent calibration than the content and format representativeness factors did. As the degree of multidimensionality due to multiple item formats increased, the format representativeness factor began to make significant differences, especially under the nonequivalent-groups condition; the content representativeness factor, however, had minimal impact on the equating results regardless of the increase in multidimensionality due to different content areas. Fifth, the concurrent calibration was not robust to violation of unidimensionality: its performance with the unidimensional IRT models declined significantly as the degree of multidimensionality increased.  en_US
dc.format.extent  6066409 bytes
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.title  Mixed-format Test Equating: Effects of Test Dimensionality and Common-item Sets  en_US
dc.type  Dissertation  en_US
dc.contributor.publisher  Digital Repository at the University of Maryland  en_US
dc.contributor.publisher  University of Maryland (College Park, Md.)  en_US
dc.contributor.department  Measurement, Statistics and Evaluation  en_US
dc.subject.pqcontrolled  Education, Tests and Measurements  en_US
dc.subject.pquncontrolled  mixed-format test  en_US
dc.subject.pquncontrolled  equating  en_US
dc.subject.pquncontrolled  item response theory  en_US
dc.subject.pquncontrolled  test dimensionality  en_US
dc.subject.pquncontrolled  common-item set  en_US

