A Proposed Index to Detect Relative Item Performance when the Focal Group Sample Size is Small

dc.contributor.advisor: Stapleton, Laura M.
dc.contributor.advisor: Jiao, Hong
dc.contributor.author: Hansen, Kari
dc.contributor.department: Measurement, Statistics and Evaluation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2018-01-23T06:32:49Z
dc.date.available: 2018-01-23T06:32:49Z
dc.date.issued: 2017
dc.description.abstract: When developing educational assessments, ensuring that the test is fair to all groups of examinees is an essential part of the process. The primary statistical method for identifying potential bias in assessments is differential item functioning (DIF) analysis, where DIF refers to a difference in performance on a specific test item between two groups that are assumed to have overlapping ability distributions. This overlap requirement, however, may not be met when the sample size for the focal group is small. A new index, relative item performance (RI), is proposed to address small focal group sample sizes without requiring overlapping ability distributions. The index is calculated as the effect size of the difference in item difficulty estimates between the two groups. A simulation study was conducted to compare the proposed method with the Mantel-Haenszel test with score group widths (MH1 and MH2) and with Differential Item Pair Functioning in terms of Type I error rates and power. The following factors were manipulated: the focal group sample size, the mean of the focal group ability distribution, the amount of DIF, the number of items on the assessment, and the number of items with different item difficulties. For all three methods, the main factors affecting the Type I error rates were the amount of item contamination, the size of the DIF, the focal group ability mean, and the item parameters; the sample size and the number of items were found to have no effect on the Type I error rates for any method. Because the overall Type I error rate of the RI method was much lower than that of the MH1 and MH2 methods and was not controlled across the simulation factors, power was evaluated only for the MH1 and MH2 methods; their median power was .203 and .181, respectively. It is recommended that the MH1 and MH2 methods be used only when the sample size is larger than 100, and in conjunction with expert and cognitive review of the items on the assessment.
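The abstract describes the RI index as an effect size of the difference in item difficulty estimates between groups, and compares it against the Mantel-Haenszel procedure. The sketch below illustrates the general ideas only: the standard Mantel-Haenszel common odds ratio and ETS delta scale, plus one plausible reading of an RI-style standardized difficulty difference. The function names, and the specific effect-size formula for RI, are assumptions, not the dissertation's exact computations.

```python
import math

def mh_common_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched score strata.

    Each stratum is a 2x2 table (A, B, C, D): reference right, reference
    wrong, focal right, focal wrong. A value near 1 indicates no DIF.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha_mh):
    """Standard ETS delta-scale transformation of the MH odds ratio.

    Zero indicates no DIF; larger absolute values indicate larger DIF.
    """
    return -2.35 * math.log(alpha_mh)

def ri_effect_size(b_ref, b_foc, se_ref, se_foc):
    """One plausible RI-style effect size (an assumption, not the
    dissertation's formula): the difference in the two groups' item
    difficulty estimates divided by the pooled standard error.
    """
    return (b_foc - b_ref) / math.sqrt(se_ref ** 2 + se_foc ** 2)

# Illustrative values: a stratum with identical performance in both
# groups yields an odds ratio of 1 and a delta of 0 (no DIF).
alpha = mh_common_odds_ratio([(10, 10, 10, 10)])
print(mh_delta(alpha))                        # 0.0
print(ri_effect_size(0.0, 0.5, 0.3, 0.4))     # 1.0
```

Unlike the MH statistic, the RI-style quantity above needs no matched score strata, which is why such an index could remain computable when the focal group is too small to fill the strata.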
dc.identifier: https://doi.org/10.13016/M2MS3K334
dc.identifier.uri: http://hdl.handle.net/1903/20282
dc.language.iso: en
dc.subject.pqcontrolled: Educational tests & measurements
dc.subject.pqcontrolled: Statistics
dc.subject.pquncontrolled: Differential item functioning
dc.subject.pquncontrolled: item bias
dc.subject.pquncontrolled: item response
dc.subject.pquncontrolled: Mantel-Haenszel
dc.subject.pquncontrolled: relative item performance
dc.subject.pquncontrolled: small sample sizes
dc.title: A Proposed Index to Detect Relative Item Performance when the Focal Group Sample Size is Small
dc.type: Dissertation

Files

Original bundle
Name: Hansen_umd_0117E_18469.pdf
Size: 3.8 MB
Format: Adobe Portable Document Format