Reading Comprehension and its Assessment : Aligning Operationalization with Conceptualization of the Construct
Abstract
The current study explored ways to improve reading comprehension assessments. Available assessments appear misaligned with views of comprehension emerging in the reading research literature. Further, the measurement models currently applied to comprehension assessment do not take the cognitive perspective of the construct into account when estimating proficiency. It has been argued that an assessment, as an evidentiary argument, can offer more informative estimates of proficiency and improve the validity of inferences drawn from those estimates when it is based on a theory of the construct (Mislevy, Steinberg, & Almond, 2003).
For this study, the design and the analytic approach for an assessment of comprehension were grounded in the premise that comprehension is influenced by task attributes (e.g., text type or target mental representation) as well as reader attributes (e.g., prior knowledge or interest). Construction of the comprehension measure and the ensuing psychometric analyses were framed by Kintsch's (1998) Construction-Integration model and Alexander's (1997) Model of Domain Learning.
The resulting measure was administered to 160 eighth-grade students, none of whom was known to be receiving special education services. In completing the comprehension task, the students read four text passages and answered eight questions per passage. The passages varied by text type and topic, and the questions varied by the targeted mental representation of the text and by the relations among the events of the situation described in the text. In addition, participants answered a set of self-report questions about their familiarity with, and interest in, the topic of each passage they had read.
In synthesizing the data, a particular form of the Linear Logistic Test Model introduced by Fischer (1973) was applied within a Bayesian framework. When the attributes were incorporated into the measurement model, the comprehension proficiency estimates changed in a way that reflected positive effects of topic familiarity and topic interest. Further, the task and reader attributes considered in the study contributed to the estimates of item difficulty. The study thus offers empirical evidence that developing a comprehension assessment more closely aligned with views of the construct in the literature is indeed viable.
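The core idea of Fischer's Linear Logistic Test Model is that an item's difficulty is not a free parameter, as in the Rasch model, but is decomposed into a weighted sum of attribute effects. The sketch below illustrates that decomposition in Python; the attribute names, loadings, and effect values are hypothetical illustrations, not the parameters estimated in this study.

```python
import numpy as np

def lltm_probability(theta, q, eta):
    """Correct-response probability under the LLTM (Fischer, 1973).

    Item difficulty is reconstructed as a linear combination of
    attribute effects: beta_i = sum_k q_ik * eta_k, and the response
    probability follows the Rasch form P = sigmoid(theta - beta_i).

    theta : person proficiency (float)
    q     : item's loadings on K attributes, shape (K,)
    eta   : basic parameters (attribute effects), shape (K,)
    """
    beta = float(np.dot(q, eta))  # reconstructed item difficulty
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

# Hypothetical attributes: [text type, target representation, familiarity]
eta = np.array([0.6, 0.4, -0.5])   # illustrative attribute effects
q_item = np.array([1, 1, 0])       # this item loads on the first two

p = lltm_probability(theta=1.0, q=q_item, eta=eta)
# beta = 0.6 + 0.4 = 1.0, so theta - beta = 0 and p = 0.5
```

In a Bayesian treatment, as used in the study, priors would be placed on `theta` and `eta`, and the posterior would be sampled rather than the parameters being fixed as above.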