The Ability of Maryland English Teachers to Rate Holistically the Quality of Student Explanatory Writing


Files

1560496.pdf (88.79 MB)


Date

1988

Abstract

The purpose of this study was to determine the accuracy of Maryland English teachers in using the Maryland Writing Test scoring criteria to place modified holistic ratings on student explanatory writing. The performance of eight expert raters, who had previously demonstrated 80% rating accuracy in training, was compared with the performance of six novice raters, who had not been required to demonstrate accuracy in their training. Accuracy was determined by analyzing error frequency and patterns in error size and direction. Scores were further analyzed to identify writing features, both internal and external to the Maryland Writing Test scoring criteria, that served as predictors of the scores assigned by the two groups of raters. Findings indicate that novice and expert raters were approximately 60% accurate in their score assignments, with no significant difference in the accuracy of the two groups. While the scores assigned by the two groups correlated highly, the sizes of their errors correlated only moderately. Novice rater errors were more often one or more score points below the certified scores that compositions should have received, while expert rater errors were distributed equally between overassessments and underassessments of writing quality. Stepwise regressions showed that certified scores, as well as the scores assigned by the two groups of raters, were predicted by the number of words in a composition and by the frequency of syntax errors. Whereas 39% of the variance in certified scores was explained by the number of words, around 50% of the variance in novice and expert scores was explained by the same feature. Likewise, syntax error frequency was a slightly stronger predictor of rater scores than of certified scores, contributing 11% and 17%, respectively, to the variance in expert and novice rater scores. Of the five features associated with the scoring guide, content was the strongest predictor of certified scores, explaining 99.4% of the variance in those scores; organization, however, was the strongest predictor of rater scores, explaining around 80% of the variance.
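The kind of analysis reported above can be illustrated with a minimal Python sketch. The composition data below are entirely hypothetical, and a single ordinary-least-squares fit on word count stands in for the stepwise regression procedure actually used in the study; it only shows how rating accuracy, error direction, and variance explained might be computed.

```python
import numpy as np

# Hypothetical example: ten compositions, each with a certified score (1-4),
# a rater-assigned score, a word count, and a syntax-error count.
certified  = np.array([2, 3, 1, 4, 2, 3, 3, 1, 4, 2])
rater      = np.array([2, 2, 1, 4, 1, 3, 2, 1, 4, 2])
words      = np.array([180, 240, 120, 310, 160, 250, 230, 100, 330, 170])
syntax_err = np.array([6, 3, 9, 1, 7, 4, 5, 10, 2, 6])

# Rating accuracy: proportion of exact matches with the certified score.
accuracy = np.mean(rater == certified)

# Error direction: negative values are underassessments, positive are overassessments.
errors = rater - certified

# Simple one-predictor regression of rater scores on word count;
# r_squared is the share of score variance explained by composition length.
X = np.column_stack([np.ones_like(words, dtype=float), words])
coef, *_ = np.linalg.lstsq(X, rater.astype(float), rcond=None)
predicted = X @ coef
ss_res = np.sum((rater - predicted) ** 2)
ss_tot = np.sum((rater - rater.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"accuracy: {accuracy:.0%}, mean error: {errors.mean():+.2f}, "
      f"R^2 (words): {r_squared:.2f}")
```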
