Digital Repository at the University of Maryland (DRUM) — University of Maryland Libraries

    The Ability of Maryland English Teachers to Rate Holistically The Quality of Student Explanatory Writing

    File: 1560496.pdf (88.78 MB)

    Date: 1988
    Author: Peiffer, Ronald Aaron
    Advisor: Jantz, Richard K.
    DRUM DOI: https://doi.org/10.13016/nfwj-czs1
    Abstract
    The purpose of this study was to determine the accuracy of Maryland English teachers in using the Maryland Writing Test scoring criteria to assign modified holistic ratings to student explanatory writing. The performance of eight expert raters, who had previously demonstrated 80% rating accuracy in training, was compared with the performance of six novice raters, who had not been required to demonstrate accuracy in their training. Accuracy was determined by analyzing error frequency and patterns in error size and direction. Scores were further analyzed to determine which writing features, both internal and external to the Maryland Writing Test scoring criteria, served as predictors of the scores assigned by the two groups of raters. Findings indicate that novice and expert raters were approximately 60% accurate in score assignments, with no significant difference in the accuracy level of the two groups. While scores assigned by both groups correlated highly, the size of their errors correlated only moderately. Novice rater errors were more often one or more score points below the certified scores that compositions should have received, while expert rater errors were equally distributed between overassessments and underassessments of writing quality. Stepwise regressions showed that certified scores, as well as the scores assigned by both groups of raters, were predicted by the number of words in a composition and by the frequency of syntax errors. While 39% of the variance in certified scores was explained by the number of words, about 50% of the variance in novice and expert scores was explained by the same feature. Likewise, syntax error frequencies were slightly stronger predictors of rater scores than of certified scores, contributing 11% and 17%, respectively, to the variance in expert and novice rater scores. Of five features associated with the scoring guide, content was the strongest predictor of certified scores, explaining 99.4% of the variance in scores. However, organization was the strongest predictor of rater scores, explaining about 80% of the variance in scores.
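
    As a rough illustration of the two statistics the abstract relies on, the short Python sketch below computes an exact-agreement accuracy rate against certified scores and the share of score variance (R^2) explained by a single surface feature such as word count. It is entirely hypothetical: the scores and word counts are invented for illustration and are not drawn from the dissertation's data or methods.

    # Hypothetical illustration only: scores and word counts below are invented,
    # not taken from the dissertation.

    certified = [3, 2, 4, 1, 3, 2, 4, 3, 1, 2]   # scores compositions "should" receive
    rater     = [3, 3, 4, 1, 2, 2, 4, 4, 1, 3]   # one rater's assigned scores
    words     = [310, 180, 420, 95, 260, 150, 500, 330, 80, 200]  # composition lengths

    # Exact-agreement accuracy: proportion of compositions where the rater's score
    # matches the certified score (the abstract's ~60% figure is this kind of
    # exact-match rate, aggregated over a group of raters).
    accuracy = sum(r == c for r, c in zip(rater, certified)) / len(certified)

    # For a single predictor, the variance explained (R^2) by a simple linear
    # regression equals the squared Pearson correlation between predictor and score.
    def r_squared(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return (sxy * sxy) / (sxx * syy)

    print(f"exact-agreement accuracy: {accuracy:.0%}")
    print(f"R^2 of word count predicting rater scores: {r_squared(words, rater):.2f}")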
    URI: http://hdl.handle.net/1903/29741
    Collections
    • Teaching, Learning, Policy & Leadership Theses and Dissertations
    • UMD Theses and Dissertations
