
Contrasting Automated and Human Scoring of Essays

Author(s):
Zhang, Mo
Publication Year:
2013
Report Number:
RDC-21
Source:
R&D Connections, n21, Mar 2013
Document Type:
Periodical
Page Count:
11
Subject/Key Words:
Automated Scoring, Automated Essay Scoring (AES), Human Scoring, Common Core State Assessments, Large-Scale Testing, High-Stakes Decisions, Human Raters, Constructed-Response Item, Cognitive Limitations, Artificial Intelligence, Validity, Partnership for Assessment of Readiness for College and Careers (PARCC), SMARTER Balanced Assessment Consortium (SBAC), Eye-Tracking Technology, Electronic Essay Rater (E-rater)

Abstract

The author of this essay compares automated and human scoring of essays. The essay gives an overview of the current state of the art in automated scoring and compares its strengths and weaknesses with those of human rating. Computer-assisted essay scoring is said to be fast, consistent, and objective, but it has limitations. Machines, for example, cannot evaluate the quality of argumentation or take rhetorical style into account, areas where human scoring has strengths. Interest in automated scoring has grown rapidly in the past few years, partly because of the two Common Core assessment consortia, PARCC and Smarter Balanced, and their plans to use automated scoring to speed up score turnaround and reduce costs. The essay describes several common ways to combine the two scoring approaches in order to meet particular goals for an assessment. Knowing how automated scoring differs from human scoring can help policymakers and testing program directors make better decisions about test use and interpretation. This is the 21st issue in the R&D Connections series.
