Contrasting Automated and Human Scoring of Essays

Author(s):
Zhang, Mo
Publication Year:
2013
Report Number:
RDC-21
Source:
R&D Connections, n21, Mar 2013
Document Type:
Periodical
Page Count:
11
Subject/Key Words:
Automated Scoring; Artificial Intelligence; Constructed-Response Item; Cognitive Limitations; Electronic Essay Rater (E-rater); Eye-Tracking Technology; Human Raters; SMARTER Balanced Assessment Consortium (SBAC); Human Scoring; Validity; Large-Scale Testing; Partnership for Assessment of Readiness for College and Careers (PARCC); High-Stakes Decisions; Automated Essay Scoring (AES); Common Core State Assessments

Abstract

The author of this essay compares automated and human scoring of essays. The essay gives an overview of the current state of the art of automated scoring and compares its strengths and weaknesses with those of human rating. Computer-assisted essay scoring is fast, consistent, and objective, but it has limitations: machines cannot, for example, evaluate the quality of argumentation or take rhetorical style into account, areas where human scoring is strong. Interest in automated scoring has grown rapidly in recent years, partly because the two Common Core assessment consortia, PARCC and Smarter Balanced, plan to use automated scoring to speed up score turnaround and reduce cost. The essay describes several common ways to combine the two scoring approaches to meet particular goals for an assessment. Understanding how automated scoring differs from human scoring can help policy makers and testing program directors make better decisions about test use and score interpretation. This is the 21st issue in the R&D Connections series.
