
Contrasting Automated and Human Scoring of Essays

Zhang, Mo
Publication Year:
Report Number: R&D Connections, n21, Mar 2013
Document Type:
Page Count:
Subject/Key Words:
Artificial Intelligence, Automated Essay Scoring (AES), Automated Scoring, Cognitive Limitations, Common Core State Assessments, Constructed-Response Item, Electronic Essay Rater (E-rater), Eye-Tracking Technology, High-Stakes Decisions, Human Raters, Human Scoring, Large-Scale Testing, Partnership for Assessment of Readiness for College and Careers (PARCC), SMARTER Balanced Assessment Consortium (SBAC), Validity


The author of this essay compares automated and human scoring of essays. The essay gives an overview of the current state of the art in automated scoring and compares its strengths and weaknesses with those of human rating. Computer-assisted essay scoring is said to be fast, consistent, and objective, but it has limitations: machines cannot, for example, evaluate the quality of argumentation or consider rhetorical style, areas where human scoring is strong. Interest in automated scoring has grown rapidly in recent years, partly because the two Common Core assessment consortia, PARCC and Smarter Balanced, plan to use automated scoring to speed up score turnaround and reduce cost. The essay describes several common ways to combine the two scoring approaches to meet particular goals for an assessment. Understanding how automated scoring differs from human scoring can help policy makers and testing program directors make better decisions about test use and interpretation. This is the 21st issue in the R&D Connections series.
