Video Title: Contrasting Automated and Human Scoring of Essays

People in this video

Mo Zhang - Associate Research Scientist

Intro

[music playing]

On-screen:
R&D Connections No.21
Contrasting Human & Automated Scoring
Mo Zhang
Associate Research Scientist
ETS Research & Development Division

Mo Zhang: Hi, I am Mo Zhang and I work in the Research & Development division of ETS. I wrote an R&D Connections article about automated and human scoring of essays.

Interest in automated scoring has grown rapidly in the past few years. This growth is partly because the two main Common Core assessment consortia, PARCC and Smarter Balanced, have both expressed a strong desire to use it to speed up score turnaround and reduce cost.

Machine scoring is said to be fast, consistent, and objective, but in reality, like human rating, it has limitations. For example, machines cannot evaluate the quality of argumentation or take rhetorical style into account.

In this article, I gave an overview of the current state of the art in automated scoring and compared its strengths and weaknesses with those of human rating. I also described several common ways of combining the two scoring approaches in order to meet particular goals for an assessment.

Knowing how automated scoring differs from human rating can help policymakers and testing program directors make better decisions about test use and score interpretation. I hope you find the article informative and useful.

Narrator: R&D Connections is a free publication, which you can download at no cost by visiting the Research section of ETS.org.

On-screen:
R&D Connections
ets.org/research

Total length of video: 1:51