The author of this essay compares automated and human scoring of essays. The essay gives an overview of the current state of the art in automated scoring and compares its strengths and weaknesses with those of human rating. Computer-assisted essay scoring is said to be fast, consistent, and objective, but it has limitations: machines cannot, for example, evaluate the quality of argumentation or consider rhetorical style, areas where human scoring excels. Interest in automated scoring has grown rapidly in recent years, partly because the two Common Core assessment consortia — PARCC and Smarter Balanced — plan to use automated scoring to speed up score turnaround and reduce costs. The essay describes several common ways to combine the two scoring approaches to meet particular goals for an assessment. Knowing how automated scoring differs from human scoring can help policy makers and testing program directors make better decisions about test use and interpretation. This is the 21st issue in the R&D Connections series.