Notable differences between the mean scores assigned by the e-rater automated scoring engine and by human raters were observed for essays from certain demographic groups on the GRE General Test in use before the major revision of 2012 that introduced the rGRE. The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact on examinee scores at the item or test level. Despite this control, there remains a need to understand the root causes of these demographically based score differences and to identify mechanisms for avoiding such discrepancies in the future. In this study, we used a combination of statistical methods and human review to propose hypotheses about the root causes of the score differences and about whether the discrepancies reflect inadequacies of e-rater, of human scoring, or of both. The human rating process was found to be strongly influenced by the structure of the scoring scale and did not fully correspond to the e-rater scoring mechanism: the human raters appeared to apply conditional logic and a rule-based approach to their scoring, whereas e-rater applies a linear weighting of all features. These analyses have implications for future research and for operational policies governing the scoring of the rGRE.
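To make the contrast between the two scoring mechanisms concrete, the following minimal Python sketch illustrates a linear weighted-feature score alongside a rule-based, conditional score. The feature names, weights, and thresholds are hypothetical and chosen only for illustration; this is not the operational e-rater model or the raters' actual rubric.

    # Illustrative only: hypothetical features, weights, and thresholds.

    def linear_weighted_score(features, weights, intercept=0.0):
        # e-rater-style scoring: every feature contributes through a fixed weight.
        return intercept + sum(weights[name] * value for name, value in features.items())

    def rule_based_score(features):
        # Human-rater-style scoring: conditional rules gate the score range
        # before finer distinctions are made.
        if features["grammar_errors"] > 10 or features["word_count"] < 150:
            return 2  # severe problems cap the score
        if features["organization"] >= 4 and features["development"] >= 4:
            return 5 if features["vocabulary"] >= 4 else 4
        return 3

    essay = {
        "grammar_errors": 3,
        "word_count": 420,
        "organization": 4,
        "development": 5,
        "vocabulary": 3,
    }
    weights = {
        "grammar_errors": -0.05,
        "word_count": 0.002,
        "organization": 0.4,
        "development": 0.4,
        "vocabulary": 0.3,
    }

    print(linear_weighted_score(essay, weights, intercept=1.0))  # continuous composite
    print(rule_based_score(essay))                               # discrete scale point

In the linear case every feature always moves the score by an amount proportional to its weight, whereas in the rule-based case a single condition can fix the score band regardless of the remaining features, which is one way the two mechanisms can diverge for particular groups of essays.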