This report evaluates the impact of a potential score-adjustment scheme using data from the 1994 administrations of the TWE® (Test of Written English™). It is shown that, assuming non-informative assignment of readers to essays, adjustment for reader differences reduces the mean squared error for all essays except those graded by readers with small workloads. The quality of the rating process, as described by the variances due to true scores, severity, and inconsistency, and the distribution of workloads are similar across administrations. This similarity would make reliable prediction of the optimal score adjustment in future administrations possible. Two approximations to the optimal adjustment are proposed.
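The kind of adjustment described above can be illustrated with a minimal sketch. The data, function names, and the shrinkage constant below are hypothetical and chosen for illustration; the workload-dependent shrinkage is a simple empirical-Bayes-style choice, not the report's exact formula.

```python
from collections import defaultdict

def adjusted_scores(ratings, k=5.0):
    """ratings: list of (reader_id, raw_score) pairs.

    Adjust each score for its reader's estimated severity,
    shrinking the severity estimate toward zero for readers
    with small workloads (few essays graded). The smoothing
    constant k is an assumed value, not taken from the report.
    """
    overall_mean = sum(score for _, score in ratings) / len(ratings)

    # Group scores by reader to compute per-reader workloads and means.
    by_reader = defaultdict(list)
    for reader, score in ratings:
        by_reader[reader].append(score)

    # Severity = reader's mean score minus the overall mean,
    # shrunk by the factor n / (n + k), so readers with small
    # workloads receive little or no adjustment.
    severity = {}
    for reader, scores in by_reader.items():
        n = len(scores)
        raw_severity = sum(scores) / n - overall_mean
        severity[reader] = raw_severity * n / (n + k)

    # Subtract each reader's estimated severity from their scores.
    return [score - severity[reader] for reader, score in ratings]
```

For example, if reader A consistently grades about one point higher than reader B on comparable essays, A's scores are adjusted downward and B's upward, with the size of the adjustment growing toward the full estimated severity as the reader's workload increases.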