Principles of skill acquisition dictate that raters should receive frequent feedback about their ratings. However, in current operational practice, raters rarely receive immediate feedback on their scores, owing to the prohibitive effort required to generate it. An approach for generating and administering feedback responses to raters is proposed. It consists of automatically designating some responses as feedback responses; sourcing scores and elaborations for these responses from a group of raters as part of regular scoring; and, finally, administering the same responses to all other raters with immediate feedback based on a summary of the available scores and elaborations. This approach allows raters to receive frequent immediate feedback in a sustainable way. In two experimental studies, the effect of frequent immediate feedback (on approximately 25% of responses) on the rating accuracy of newly trained raters was investigated. A control condition with no feedback was compared with two types of feedback with elaboration: text explanations of the correct score and a structured form identifying the strengths and weaknesses of the response. Results indicate that feedback had a beneficial effect on rater accuracy and that structured feedback was at least as beneficial as text explanations.