This study examined rater perceptions of the effectiveness of feedback practices used by testing programs at Educational Testing Service, both during rater training and during operational scoring. One-on-one telephone surveys were conducted with 36 trained and experienced raters: 17 raters for the English Language Proficiency Assessments for California, 10 for the GRE General Test, and 9 for the TOEFL iBT test. Survey questions covered four categories: (a) feedback practices used during training and calibration, (b) feedback practices used during operational scoring, (c) information received from a scoring leader, and (d) information specific to the performance of the individual rater. Results indicate that the level, type, and frequency of feedback appear to determine its usefulness to raters. To be useful, feedback on scoring accuracy needs to be immediate and concise and to indicate specifically why a rater’s assigned score was incorrect. In addition, feedback on scoring rate needs to be provided in a context in which raters can easily interpret and understand it. Feedback from scoring leaders is perceived as valuable regardless of how it is delivered. Less experienced raters desire feedback more frequently than more experienced raters. Finally, feedback must be easily accessible from within the scoring system, either displayed on screen or readily available through a link.