In July 2009, the TOEFL® program began to automatically score one of the two tasks (the independent essay) that make up the TOEFL writing measure, thus eliminating one of the two previously used human raters. The research described here was undertaken to probe test takers’ and test score users’ perceptions and understanding of several aspects of automated scoring. To do so, we conducted Internet surveys of both test takers and test score users to assess: (a) awareness of the introduction of automated scoring, (b) perceptions of reasons for using (and for not using) it, (c) knowledge of the factors considered in scoring TOEFL essays, and (d) reactions to alternative ways to employ automated scoring. Test takers were also asked whether they had approached test taking differently, and test score users were asked about any changes in admissions practices. Test takers expressed a wide range of opinions about automated scoring. Faced with automated scoring, test takers clearly favored some test-taking strategies over others. They also had preferences as to what constituted good (and bad) reasons for employing automated scoring. In addition, both test takers and test score users exhibited clear preferences with respect to how automated scoring should be deployed when evaluating TOEFL essays. Implications for the TOEFL program are discussed.