This study proposes an approach to automatically scoring the TOEIC Writing e-mail task. We focus on one component of the scoring rubric, which assesses whether test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it on a corpus of TOEIC responses, achieving up to 79.28% accuracy. This model is a promising first step toward a more comprehensive scoring model. We also created a corpus of native English workplace e-mails annotated for speech acts. Comparing this corpus with the TOEIC data allows us to assess whether English learners approximate native usage and whether differences between native and non-native data can have negative consequences in the global workplace.
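
To make the task concrete, the sketch below shows one generic way a sentence-level speech act classifier could be set up. The abstract does not specify the features or learning algorithm the study used, so this is only an illustrative assumption: a TF-IDF plus logistic regression pipeline over a handful of hypothetical labeled sentences, not the authors' actual model.

```python
# Illustrative sketch only: the paper's features and learner are not
# described here, so a generic TF-IDF + logistic regression pipeline
# stands in for the speech act identification model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: e-mail sentences paired with speech act labels.
sentences = [
    "Could you send me the updated report by Friday?",    # request
    "Please submit your timesheet before noon.",          # order
    "I will have the slides ready for Monday's meeting.", # commitment
    "The meeting has been moved to Room 4.",              # statement
]
labels = ["request", "order", "commitment", "statement"]

# Word unigrams and bigrams capture lexical cues for speech acts,
# such as modal verbs ("could", "will") and politeness markers ("please").
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(sentences, labels)

# Predict the speech act of a new sentence from a test-taker's e-mail.
print(classifier.predict(["Can you confirm the delivery date?"]))
```

A scoring model along these lines could then check the predicted speech acts in a response against those the rubric calls for.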