Automated Scoring of Nonnative Speech Using the SpeechRater v. 5.0 Engine
- Author(s):
- Chen, Lei; Zechner, Klaus; Yoon, Su-Youn; Evanini, Keelan; Wang, Xinhao; Loukina, Anastassia; Tao, Jidong; Davis, Lawrence; Lee, Chong Min; Ma, Min; Mundkowsky, Robert; Lu, Chi; Leong, Chee Wee; Gyawali, Binod
- Publication Year:
- 2018
- Report Number:
- RR-18-10
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 33
- Subject/Key Words:
- Automated Scoring and Natural Language Processing, Automatic Speech Recognition, Automated Scoring of Speech, SpeechRater, Scoring Models, English Language Assessment (ELA), Second Language Acquisition, Test of English as a Foreign Language (TOEFL), Speech Rhythm, TOEFL iBT
Abstract
This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the SpeechRater automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in various venues in recent years, no comprehensive account of the current state of SpeechRater has been provided since the initial publications following its first operational use in 2006. After a brief review of recent related work by other institutions, we summarize the main features and feature classes that have been developed and introduced into SpeechRater in the past 10 years, including features measuring aspects of pronunciation, prosody, vocabulary, grammar, content, and discourse. Furthermore, new types of filtering models for flagging nonscorable spoken responses are described, as is our new hybrid way of building linear regression scoring models with improved feature selection. Finally, empirical results for SpeechRater 5.0 (operationally deployed in 2016) are provided.
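The pipeline the abstract describes, a filtering model that flags nonscorable responses followed by a linear regression scoring model built over multiple feature classes with feature selection, can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not ETS's implementation: the feature data, the logistic-regression filter, and the Lasso-based selection step are all assumptions standing in for the hybrid feature-selection method the report details.

```python
# Hypothetical sketch of a SpeechRater-style pipeline: a filtering model
# flags nonscorable responses, then a linear regression scoring model
# (with L1-based feature selection) predicts a proficiency score.
# All data and model choices here are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LinearRegression, LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Toy data: each row is one spoken response; columns stand in for speech
# features (pronunciation, prosody, vocabulary, grammar, content, discourse).
X = rng.normal(size=(200, 12))
human_scores = 3.0 + 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 200)
scorable = (rng.random(200) > 0.1).astype(int)  # 1 = scorable, 0 = flag

# Filtering model: a binary classifier that flags nonscorable responses
# (e.g., silence, off-topic speech, technical problems) before scoring.
filter_model = LogisticRegression().fit(X, scorable)

# Scoring model: L1-regularized feature selection feeding a plain linear
# regression, one possible stand-in for a hybrid selection scheme.
scoring_model = Pipeline([
    ("select", SelectFromModel(LassoCV(cv=5))),
    ("regress", LinearRegression()),
])
mask = scorable.astype(bool)
scoring_model.fit(X[mask], human_scores[mask])

# Score a new response only if the filter judges it scorable.
new_response = rng.normal(size=(1, 12))
if filter_model.predict(new_response)[0] == 1:
    print("predicted score:", scoring_model.predict(new_response)[0])
else:
    print("response flagged as nonscorable; routed to a human rater")
```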
- DOI:
- https://doi.org/10.1002/ets2.12198