Performance of Automated Speech Scoring on Different Low- to Medium-Entropy Item Types for Low-Proficiency English Learners

Author(s):
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew
Publication Year:
2017
Report Number:
RR-17-12
Source:
ETS Research Report
Document Type:
Report
Page Count:
19
Subject/Key Words:
Automated Scoring of Speech, English Language Learners (ELL), SpeechRater, Constructed-Response Scoring, Filtering Models, Automatic Speech Recognition

Abstract

This report presents an overview of the SpeechRater automated scoring engine model-building and evaluation process for several item types, with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and scoring model building and evaluation, and we compare how performance at each stage differs across item types. We conclude by discussing the effect of item type on automated scoring performance. We also offer recommendations on what to consider when developing tests for low-proficiency English speakers so that an automated scoring engine can produce reliable scores.
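The report itself contains no code. As a rough illustration of the three-stage pipeline the abstract describes (automatic speech recognition, filtering of nonscorable responses, and a scoring model), here is a minimal Python sketch. All names, thresholds, and features below are hypothetical stand-ins for illustration only; they are not taken from SpeechRater.

```python
from dataclasses import dataclass


@dataclass
class Response:
    """A spoken response, reduced to ASR output and signal statistics.

    In a real engine these would come from an ASR system run on audio;
    here they are supplied directly to keep the sketch self-contained.
    """
    transcript: str
    asr_confidence: float  # mean per-word recognizer confidence, 0-1
    duration_sec: float


def is_scorable(r: Response) -> bool:
    """Filtering model: flag nonscorable responses before scoring.

    Real filtering models are trained classifiers; these hand-picked
    thresholds are purely illustrative.
    """
    if r.duration_sec < 1.0:      # effectively empty response
        return False
    if r.asr_confidence < 0.30:   # recognizer could not decode the speech
        return False
    return True


def score(r: Response) -> float:
    """Scoring model: a toy linear model over two fluency-like features.

    Production scoring models use many more features with weights
    estimated from human-scored training data.
    """
    words = r.transcript.split()
    speaking_rate = len(words) / r.duration_sec  # words per second
    return 1.0 + 1.5 * min(speaking_rate / 3.0, 1.0) + 1.5 * r.asr_confidence


if __name__ == "__main__":
    responses = [
        Response("uh", 0.20, 0.8),                              # filtered out
        Response("my favorite teacher was very kind", 0.85, 6.0),
    ]
    for r in responses:
        if is_scorable(r):
            print(f"score = {score(r):.2f}")
        else:
            print("routed to human scoring (nonscorable)")
```

The sketch mirrors the ordering the abstract emphasizes: filtering runs before scoring, so that responses the scoring model cannot handle reliably are routed elsewhere rather than assigned a spurious score.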
