Automated Scoring of Speech

ETS's SpeechRater℠ engine is the world's most advanced spoken-response scoring application designed to score spontaneous responses, in which the range of valid responses is open-ended rather than narrowly determined by the item stimulus. Test takers preparing to take the TOEFL® test have had their responses scored by the SpeechRater engine as part of the TOEFL® Practice Online test since 2006. Competing capabilities focus on assessing low-level aspects of speech production, such as pronunciation, using restricted tasks in order to increase reliability. The SpeechRater engine, by contrast, is based on a broad conception of the construct of English-speaking proficiency, encompassing aspects of speech delivery (such as pronunciation and fluency), grammatical facility, and higher-level abilities related to topical coherence and the progression of ideas.

The SpeechRater engine processes each response with an automated speech recognition system specially adapted for use with nonnative English. Based on the output of this system, natural language processing (NLP) and speech-processing algorithms are used to calculate a set of features that define a "profile" of the speech on a number of linguistic dimensions, including fluency, pronunciation, vocabulary usage, grammatical complexity and prosody. A model of speaking proficiency is then applied to these features in order to assign a final score to the response. While this model is trained on previously observed data scored by human raters, it is also reviewed by content experts to maximize its validity. Furthermore, if the response is found to be unscorable due to audio quality or other issues, the SpeechRater engine can set it aside for special processing.
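
To make this pipeline concrete, the sketch below illustrates the "feature profile plus scoring model" idea in Python. It is a hypothetical illustration only: the feature names, weights, confidence threshold, and score range are invented and do not reflect SpeechRater's actual features or model.

```python
# Illustrative sketch of a feature-profile-plus-scoring-model pipeline.
# All feature names, weights, and thresholds are invented for illustration;
# they are not SpeechRater's actual features or scoring model.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ResponseProfile:
    """Feature 'profile' computed from ASR output for one spoken response."""
    features: Dict[str, float]   # e.g., fluency, pronunciation, grammar, ...
    asr_confidence: float        # mean recognizer confidence
    audio_ok: bool               # result of a basic audio-quality check

# Hypothetical weights for a linear model trained on human-scored responses.
MODEL_WEIGHTS = {"fluency": 0.9, "pronunciation": 0.8, "vocabulary": 0.6,
                 "grammar": 0.7, "prosody": 0.4}
INTERCEPT = 1.0

def score_response(profile: ResponseProfile) -> Optional[float]:
    """Return a score on an assumed 1-4 scale, or None if unscorable."""
    # Set aside responses the engine cannot score reliably
    # (poor audio, very low recognizer confidence, etc.).
    if not profile.audio_ok or profile.asr_confidence < 0.5:
        return None  # route to special processing / human review
    # Apply the scoring model to the feature profile and clip to the scale.
    raw = INTERCEPT + sum(w * profile.features.get(name, 0.0)
                          for name, w in MODEL_WEIGHTS.items())
    return max(1.0, min(4.0, raw))

# Example usage with made-up feature values.
profile = ResponseProfile(
    features={"fluency": 0.7, "pronunciation": 0.6, "vocabulary": 0.5,
              "grammar": 0.55, "prosody": 0.4},
    asr_confidence=0.82,
    audio_ok=True,
)
print(score_response(profile))  # ~2.96 on the illustrative scale
```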

ETS's research agenda related to automated scoring of speech includes the development of more extensive NLP features to represent pragmatic competencies and the discourse structure of spoken responses. The core capability has also been extended to apply across a range of item types used in different assessments of English proficiency, from highly restricted item types (such as passage read-alouds) to less restricted items (such as summarization tasks).

Featured Publications

Below are some recent or significant publications that our researchers have authored on the subject of automated scoring of speech.

2012

  • A Comparison of Two Scoring Methods for an Automated Speech Scoring System
    X. Xi, D. Higgins, K. Zechner, & D. Williamson
    Language Testing, Vol. 29, No. 3, pp. 371–394

    In this paper, the researchers compare two alternative scoring methods for an automated speech scoring system and discuss the tradeoffs between multiple regression and classification tree models. A minimal sketch contrasting the two model families appears after this publication list.

  • Exploring Content Features for Automated Speech Scoring
    S. Xie, K. Evanini, & K. Zechner
    Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)
    Association for Computational Linguistics

    In this paper on automated scoring of unrestricted spontaneous speech, the researchers compare content features based on three similarity measures in order to understand how well such features capture the accuracy of the content of a spoken response. An illustrative sketch of a similarity-based content feature appears after this publication list.

  • Assessment of ESL Learners' Syntactic Competence Based on Similarity Measures
    S. Yoon & S. Bhat
    Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

    In this paper, the researchers present a method for measuring English language learners' syntactic competence in automated speech scoring systems. The authors discuss the advantages of NLP-based and corpus-based measures over conventional measures of syntactic competence.
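
Below is a small, self-contained sketch of the comparison described in the first entry above: a multiple-regression scorer versus a classification-tree scorer. It uses synthetic feature profiles and scikit-learn, not the paper's data, features, or modeling setup.

```python
# Contrast a multiple-regression scorer with a classification-tree scorer
# on synthetic feature data (illustrative only; not the paper's setup).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "feature profiles" (e.g., fluency, pronunciation, grammar)
# and synthetic human scores on an assumed 1-4 scale.
X = rng.random((200, 3))
human = np.clip(np.round(1 + 3 * X.mean(axis=1) + rng.normal(0, 0.3, 200)), 1, 4)

# Multiple regression: predict a continuous score, then round to the scale.
reg = LinearRegression().fit(X, human)
reg_pred = np.clip(np.round(reg.predict(X)), 1, 4)

# Classification tree: treat each score point as a discrete class.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, human)
tree_pred = tree.predict(X)

print("regression exact agreement:", (reg_pred == human).mean())
print("tree exact agreement:      ", (tree_pred == human).mean())
```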

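The similarity-based content features described in the second entry above can be sketched as follows. This is a hypothetical illustration rather than the paper's actual measures: it scores an ASR transcript by its TF-IDF cosine similarity to made-up exemplar responses pooled by score level.

```python
# Similarity-based content features: cosine similarity of a transcript to
# pooled exemplar responses at each score level (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pooled exemplar responses, one document per score level.
SCORE_LEVEL_DOCS = {
    2: "the student talk about library hours it open late",
    3: "the university library will extend its opening hours during exams",
    4: "the announcement says the library will stay open later during exam "
       "week so students have more time and quiet space to study",
}

def content_similarity_features(transcript: str) -> dict:
    """Cosine similarity of the transcript to each score level's exemplars."""
    levels = sorted(SCORE_LEVEL_DOCS)
    corpus = [SCORE_LEVEL_DOCS[level] for level in levels] + [transcript]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    return {f"sim_to_level_{level}": float(s) for level, s in zip(levels, sims)}

print(content_similarity_features(
    "the library stay open late during exam week for students to study"))
```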

Find More Articles

View more research publications related to automated scoring of speech.