
Automated Scoring of Speaking Tasks in the Test of English-for-Teaching (TEFT)

Author(s):
Zechner, Klaus; Chen, Lei; Davis, Larry; Evanini, Keelan; Lee, Chong Min; Leong, Chee Wee; Wang, Xinhao; Yoon, Su-Youn
Publication Year:
2015
Report Number:
RR-15-31
Source:
ETS Research Report
Document Type:
Report
Page Count:
19
Subject/Key Words:
Automated Scoring; Automated Scoring of Speech; English as a Foreign Language (EFL); Language Assessment; Non-Native Speech; Test of English for Teaching (TEFT)

Abstract

This research report summarizes research and development efforts devoted to creating models for automatically scoring spoken item responses from a pilot administration of the Test of English-for-Teaching (TEFT™) within the ELTeach™ framework. The test consists of items for all four language modalities: reading, listening, writing, and speaking. This report addresses only the speaking items, which elicit responses ranging from highly predictable to semi-predictable speech from nonnative English teachers or teacher candidates. We describe the components of the automated scoring system: an automatic speech recognition (ASR) system, a set of filtering models that flag nonscorable responses, linguistic measures relating to the various construct subdimensions, and multiple linear regression scoring models for each item type. The system simulates a hybrid setup whereby responses flagged as potentially nonscorable by any component of the filtering model are routed to a human rater, while all other responses are scored automatically.
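The hybrid routing described above can be sketched in a few lines. This is a hypothetical illustration, not the report's implementation: the filter functions, feature names, and regression coefficients below are all made up for the example, and the real system's filtering models and linguistic measures are far richer.

```python
def route_and_score(response, filters, weights, intercept):
    """Route a response: any filter flag sends it to a human rater;
    otherwise score it with a multiple linear regression model over
    precomputed linguistic features."""
    if any(flag(response) for flag in filters):
        return ("human", None)  # potentially nonscorable -> human rater
    score = intercept + sum(w * response[feat] for feat, w in weights.items())
    return ("auto", score)

# Illustrative filters and coefficients (invented for this sketch):
filters = [
    lambda r: r["audio_seconds"] < 1.0,    # e.g., empty or truncated audio
    lambda r: r["asr_confidence"] < 0.3,   # e.g., very low ASR confidence
]
weights = {"speaking_rate": 0.8, "pronunciation": 1.2}
resp = {"audio_seconds": 25.0, "asr_confidence": 0.9,
        "speaking_rate": 2.1, "pronunciation": 1.5}
route, score = route_and_score(resp, filters, weights, intercept=0.5)
print(route, score)
```

Here the response passes both filters, so it receives an automatic score; dropping `asr_confidence` below 0.3 would instead route it to a human rater.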
