
Use of Automated Scoring in Spoken Language Assessments for Test Takers With Speech Impairments

Author(s):
Loukina, Anastassia; Buzick, Heather M.
Publication Year:
2017
Report Number:
RR-17-42
Source:
ETS Research Report
Document Type:
Report
Page Count:
12
Subject/Key Words:
Automated Scoring and Natural Language Processing, Spoken Language Assessment, Test-Takers with Disabilities, TOEFL iBT, SpeechRater, Test Fairness, Constructed-Response Scoring, Test Validity, Word Error Rate (WER), Speech Impairments, Hearing Impairments, Human Scoring

Abstract

This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses on one type of scoring technology, automatic speech scoring (the SpeechRater automated scoring engine); one type of assessment, spontaneous spoken English by nonnative adults (six TOEFL iBT speaking items per test taker); and one category of disability, speech impairments. The results show discrepancies between human and SpeechRater scores for speakers with documented speech or hearing impairments who received accommodations and for speakers whose responses were deferred to the scoring leader by human raters because the responses exhibited signs of a speech impairment. SpeechRater scores for these studied groups tended to be higher than the human scores. In a smaller subsample, the word error rate was higher for these groups than for the control group, suggesting that the automatic speech recognition system contributed to the discrepancies between SpeechRater and human scores.
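The word error rate mentioned above is the standard measure of automatic speech recognition accuracy: the number of word substitutions, deletions, and insertions needed to turn the recognizer's hypothesis into a human reference transcript, divided by the number of words in the reference. The sketch below is illustrative only and is not taken from the report; the function name and example sentences are hypothetical, and it assumes plain whitespace-tokenized transcripts.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()

    # d[i][j] = minimum number of edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # Hypothetical reference transcript vs. recognizer output
    ref = "the test taker answered the question clearly"
    hyp = "the test taker answer question clearly"
    print(f"WER = {word_error_rate(ref, hyp):.2f}")  # 2 edits over 7 words -> 0.29
```

A higher WER for a group of test takers means the recognizer's transcript diverges more from what was actually said, which in turn degrades the transcript-based features an automated scoring engine relies on.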
