Validity and Automated Scoring: It's Not Only the Scoring

Author(s):
Bennett, Randy Elliot; Bejar, Isaac I.
Publication Year:
1997
Report Number:
RR-97-13
Source:
ETS Research Report
Document Type:
Report
Page Count:
32
Subject/Key Words:
Automation, Computer Assisted Testing, Scoring, Test Validity

Abstract

Early work on automated scoring predated the ready availability of mechanisms for inexpensively delivering computer-based tests and collecting responses. Hence, this work used responses to conventionally delivered tasks that had somehow been translated to machine-readable form. The necessity of operating in this manner focused attention initially on the empirical characteristics of automated scores. As the availability of computer-based testing environments grew, it became possible to implement entire operational exams and, thus, to think broadly about the implications of automated scoring for validity. In this paper we argue that a comprehensive discussion of validity and automated scoring includes the interplay among construct definition and test and task design, examinee interface, tutorial, test development tools, automated scoring, and reporting, for in the development process these components affect one another. As modern validity theory postulates, the validation argument must, therefore, ideally provide not only empirical evidence of score relationships but also theoretical rationales to support a variety of design decisions. We further argue that the interdependency among computer-based test components provides a unique opportunity to greatly improve educational and occupational assessment.