Assessing Unusual Agreement Between the Incorrect Answers of Two Examinees Using the K-Index: Statistical Theory and Empirical Support

Author(s):
Holland, Paul W.
Publication Year:
1996
Report Number:
RR-96-07, PSRTR-96-04
Source:
ETS Research Report
Document Type:
Report
Page Count:
68
Subject/Key Words:
Cheating, K-Index, Response Style (Tests), Scholastic Assessment Test, Statistical Analysis, Test Security, Test Taking Behavior

Abstract

Test security and other concerns can lead to an interest in assessing how unusual it is for the answers of two different examinees to agree as much as they do. At Educational Testing Service, a measure called the K-index is used to assess "unusual agreement" between the incorrect answers of two examinees on a multiple-choice test. Here, I describe the K-index and give the results of an empirical study of some of the assumptions that underlie it and its use. The results of this study show that the K-index can be expected to give a conservative estimate of the probability of chance agreement in the typical situations for which it is used, and that several important assumptions underlying the K-index are supported by relevant data. In addition, the results presented here suggest a minor modification of the current (as of 1993) application of the K-index to part of the SAT to better ensure that it is a conservative measure of chance agreement.
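The abstract describes the K-index as an estimate of the probability that two examinees' incorrect answers would agree as much as they do by chance alone. A minimal sketch of this style of calculation, assuming a simple binomial model in which each of the source examinee's incorrect items is matched independently with some baseline probability (the counts and the match probability below are hypothetical illustrations, not values from the report):

```python
from math import comb

def binomial_tail(m: int, n: int, p: float) -> float:
    """P(X >= m) for X ~ Binomial(n, p): the chance of observing
    at least m matches among n trials with per-trial probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(m, n + 1))

# Hypothetical example: one examinee answered 20 items incorrectly,
# the pair agrees on 12 of those incorrect answers, and the assumed
# baseline probability of matching on any one such item is 0.3.
# A small tail probability flags the agreement as unusual under
# this (illustrative) chance model.
p_chance = binomial_tail(12, 20, 0.3)
```

An operational index like the K-index would estimate the baseline match probability from data (for example, from examinees with similar scores) rather than assume it, which is part of what the report's empirical study examines.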
