A Simulation Study of the Effect of Rater Designs on IRT Ability Estimation
- Author(s):
- Hombo, Catherine M.; Donoghue, John R.; Thayer, Dorothy T.
- Publication Year:
- 2001
- Report Number:
- RR-01-05
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 41
- Subject/Key Words:
- Ability, Performance Assessment, Ability Estimation, Human Rater, Bias, Item Response Theory (IRT)
Abstract
As more assessment programs move toward constructed-response and performance assessment items, the use of human raters to score these items will necessarily increase. Different designs can be created to assign raters to examinee responses, and this study evaluates several such designs in terms of their impact on the accuracy of examinee ability estimation. As expected, the optimal design, in which every rater judges every performance by every examinee, yields ability estimates with minimal bias and small standard errors. Because this design is rarely, if ever, practical in real assessment settings, the results for nested and spiral rater designs are of more interest to practitioners. The nested rater designs produce biased ability estimates for examinees judged by the most extreme raters, whereas the spiral rater designs examined prove surprisingly robust to rater tendencies.
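The three rater-assignment designs compared in the abstract (fully crossed, nested, and spiral) can be sketched as simple assignment functions. This is an illustrative sketch only, not the report's actual simulation code; the function names and the round-robin interpretation of "spiral" are assumptions made for clarity.

```python
import itertools

def crossed_design(examinees, raters):
    """Fully crossed: every rater scores every examinee's response."""
    return [(e, r) for e in examinees for r in raters]

def nested_design(examinees, raters):
    """Nested: raters score disjoint blocks of examinees, so each
    examinee is judged by only one rater (assumes even block sizes)."""
    block = len(examinees) // len(raters)
    return [(e, raters[min(i // block, len(raters) - 1)])
            for i, e in enumerate(examinees)]

def spiral_design(examinees, raters):
    """Spiral (interpreted here as round-robin rotation): raters cycle
    across examinees, so each rater scores a mix of the sample."""
    cycle = itertools.cycle(raters)
    return [(e, next(cycle)) for e in examinees]

examinees = list(range(6))
raters = ["A", "B", "C"]
print(len(crossed_design(examinees, raters)))  # 18 scoring events
print(nested_design(examinees, raters))        # A scores 0-1, B scores 2-3, C scores 4-5
print(spiral_design(examinees, raters))        # raters rotate A, B, C, A, B, C
```

The sketch makes the abstract's finding concrete: under the nested design, an examinee's estimate depends entirely on one rater, so an extreme rater biases that block, while the spiral design spreads each rater's tendency across the sample.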
- http://dx.doi.org/10.1002/j.2333-8504.2001.tb01847.x