An Analysis of Test Writers' Expertise: Modeling Analogy Item Difficulty
- Author(s):
- Enright, Mary K.; Bejar, Isaac I.
- Publication Year:
- 1989
- Report Number:
- RR-89-35
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 30
- Subject/Key Words:
- Graduate Record Examinations Board, Analogies, Aptitude Tests, Difficulty Level, Test Construction
Abstract
This study explored the ability of test development staff to predict the difficulty of analogy items, and investigated the nature of the item attributes that contributed both to test writers' predictions of difficulty and to actual item difficulty. The expert test writers studied were quite good at predicting item difficulty. Item attributes such as vocabulary difficulty and rationale difficulty contributed to item difficulty. However, a statistical model of item difficulty did not capture all the information that test writers used to judge item difficulty. This research contributes to the construct validation of tests in two ways. First, identification of some item attributes that are associated with item difficulty clarifies what skills and processes are likely to be involved in solving analogies. Second, the expertise of test writers, a crucial ingredient in ensuring the validity of the test, is demonstrated. (30pp.)
- http://dx.doi.org/10.1002/j.2330-8516.1989.tb00149.x