A Generalizability Theory Study to Examine Sources of Score Variance in Third‐Party Evaluations Used in Decision‐Making for Graduate School Admissions
- Author(s):
- McCaffrey, Daniel F.; Oliveri, Maria Elena; Holtzman, Steven
- Publication Year:
- 2018
- Report Number:
- GRE-18-03
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 19
- Subject/Key Words:
- Error of Measurement, Interrater Reliability, Generalizability Coefficient, Admissions Decisions, Graduate Admissions, Decision Study (D Study)
Abstract
Scores from noncognitive measures are increasingly valued for their utility in informing postsecondary admissions decisions. However, their use presents challenges because of faking, response biases, and subjectivity, which standardized third‐party evaluations (TPEs) can help minimize. Analysts and researchers using TPEs, however, need to be mindful of construct‐irrelevant differences that may arise in TPEs from differences in evaluators' rating approaches and that introduce measurement error. Research on sources of construct‐irrelevant variance in TPEs is scarce. We address this gap by conducting generalizability theory (G theory) analyses using TPE data that inform postsecondary admissions decisions. We also demonstrate an approach to assessing the size of interevaluator variability and conduct a decision study (D study) to determine the number of evaluators necessary to achieve a desired generalizability coefficient. We illustrate these approaches using a TPE in which applicants select their own evaluators, so that most evaluators rate only one applicant. We conclude by presenting strategies for improving the design of TPEs to help increase confidence in their use.
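The decision-study logic the abstract describes can be illustrated with the standard person × evaluator (p × r) random-effects formulas from G theory. The Python sketch below is a hypothetical illustration only: the variance components and the 0.80 target are invented placeholders, not estimates from the report, and in practice they would come from a G study (e.g., a random-effects ANOVA or mixed model) fit to the TPE ratings.

```python
# Minimal D-study sketch for a person x evaluator (p x r) design.
# Variance components below are ILLUSTRATIVE placeholders, not
# values estimated in the report.

sigma2_p = 0.40    # true-score variance across applicants (persons)
sigma2_r = 0.10    # evaluator main effect (leniency/severity)
sigma2_pr = 0.50   # person-by-evaluator interaction, confounded with error

def g_coefficient(n_r: int) -> float:
    """Generalizability coefficient (Erho^2) for relative decisions
    when each applicant's score is averaged over n_r evaluators."""
    return sigma2_p / (sigma2_p + sigma2_pr / n_r)

def phi_coefficient(n_r: int) -> float:
    """Dependability coefficient (Phi) for absolute decisions; the
    evaluator main effect also contributes to error here."""
    return sigma2_p / (sigma2_p + (sigma2_r + sigma2_pr) / n_r)

# D study: how does reliability grow with the number of evaluators,
# and how many are needed to reach a target coefficient?
target = 0.80
for n_r in range(1, 11):
    print(f"n_r={n_r:2d}  Erho^2={g_coefficient(n_r):.3f}  "
          f"Phi={phi_coefficient(n_r):.3f}")

needed = next(n for n in range(1, 100) if g_coefficient(n) >= target)
print(f"Evaluators needed for Erho^2 >= {target}: {needed}")
```

With these placeholder components, averaging over more evaluators shrinks the error term proportionally to 1/n_r, which is why the D study can solve directly for the number of evaluators required to reach a desired coefficient.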
- DOI:
- https://doi.org/10.1002/ets2.12225