
Comparisons Among Approaches to Link Tests Using Random Samples Selected Under Suboptimal Conditions

Author(s):
Kim, Sooyeon; Walker, Michael
Publication Year:
2021
Report Number:
RR-21-14
Source:
ETS Research Report
Document Type:
Report
Page Count:
20
Subject/Key Words:
Equating, Random Groups, Test Linking, Anchor Test Equating, Non-Equivalent-Groups Anchor Test (NEAT) Design, Assessment Design, Large-Scale Assessments (LSA), Weighting, Equating Bias, Standard Error of Equating, Root Mean Square Error (RMSE), Subgroups

Abstract

Equating the scores from different forms of a test requires collecting data that link the forms. Problems arise when the forms to be linked are given to nonequivalent groups and the forms share few or no common items by which to measure or adjust for that group nonequivalence. We compared three approaches to adjusting for group nonequivalence in a situation where randomization is questionable and the number of common items is small. Group adjustment through subgroup weighting, a weak anchor, or a mix of both was evaluated for linking accuracy using a resampling approach: we used data from a single test form to create two research forms for which the equating relationship was known. The results showed that the subgroup weighting and weak anchor approaches produced nearly equivalent linking results when group equivalence was not met. Direct (random groups) linking methods produced the least accurate results because of nontrivial bias. When the degree of group nonequivalence was small, combining subgroup weighting with anchor-based linking improved linking accuracy only marginally over using the weak anchor alone.
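The evaluation logic described above lends itself to a brief illustration. The Python sketch below is not the authors' code; all names (replicate_eqs, criterion_eq, subgroup_weights, target_props) are hypothetical. It shows how resampled linking functions can be summarized as equating bias, standard error of equating (SEE), and RMSE against a known criterion equating, and how poststratification-style subgroup weights can align a sample's subgroup mix with target proportions.

import numpy as np

def linking_accuracy(replicate_eqs, criterion_eq):
    """Summarize resampled linking functions against a known criterion.

    replicate_eqs: (R, K) array of equated scores from R resampling
        replications at K raw-score points.
    criterion_eq: (K,) array holding the known equating relationship.
    """
    bias = replicate_eqs.mean(axis=0) - criterion_eq   # systematic error
    see = replicate_eqs.std(axis=0, ddof=1)            # standard error of equating
    rmse = np.sqrt(((replicate_eqs - criterion_eq) ** 2).mean(axis=0))
    return bias, see, rmse

def subgroup_weights(groups, target_props):
    """Poststratification-style weights so the sample's subgroup
    proportions match target population proportions."""
    groups = np.asarray(groups)
    w = np.empty(len(groups), dtype=float)
    for g, p in target_props.items():
        mask = groups == g
        w[mask] = p / mask.mean()  # target share / observed share
    return w

For instance, subgroup_weights(["A", "A", "B"], {"A": 0.5, "B": 0.5}) assigns weight 0.75 to the overrepresented A examinees and 1.5 to the underrepresented B examinees before the groups are compared or linked.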
