Using self-reported but empirically verified repeater groups, we analyzed real test data from many administrations of a graduate admissions examination given in a non-English language to investigate repeater effects on score equating under the nonequivalent groups with anchor test (NEAT) design. Both linear and nonlinear equating models were used to derive equating functions for the study groups. We evaluated scaled score differences among equatings based on the total group, the repeater group, and the first-timer group, using simple-difference statistics and subpopulation invariance measures that have been developed and widely applied over the past decade. Standard errors of the statistics summarizing scaled score differences were estimated with a simulation approach, providing statistical criteria for judging the significance of equating differences. In addition, scaled score differences critical to admissions screening served as criteria for evaluating the practical significance of equating differences. To put the investigation of repeater effects in perspective, we also analyzed the repeater data to characterize repeater performance trends. Overall, we found no significant effects of repeater performance on score equating for the examination studied. Although many of the equating differences were practically significant, most of those differences were not statistically significant. Further research with larger repeater samples is recommended to help explain the practically significant equating differences consistently observed in this study for the repeater group. Potential problems associated with small repeater samples, issues with the practical criterion for evaluating the significance of equating differences, and study limitations are also discussed.
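The general idea of comparing subgroup and total-group equating functions, and attaching simulation-based standard errors to their differences, can be sketched as follows. This is a minimal illustration with simulated data, not the study's actual procedure: it uses a simplified linear equating function (no anchor test, so not a full NEAT-design method such as Tucker or chained equating), a bootstrap in place of whatever simulation design the study used, and entirely hypothetical group means, standard deviations, and sample sizes. All variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_equating(x, y):
    """Linear equating of X-scale scores onto the Y scale:
    e(x) = mu_y + (sd_y / sd_x) * (x - mu_x)."""
    mx, sx = x.mean(), x.std(ddof=1)
    my, sy = y.mean(), y.std(ddof=1)
    return lambda score: my + (sy / sx) * (score - mx)

# Hypothetical simulated data: new-form scores for first-timers and a
# smaller repeater group, plus old-form scores for a reference group.
first_timers = rng.normal(50, 10, size=2000)
repeaters = rng.normal(48, 9, size=300)  # small repeater sample
old_form = rng.normal(52, 10, size=2000)

total = np.concatenate([first_timers, repeaters])
points = np.arange(20, 81)  # score points of interest

# Equating functions estimated in the total group vs. the repeater subgroup,
# and their difference at each score point (the quantity being evaluated).
e_total = linear_equating(total, old_form)
e_rep = linear_equating(repeaters, old_form)
diff = e_rep(points) - e_total(points)

# Bootstrap standard error of the subgroup-vs-total equating difference.
B = 500
boot = np.empty((B, points.size))
for b in range(B):
    rep_b = rng.choice(repeaters, size=repeaters.size, replace=True)
    tot_b = rng.choice(total, size=total.size, replace=True)
    old_b = rng.choice(old_form, size=old_form.size, replace=True)
    boot[b] = (linear_equating(rep_b, old_b)(points)
               - linear_equating(tot_b, old_b)(points))
se = boot.std(axis=0, ddof=1)

# Statistical criterion: flag score points where the difference exceeds
# roughly two standard errors. A practical criterion would instead compare
# |diff| against a scaled score difference that matters for admissions.
flagged = np.abs(diff) > 2 * se
print(f"max |difference| = {np.abs(diff).max():.2f}, "
      f"points flagged: {int(flagged.sum())}")
```

Note how the two criteria can disagree, as the abstract reports: a difference can exceed a practically meaningful threshold while still falling within two standard errors, especially when the repeater sample is small and the standard errors are correspondingly large.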