Maintaining score stability is crucial for an ongoing testing program that administers several tests per year over many years. One way to slow the drift of the score scale is to use an equating design with multiple links. In this study, we use operational and experimental SAT data collected from 44 administrations to investigate the effect of accumulated equating error on equating conversions and the effect of using multiple links in equating. No equating error is directly observed or calculated in the study. Instead, we focus on the behavior of the equating conversions after a series of equatings under the nonequivalent groups with anchor test design and analyze the effect of equating error on the conversions. We observe that the single-link equating conversions drift further away from the operational ones as more equatings are carried out. Analysis of variance is used to decompose the scale score means and conversions into two major factors, administration month and year, for both single- and multiple-link equating results. Seasonality is seen in the data. In addition, the single-link conversions exhibit an instability that is not evident in the operational data. A statistical random walk model is offered to explain the mechanism of scale drift in equating caused by random equating error.
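The random walk mechanism mentioned above can be illustrated with a small simulation. This is only a sketch of the general idea, not the model from the study: it assumes each equating link adds an independent, additive random error to the conversion, so the accumulated drift after n links is a sum of n such errors, and its standard deviation grows roughly as the square root of n. All function names and parameter values here are hypothetical.

```python
import math
import random


def simulate_drift(n_links, sigma, n_reps, seed=0):
    """Illustrative random-walk sketch: each equating link adds an
    independent N(0, sigma) error to the conversion; returns the
    empirical SD of the accumulated drift over n_reps replications."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_reps):
        drift = 0.0
        for _ in range(n_links):
            drift += rng.gauss(0.0, sigma)  # error from one equating link
        finals.append(drift)
    mean = sum(finals) / n_reps
    var = sum((d - mean) ** 2 for d in finals) / (n_reps - 1)
    return math.sqrt(var)


# Under this assumption, drift SD grows like sigma * sqrt(n_links):
sd_short_chain = simulate_drift(n_links=4, sigma=1.0, n_reps=20000)
sd_long_chain = simulate_drift(n_links=16, sigma=1.0, n_reps=20000)
```

With these hypothetical settings, the longer chain shows roughly twice the drift SD of the shorter one (sqrt(16)/sqrt(4) = 2), consistent with the observation that single-link conversions drift further from the operational ones as more equatings are chained.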