Maintaining comparability of test scores is a major challenge for testing programs that administer tests almost continuously. Among the potential problems are scale drift and the rapid accumulation of errors. Many standard quality control techniques, which can effectively detect and address scale drift for programs with only a few administrations per year, are not always adequate to detect changes in a complex, rapid flow of scores. To address this issue, Educational Testing Service has been conducting research into applying data mining and quality control tools from manufacturing, biology, and text analysis to scaled scores and other relevant assessment variables. Data mining tools can identify patterns in the data, and quality control techniques can detect trends. This type of analysis of scaled scores is relatively new, and this paper gives a brief overview of its theoretical and practical implications. More in-depth analyses are needed to refine these approaches so that they better match the kind of data produced by educational assessments.
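To make the idea of borrowing quality control tools from manufacturing concrete, the sketch below applies a tabular CUSUM chart, one classical technique for detecting small sustained shifts in a process mean, to a simulated stream of mean scaled scores. The function name, the scale parameters (target 500, SD 5), and the simulated data are illustrative assumptions, not taken from the paper; the paper does not specify which charting method was used.

```python
def cusum_shifts(scores, target, sd, k=0.5, h=4.0):
    """Tabular CUSUM in standardized units.

    Returns the indices at which the upper or lower cumulative sum
    exceeds the decision threshold h, signaling a sustained shift
    in the mean away from `target`. `k` is the reference value
    (slack), both expressed in standard-deviation units.
    Parameter defaults are conventional choices, not from the paper.
    """
    c_plus = c_minus = 0.0
    alarms = []
    for i, x in enumerate(scores):
        z = (x - target) / sd            # standardize the observation
        c_plus = max(0.0, c_plus + z - k)    # accumulates upward drift
        c_minus = max(0.0, c_minus - z - k)  # accumulates downward drift
        if c_plus > h or c_minus > h:
            alarms.append(i)
            c_plus = c_minus = 0.0       # restart the chart after an alarm
    return alarms

# Deterministic demo: mean scaled scores stable at 500, then a jump to 520
# (a 4-SD shift) at index 10. Alarms fire shortly after the shift begins.
demo = [500.0] * 10 + [520.0] * 10
print(cusum_shifts(demo, target=500, sd=5))  # → [11, 13, 15, 17, 19]
```

A Shewhart chart would also flag a jump this large; CUSUM is shown because its sensitivity to small, gradual shifts makes it a natural candidate for scale drift, which typically accumulates slowly across many administrations.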