Learning progressions (LPs) have attracted growing interest in recent years because of their potential to support the development of formative assessments for classroom use. Using an LP as the backbone of an assessment can yield diagnostic classifications of students that guide instruction and remediation. In operationalizing an LP, assessment items are classified as measuring specific LP levels and, through the application of a measurement model, students are classified as masters of specific LP levels. To support the use of LPs in instructional planning and formative assessment, the reliability and validity of both item and student classifications should stand up to scrutiny. Reliability of classifications refers to their consistency; validity refers to the alignment of these classifications with test data. A framework for testing these classifications is proposed and implemented in a validation study of a rational number LP for elementary school mathematics. As part of this study, 400 items were classified in terms of LP level of understanding, a cognitive diagnostic model of student mastery level within the LP was fitted to the data, and analyses were conducted to assess the reliability and validity of these classifications.
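To make the notion of classification consistency concrete, the sketch below computes chance-corrected agreement (Cohen's kappa) between two hypothetical raters assigning items to LP levels. This is a generic illustration of a consistency index, not the study's actual analysis; the rater data and function name are assumptions introduced for the example.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of categorical
    classifications (e.g., two raters assigning items to LP levels)."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical LP-level assignments (levels 1-4) for ten items by two raters
rater1 = [1, 2, 2, 3, 1, 4, 2, 3, 3, 1]
rater2 = [1, 2, 3, 3, 1, 4, 2, 3, 2, 1]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.722
```

Kappa values near 1 indicate that item (or student) classifications are stable across raters or replications, which is the sense of reliability at issue here.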