Using Technology to Improve Assessment
At the core of the consortia assessment designs is a unifying idea: In order to accelerate the learning of our nation's students, more powerful assessment systems must support improved instruction while yielding timely, useful information for students, educators, parents and policymakers.
Both PARCC and Smarter Balanced are pressing for the inclusion of more complex tasks to assess the types of skills called for in the CCSS, for faster results and for cost efficiencies. They also plan to transition to computer-based testing (CBT), a format that is intended to enhance the fairness and quality of assessments, the range of skills and constructs that can be assessed, and the educational utility of the results.
Technological advances provide opportunities to better align assessments to instruction; improve the precision of measuring difficult-to-measure constructs; engage and motivate students; improve accessibility of assessments for students with disabilities and English-language learners; expedite return of test results; and improve the ease of interpreting test results.
At ETS, we have been creating computer-based tests since 1986, when the College Placement Tests (now the College Board’s ACCUPLACER) were launched. Since that time, we have introduced many other CBTs, including item-level adaptive tests, multi-stage adaptive tests, and simulation-based tests in education as well as in the professions. In addition, we have invested substantially in research on developing computer-based systems of balanced assessment that provide useful information for both accountability and classroom instruction. Our goal is to improve the validity of score results, to have a positive impact on teaching and learning practice, and to reduce the cost and effort involved in test administration and constructed-response scoring.
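The item-level adaptive tests mentioned above select each question based on what the test has learned so far about the examinee. As an illustration only (not a description of any ETS system), the sketch below uses a standard approach from the adaptive-testing literature: a Rasch (one-parameter logistic) model, choosing the unadministered item whose Fisher information is highest at the current ability estimate. The item pool and function names are hypothetical.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model,
    given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta.
    For the 1PL model this is p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def next_item(theta, difficulties, administered):
    """Pick the index of the unadministered item with maximum
    information at the current ability estimate theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, difficulties[i]))

# Hypothetical pool: difficulties on the same scale as ability.
pool = [-2.0, -1.0, 0.0, 1.0, 2.0]
# For an examinee estimated near average ability, the most
# informative item is the one whose difficulty is closest to theta.
chosen = next_item(0.0, pool, administered={2})
```

Because Rasch information peaks where difficulty matches ability, the selector naturally "homes in" on items near the examinee's level, which is what makes adaptive tests shorter and more precise than fixed forms of the same length.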
Transitioning to CBT can present challenges to states because of a variety of factors, including existing infrastructure, local bandwidth and connectivity, and IT training for school staff. ETS's white paper, Practical Considerations in Computer-Based Testing, explains these challenges and several CBT options, including item look and function, test forms and score reporting.
Many testing programs that we've developed now feature CBT, so our staff is highly qualified to provide a range of customized solutions to consortia as they design their own technology-enhanced assessments.
Innovation in Action
One of ETS's major research initiatives, called Cognitively Based Assessment of, for, and as Learning, or the CBAL™ initiative, is intended to create a model for an innovative K–12 assessment system that documents what students have achieved (of learning); helps identify how to plan instruction (for learning); and offers a worthwhile educational experience in and of itself (as learning).
A video (Flash) describes how CBAL assessment prototypes inform lesson planning and engage students in tasks that are worthwhile learning experiences. The CBAL English Language Arts competency model and learning progressions offer a comprehensive approach to designing summative and formative assessment, instruction, and professional development aligned with the Common Core State Standards. Results from piloting CBAL summative assessments are described in a report by CBAL team leader Randy E. Bennett, the ETS Frederiksen Chair in Assessment Innovation.
Bennett also was invited to advise the U.S. Education Department in its planning of the Race to the Top Assessment Program, which later funded several consortia to produce Common Core State Assessments. His recommendations, based on CBAL experience, are given below:
- Recommendations for the RTTT Program: presents suggestions, in response to U.S. ED questions, for what the Race to the Top Assessment Program should include and what should be required of bidders.
- Recommendations for Deploying Innovative Technologies to Create Better Assessments: offers suggestions for where to use technology to improve assessment and what strategies to use to achieve that improvement.
- Recommendations for Platform Functionality: offers ideas for the characteristics of technology delivery platforms that would best support innovative assessment.
- Recommendations for Supporting Interim Assessments: presents suggestions for the design of technology platforms for interim and formative assessment, as well as teacher scoring of constructed-response questions.
Innovating Item Development
ETS has conducted fundamental research on methods for developing item models, which can be used to create a large number of comparable items addressing the same content area. For example, we currently use the Mathematics Test Creation Assistant as part of the item development process for multiple assessments. Another tool relevant to the item development process is ETS's text complexity evaluation system, now known as the TextEvaluator℠ tool. This fully automated text analysis system is designed to provide linguistically motivated feedback about text complexity and comparability. Its feedback includes grade-level classifications aligned with the CCSS and linguistic analyses designed to help users evaluate a wide array of construct-relevant text features.
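The idea behind an item model is that one template, with constrained variables, can be instantiated into many items that measure the same skill at comparable difficulty. The sketch below is a minimal, hypothetical example of that pattern; the template, variable ranges, and distractor rules are illustrative inventions, not the Mathematics Test Creation Assistant's actual logic.

```python
import random

def generate_item(seed=None):
    """Instantiate one multiple-choice item from a simple arithmetic
    item model: 'a objects per group, b groups, how many in all?'
    Variables are drawn from fixed ranges so that every instance
    exercises the same single-digit multiplication skill."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)   # objects per pack
    b = rng.randint(2, 9)   # number of packs
    key = a * b
    stem = f"A pack holds {a} pencils. How many pencils are in {b} packs?"
    # Distractors built from common error patterns (adding instead of
    # multiplying, over/under-counting by one group); drop any that
    # collide with the key.
    distractors = sorted({a + b, a * b + a, a * b - b} - {key})
    return {"stem": stem, "key": key, "options": sorted(distractors + [key])}

# Each seed yields a different but comparable instance of the model.
items = [generate_item(seed=s) for s in range(3)]
```

Because every instance is generated from the same constrained template, the resulting items are interchangeable for form assembly, which is what lets item models scale up pool size without a proportional increase in authoring effort.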
The K–12 Center at ETS offers a variety of resources on the assessment consortia, including summaries of their designs and future plans, videos and presentations.
ETS has assisted the NAEP program in introducing numerous psychometric and assessment design innovations over the years.