Reading for Understanding — Framework & Design Principles
Many researchers and educators see a need for a system of reading comprehension assessments that improves on existing assessments in several ways. For example, some see a need to broaden the construct representation of such assessments, that is, to define the knowledge, skills, or abilities that the tests purport to measure in ways that are more realistic and better support instruction.
The Reading for Understanding initiative aims to establish a framework and a set of design principles for an integrated, multilevel assessment system — not just a collection of discrete assessments. Such a system would include a broader range of constructs and assessment items (test questions) that cover different levels of difficulty.
The system will assess reading components as well as global comprehension. The global comprehension assessments will use a scenario-based design approach known as GISA, which stands for Global, Integrated, Scenario-based Assessments.
What are Reading Components?
When defining what it means to be a proficient reader, researchers have long sought to break reading down into component parts, or subskills, that may affect one's overall reading proficiency. Component reading skills commonly identified in the literature include vocabulary development, decoding, word recognition, morphology, and reading fluency, among others. The Reading for Understanding framework will include reading components to determine whether any of them present a barrier to the higher-level skills tested in GISA.
What is GISA Design?
GISA stands for Global, Integrated, Scenario-based Assessments. The term reflects a fundamental principle of the Reading for Understanding framework, which, in addition to including reading components, also aims to address global comprehension in the assessment of reading proficiency.
Traditional reading comprehension assessments present students with a series of passages and items (test questions) in which the sole purpose for reading is to do well on the items. Reading for Understanding researchers believe that this traditional approach does not adequately represent the variety of purposes for which students read outside of a testing situation as they prepare for the workforce or higher education.
The Reading for Understanding framework creates common guidelines for assessments that are built around realistic scenarios with aims such as these:
- Providing a standard of coherence, or overall purpose, for reading
- Promoting coherence among a collection of materials
- Gathering more information about test takers
- Promoting collaboration
- Simulating valid literacy contexts
- Promoting interest and engagement
A key challenge in using scenario-based assessments is accounting for the potential effect of performance moderators. A performance moderator is a knowledge, skill, or disposition, such as motivation, that may help explain why a student performed as he or she did. In an assessment context, performance moderators may influence how someone interprets results, and thus they may affect claims about what test takers know and can do. For example, if measures of student motivation reveal that a test taker did not take the test seriously, that information can be reported to teachers. Examples of potential performance moderators identified in the theoretical and empirical literature on reading comprehension include test takers':
- background knowledge
- use of strategies
- engagement or motivation
GISA designs include test items that attempt to gauge the role performance moderators may play in a reader's performance. For example, the Reading for Understanding framework calls for assessments to begin with items that assess the extent to which test takers possess prior background knowledge about the domain or topic of the passages in the assessment.
Watch a presentation from ETS's R&D Forum about the Reading for Understanding initiative (Flash, 50:51).