Many researchers and educators see a need for a system of reading comprehension assessments that improves on existing assessments in a number of ways. Some call for broadening the construct representation — the knowledge, skills or abilities that the tests seek to measure. Many call for more realistic scenarios that support better instruction.
The Reading for Understanding initiative aims to establish a framework and a set of design principles for an integrated, multilevel assessment system, not just a collection of discrete assessments. Such a system would include a broader range of constructs and assessment items (test questions) covering different levels of difficulty.
This integrated system will assess reading components — the basic skills necessary to read. It will also assess global comprehension. The global comprehension assessments will use a scenario-based design approach known as GISA, which stands for Global, Integrated, Scenario-based Assessments.
What are Reading Components?
When defining what it means to be a proficient reader, researchers have long sought to break reading down into component parts, or subskills, that may affect overall reading proficiency. Component reading skills commonly identified in the literature include, among others, vocabulary development, decoding, word recognition, morphology and reading fluency. The Reading for Understanding framework will include reading components to determine whether barriers are preventing students from acquiring the higher-level skills tested in GISA.
What is GISA Design?
The term "GISA," which stands for Global, Integrated, Scenario-based Assessments, represents the fundamental principles behind the Reading for Understanding framework in the assessment of reading proficiency.
Traditional reading comprehension assessments present students with a series of passages and items (test questions) in which the only reason to read is to earn a good score. According to Reading for Understanding researchers, this approach neither sufficiently measures comprehension nor motivates students. It does not take into account the variety of purposes for which students read outside of a testing situation as they prepare for the workforce or higher education.
The Reading for Understanding framework creates common guidelines for assessments that are built around realistic scenarios with aims such as the following:
- Providing a standard of coherence, or overall purpose, for reading
- Promoting coherence among a collection of materials
- Gathering more information about test takers
- Promoting collaboration
- Simulating valid literacy contexts
- Promoting interest and engagement
A key challenge to using scenario-based assessments is accounting for motivation and other factors that may affect how well a student performs. These factors are called performance moderators. A performance moderator may be knowledge, a skill or a disposition, such as motivation, that can explain why a student performed in a particular way. For example, if our measures of student motivation reveal that a test taker did not take the test seriously, we can report this information to teachers. In an assessment context, performance moderators may influence how someone interprets results, and thus they may affect claims about what test takers know and can do. Examples of potential performance moderators for test takers include:
- Background knowledge
- Use of strategies
- Engagement or motivation
GISA designs include test items that attempt to gauge the impact performance moderators could have on readers' performance. For example, the Reading for Understanding framework calls for assessments to begin with items that measure the extent of test takers' prior background knowledge about the domain or topic of the passages in the assessment.
Watch a presentation from ETS's R&D Forum about the Reading for Understanding initiative (Flash, 50:51).