Generating Automated Text Complexity Classifications That Are Aligned With Targeted Text Complexity Standards

Author(s):
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Flor, Michael
Publication Year:
2010
Report Number:
RR-10-28
Source:
Document Type:
Subject/Key Words:
text complexity, readability, genre, reading comprehension

Abstract

The Common Core Standards call for students to be exposed to a much greater level of text complexity than has been the norm in schools for the past 40 years. Textbook publishers, teachers, and assessment developers are being asked to refocus materials and methods to ensure that students are challenged to read texts at steadily increasing complexity levels as they progress through school so that all students remain on track to achieve college and career readiness by the end of 12th grade. Although automated text analysis tools have been proposed as one method for helping educators achieve this goal, research suggests that existing tools are subject to three limitations: inadequate construct coverage, overly narrow criterion variables, and inappropriate treatment of genre effects. Modeling approaches developed to address these limitations are described. Recommended approaches are incorporated into a new text analysis system called SourceRater. Validity analyses implemented on an independent sample of texts suggest that, compared to existing approaches, SourceRater’s estimates of text complexity are more reflective of the complexity classifications given in the new Standards. Implications for the development of learning progressions designed to help educators organize curriculum, instruction, and assessment in reading are discussed.
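For context on the "existing tools" the abstract critiques, the sketch below shows a conventional single-score readability metric, the Flesch-Kincaid grade level, which relies only on sentence length and syllable counts. This is not SourceRater's method; it is included only to illustrate the kind of narrow-construct estimate the report contrasts with its broader approach. The syllable counter is a rough heuristic, and the sample text is an arbitrary illustration.

```python
import re


def count_syllables(word: str) -> int:
    """Approximate English syllable count using a vowel-group heuristic."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    # Rough silent-'e' adjustment, e.g. "grade" counts as one syllable.
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)


if __name__ == "__main__":
    sample = ("The Common Core Standards call for students to read "
              "increasingly complex texts as they progress through school.")
    print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

Because a formula of this kind collapses complexity to surface features of sentences and words, it cannot capture the construct coverage, criterion alignment, or genre effects that the report identifies as limitations of existing tools.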
