ETS Opt-Out Research Forum July 2016
People in this Video:
Randy Bennett, Norman O. Frederiksen Chair in Assessment Innovation, ETS
Wade Henderson, President and CEO, The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund.
On-screen: [Randy Bennett, Educational Testing Service, Princeton, NJ, firstname.lastname@example.org. Presentation at ETS Research Forum, Washington, DC, July 2016.]
Speaker: Randy Bennett, Norman O. Frederiksen Chair in Assessment Innovation, ETS
Speaker: Randy Bennett - Thank you very much. It's a pleasure to be here. Can everybody hear? Okay, great. My talk today is titled, as you can see, Opt Out: An Examination of Issues. I'm going to begin by suggesting that one way to frame Opt Out is as a classic conflict between individual rights, in which parents have the right to decide whether their children should participate in state assessment, and the collective good, in which the state has the responsibility to monitor and report on the effectiveness of education. As I think we'll see, while this framing captures one aspect of the Opt Out phenomenon, that phenomenon is considerably more complicated than this simple framing might suggest.
In this talk, I'm going to try to unwind some of that complication by addressing the following questions. First, what is Opt Out and why does it matter? Second, how big is it? Third, does the public support it? Fourth, who's participating and why? Fifth, does Opt Out imply a general lack of support for testing among the public, and finally, what should we do in response to the movement and to public sentiment around testing?
Let's start with that first question—what is Opt Out? Opt Out is an organized effort to refuse to take standardized tests—primarily state-mandated K-12 assessments. The movement consists of national, state, and local groups like United Opt Out at the national level and Opt Out Washington among the states that have produced websites, engaged in media campaigns and undertaken lobbying efforts.
Although the movement goes back several years, it came to national attention in 2015 with headlines like the ones you see here. From CNN, Parents all over the United States ‘opting out' of standardized student testing, from the New York Times, ‘Opt Out' Becomes Anti-Test Rallying Cry in New York State, from the Washington Post, Why the Movement to opt out of Common Core tests is a big deal, and from PBS, What galvanized standardized testing's opt out movement?
The narrative presented in much of this media coverage, especially early on, was of a grassroots movement gone viral. The leaders of that effort were depicted as parents concerned about a variety of things, including lots of lost instructional time due to testing, the anxiety created by state assessment for students, the corporatization of American education by certain foundations and by testing companies, and of course, the Common Core.
As you probably know, this media attention combined with the movement's active advocacy generated significant political action. That political action has taken at least two forms. One form is to legitimize Opt Out: the Every Student Succeeds Act, while retaining No Child Left Behind's 95% test participation requirement, removes the sanctions for missing it—leaving it to states to decide for accountability purposes how to treat low-participation districts and schools. A second form of political action has been to otherwise reduce testing requirements. Several states, like Texas, have eliminated tests, most often at the secondary level, and other states have limited testing time. Florida has legislatively capped the amount of time that can be expended on state- and district-mandated assessment.
On-screen: [The Wall Street Journal: Obama Calls for Capping Class Time Devoted to Standardized Tests. Education Department to make it easier for states to satisfy federal mandates.]
The political pressure became so intense that even President Obama joined the discussion last October, calling for limits on testing time, but also for no reduction to the federal participation requirement. Why no reduction? Simple—participation matters. It matters because state assessments are the only comparable measures of achievement at the building level, and, perhaps more importantly, they're the only measures of building-level achievement that are disaggregated by demographic group. To the extent that students are not missing at random, opt out can distort results, preventing parents, educators, policy makers, and the public from understanding how effective education is at the system level, as well as how effective specific schools are at educating particular demographic groups of kids.
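The "missing at random" point is statistical, and a small sketch can make it concrete. The simulation below (every number is hypothetical; nothing here comes from actual state data) shows that when non-participation is random, a school's estimated mean stays close to the truth, but when refusals concentrate in one demographic group, the school-level estimate is biased.

```python
import random

random.seed(0)

# Hypothetical school with two demographic groups of 500 students each,
# scoring on a 0-100 scale (group means of 80 and 60 are invented).
group_a = [random.gauss(80, 10) for _ in range(500)]
group_b = [random.gauss(60, 10) for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

true_mean = mean(group_a + group_b)

# If 20% of students refuse to test *at random*, the estimate based on the
# remaining 800 stays close to the truth.
random_sample = random.sample(group_a + group_b, 800)

# But if refusals concentrate in one group (here, 40% of group A opts out
# while group B participates fully), the school-level mean is biased, and
# the group A subgroup result rests on a self-selected 60% of its students.
nonrandom_sample = random.sample(group_a, 300) + group_b

print(round(true_mean, 1), round(mean(random_sample), 1),
      round(mean(nonrandom_sample), 1))
```

Under these made-up numbers, the concentrated-refusal estimate lands visibly below the true school mean, while the random-refusal estimate does not, which is exactly the distortion the participation requirement is meant to prevent.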
How big is Opt Out? States have not yet released the data from the 2016 spring assessment cycle, but for 2015 we do have a reasonably good idea. In December 2015, the U.S. Education Department identified 13 states as not meeting participation requirements for the prior school year. A review of the notification letters sent to states, the available state documentation on their websites, and news coverage suggests that non-participation varied quite widely across those 13 cited states, across the districts within those states, and across school grades. For states that reported, or made it possible to estimate, a rate over all tested grades, we find Idaho having a non-participation rate of only 2%, California—our most populous state—having a non-participation rate of only 3%, Oregon less than 4%, Connecticut 4%, but Colorado 10% and 11%, depending upon the subject area.
In cases where rates for grades 3-8 were available, there was also wide variation—Washington State having rates of 2% and 3%, Maine a little higher at 5% and 6%. Rhode Island had 10% and 11%, but most extreme of all was New York, which had a non-participation rate of 20%, or over 200,000 students refusing to test. With respect to districts in New York, the Patchogue-Medford District on Long Island had a rate of about 70% and upstate, the Onteora Central School District—a rate of 66%. But, the largest district in the state—the New York City schools—had a rate of 1.4%, very similar to the rate reported by the 66 urban districts composing the Great City Schools.
Individual districts proved to be the reason for U.S. ED citations for several states, including ones with acceptable overall rates, like California, Idaho, and Wisconsin. With respect to grades, the high school non-participation rates appear to be far higher than the single digit ones we find at the lower levels. For example, in Maine, the rates at 11th grade were 39% and 40%, and Washington 49% and 53%. High school rates appear to have been the sole cause of U.S. ED citations for Connecticut, Delaware, and North Carolina.
Why did non-participation appear to be so much higher in the upper grades? Probably because kids at that level were concerned with other things like college admissions tests, advanced placement exams, and probably senioritis. [Laughter.] In sum, significant levels of non-participation appear to have occurred in a minority of states—13 to be exact—and except for New York, Colorado, and Rhode Island, these occurrences were restricted to relatively small subsets of the eligible test-taking population, like students in particular districts and in the high school grades.
Does the public support Opt Out? The answer is generally no. The Education Next survey conducted in spring of 2015 found that 59% of the public was opposed to Opt Out, with only 25% in favor. The PDK/Gallup national survey conducted at the same time found a much smaller margin of public opposition—44% to 41%. However, when asked if they would opt their own children out, public school parents opposed that idea by a very wide margin—59% to 31%. A third survey, done by NWEA and Gallup, found that only 15% of parents were planning to opt their own kids out. Finally, the National Parent Teacher Association, with some four million members, recently declared its opposition, stating that the National PTA does not support state and district policies that allow students to opt out of state assessments—policies that some states do have. All of that said, what the data do clearly suggest is a sizable core of support for test refusal. For programs that depend on grants of legitimacy, which state assessment activities certainly do, these numbers should be very worrisome.
Who exactly is Opting Out? As the differences in refusal rates between New York City and the outlying districts suggest, it seems to be the case that Opt Outs represent a demographically particular population segment, at least in 2014-2015. For example, the Colorado Department of Education reported that its high school opt outs, which is where the vast majority of test refusal occurred in that state, were somewhat more likely to be white and less likely to be low SES. In New York State, the distinctions were much clearer. The State Education Department reported that its 3rd-8th grade Opt Outs were much more likely to be white and to be from low- to average-need districts—that is, from more affluent locations. They were much less likely to be from low SES families and less likely to be English language learners. Finally, and interestingly, they were less likely to have reached proficiency on the previous year's test, before Opt Out had become a significant phenomenon.
Although it might seem counterintuitive for Opt Outs to be, on average, both more advantaged and lower scoring, that result is understandable when we consider that SES and achievement are, in fact, far from perfectly related. Demographic differences in support for Opt Out were also apparent in the PDK/Gallup poll where 44% of Whites supported it, but only 35% of Hispanics and 28% of Blacks, suggesting rather striking demographic group differences.
What underlies these group differences? One possibility is that they're driven by attitudes toward testing more generally, and, in fact, attitude differences do appear in national polls. For example, a PDK/Gallup poll found that the percentage of public school parents who consider test scores important for measuring the effectiveness of their community schools was 72% for Black parents, but only 55% for White parents, with Hispanic parents falling somewhere in between. An earlier survey shows a similar divide for income, with the percentage of parents considering regular assessment important being 85% for those with incomes under $50,000 per year, but only 63% for those earning more than $100,000 per year.
Why do minority group members and those of relatively modest means appear to have more favorable attitudes toward testing than the rest of the population? One possibility is that the benefits these groups accrue from state assessment are quite different. Under NCLB, schools were judged in significant part on test performance disaggregated by demographic group and, in fact, for families experiencing persistently failing schools in low SES areas, NCLB required states to provide relief. It required them to offer parents the opportunity to transfer their kids to non-failing schools in the same district. It required the school to pay for private tutoring. It required the restructuring of schools in extreme cases, and ultimately, it required their closure. On the other hand, families whose children attended more successful schools received little, if any, practical benefit from state assessment, and occasionally a disincentive, when favorite teachers were moved to lower-performing schools to better balance the distribution of high-quality teachers across a district.
Given these differences among racial, ethnic, and SES groups in attitudes, in participation, and in supposed benefits from state testing, it should not be surprising that Opt Out has become a civil rights issue. That issue was brought front and center in spring 2015 with the release of a statement by 12 organizations under the auspices of the Leadership Conference on Civil and Human Rights, which included representation from the African American community (the NAACP and National Urban League), the Hispanic community (the National Council of La Raza and the League of United Latin American Citizens), the Asian community (the Southeast Asia Resource Action Center), the disability community (the three organizations you see listed there), and the American Association of University Women.
On-screen: [Opt Out: Civil Rights Issue. Disability Organizations: Disability Rights Education and Defense Fund; National Disability Rights Network; TASH.]
On-screen: [Leadership Conference (2015) Statement.]
Why were these groups so concerned? Let's take a look at what they said. The educational outcomes for the children we represent are unacceptable by almost every measurement. It's not just standardized tests. It's graduation rates, it's the skills these kids come out of school with, it's the rates at which they enter college, and it's the kinds of jobs they're able to get. We rely on the data provided by annual statewide assessments to advocate for better lives and outcomes for our children. Until federal law insisted that our children be included in these assessments, schools would try to sweep disparities under the rug. Hiding the achievement gaps meant that schools would not have to allocate time, effort, and resources to close them. Our communities had to fight for the simple right to be counted and we are standing by it. But, abolishing the tests or sabotaging the validity of their results only makes it harder to identify and fix the deep-seated problems in our schools.
How did we get here? What's behind Opt Out? One of the most common complaints is that too much time is spent on testing, thereby detracting from learning and instruction. Not surprisingly, a common political response to Opt Out has been to reduce or otherwise limit testing time, as recommended by President Obama himself in October 2015. That political response should motivate us to ask exactly how much time state-mandated tests actually take up. The Council of the Great City Schools inventoried the testing practices of its 66 large urban districts, finding that, on average, about 2% of instructional time was going to mandated tests, or about 20 to 25 hours per year. In districts that administered them, only seven to nine hours were being devoted to PARCC or Smarter Balanced.
On-screen: [Testing Time. Teoh et al. (2014) studied selected grades in 32 large urban districts and their surrounding suburban districts.]
If less than half of total testing time is being taken up by the very longest of state assessments, what's that other time going to? For an answer, we turn to Lazarin, who studied pairs of urban and suburban districts in seven states, finding that, on average, 1.6% of instructional time went to district and state testing—only a bit less than the Great City Schools' estimate. Most of that time, however, was coming from districts, which required from 1.6 to 3 times as many tests as states did, depending on the grade span in question. Equally interesting was that urban students, who we know have relatively low opt-out rates, spent more time on testing than suburban students, who we know have much higher opt-out rates. Similar urban-suburban differences were found in a third study, where urban districts spent an average of 1.7% of instructional time on state and district tests in the two grades sampled, or about 17 hours, and suburban districts spent only an average of 1.3% of instructional time on testing—about 13 hours.
As these studies suggest, the time actually spent taking mandated tests, on average, does not seem to be especially large, particularly in the suburban locales where Opt Out tends to be higher. In hours, the averages are only about a third to a half of the 45-hour cap that was legislated by the state of Florida. In percentages, the averages for state and district testing combined are close to the 2% cap recommended by President Obama for state testing alone. These actual time allocations, of course, contrast dramatically with data on educator perceptions coming from polls like NWEA and Gallup's. Educator perceptions may well be influenced by other factors, including very long testing windows and the totality of the mandated and non-mandated tests used by districts, which, in many cases, is quite large. But those perceptions also might be affected by at least one other factor—the extent and type of test preparation—a topic to which we now turn.
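The hours-versus-percentages arithmetic above depends on how long a school year is. A quick back-of-the-envelope check (the 180-day, 6-hour-per-day school year is an assumption of mine, not a figure from the talk) shows how the 2% figure maps onto the cited 20-to-25-hour range and how the Florida cap compares:

```python
# Hypothetical school year: 180 instructional days at 6 hours per day.
# (These two numbers are illustrative assumptions, not data from the studies.)
instructional_hours = 180 * 6  # 1,080 hours per year

# The Great City Schools estimate: about 2% of instructional time on
# mandated tests, which under this assumed year works out to ~21.6 hours,
# squarely inside the reported 20-to-25-hour range.
state_district_testing = 0.02 * instructional_hours

# Florida's legislated 45-hour cap, expressed as a share of the same year:
# roughly 4.2%, about double the observed average.
florida_cap_share = 45 / instructional_hours

print(round(state_district_testing, 1), round(100 * florida_cap_share, 1))
```

On those assumptions, the observed averages of 13 to 25 hours are indeed about a third to a half of the 45-hour cap, which is the comparison made above.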
Three studies speak to this concern. The first, by Rogers and Mirra, collected survey responses from almost 800 teachers in California high schools in the fall of 2013. They combined test preparation time for district-, charter school-, and state-mandated tests with administration time for district and charter tests, but not for state tests, so add seven to nine hours to the figures below and you'll come close to a total estimate for California high schools. They found that in high-poverty schools, 47 hours per year on average, or 4.5% of instructional time, was being devoted to those sources, but in low- and mixed-poverty schools, only 26 and 28 hours, or 2.5 and 2.7 percent of instructional time, was due to testing and test preparation. Interestingly, instructional time lost to teacher absence, disrupted days due to assemblies, special days like prom, and pre- and post-vacation shutdown and startup was about two times the amount that was due to test preparation and testing.
A second, more recent national survey was conducted by the Center on Education Policy. They collected data from about 3,300 teachers in elementary through high school, finding that, on average, teachers reported spending 14 days per year preparing students for state tests and another 12 days for district-mandated tests. More time was reportedly spent by teachers from high- and medium-poverty schools than from low-poverty schools—again, consistent with greater time spent on testing itself in high-poverty schools and the greater value ascribed to tests in those communities. Finally, most teachers felt too much time was being spent preparing for state and district assessments. Twenty-six days per year—the sum of 14 and 12—is a lot of time, but in order to judge the value of that time expenditure, it would be really good to know exactly what test preparation entailed.
We can find the beginnings of an answer to that question in our third study—this one by Teoh and colleagues, who collected survey responses from almost 400 teachers in grades pre-K through 12 in the Teach Plus network. They found that 57% of teachers said too much time was spent on test prep, versus 43% who felt that the time was about right or too little. These two groups reported spending 16 hours per month for the "too much time" group and 12 hours per month for the "about right" group—values that aren't far from those identified by the Center on Education Policy study I just mentioned. Notably, teachers in the group that did not think too much time was spent on preparation were more likely to report that their state tests and their district tests were well aligned with their curriculum, implying that teachers will, very understandably, view time spent on test preparation more favorably when it's in the service of curricular goals. That said, even in the more positive group, less than half of the teachers perceived good alignment between tests and curriculum—a result also found in several other recent analyses.
On-screen: [Test Preparation Time: Developed students' computer skills (93%); Ran writing workshops to improve use of evidence or paraphrasing (88%); Provided extension or challenging activities (86%); Answered text-dependent questions related to core content (86%); Provided students typing practice (84%); Provided interventions related to improving standards measured on tests (62%); Taught test-preparation-specific activities provided in the curriculum (59%).]
What exactly did test preparation involve for these teachers? Teoh asked the teachers to indicate how much time they spent on different preparation activities and whether each was a good use of time. For the subsets of teachers responding, a majority rated the most time-consuming preparation activities as worthwhile, including things like what you see here—developing students' computer skills, running writing workshops to improve use of evidence, providing extension or challenge activities, answering text-dependent questions related to core content, providing students with typing practice, providing interventions related to improving standards, and probably least relevant to standards of all, teaching test preparation-specific activities that are included in the curriculum. At least for these respondents, it seems as if some significant portion of what was called test preparation could, in fact, be viewed as learning activity the teachers not only valued, but that was important in and of itself.
Given the available evidence, then, I would suggest that actual testing time is not credible as a primary cause for Opt Out. At 2% of instructional time, it's simply not extreme enough to have incited such a movement. Moreover, test preparation that is well aligned to the curriculum, no matter how extensive, would also seem unlikely as a root cause because such preparation should, at least in principle, be almost indistinguishable from instruction. If those propositions are true, how did Opt Out become such a big national phenomenon?
Any plausible path to explaining Opt Out must obviously go through New York because no other state had anywhere near that state's incidence rate. To get to New York, we must first return to No Child Left Behind. As you all know, No Child Left Behind mandated state accountability requirements in return for Title I funding. Those requirements included establishing content and performance standards, conducting annual testing with respect to those standards, and disaggregating the results at the building level by demographic group. The motivation behind those accountability requirements was, of course, education reform, driven by evidence that the U.S. K-12 education system was no longer internationally competitive and by the fact of wide disparities in education quality and achievement across demographic groups.
The theory of action underlying those accountability requirements was that standards would focus educators and students on what it was kids needed to know and be able to do, that testing would identify who needed help at the district, school, classroom, and individual kid levels, and that school level sanctions would motivate staff to improve the quality of teaching and learning.
By 2009, however, it was apparent that there was wide variation in the quality of both the content and the performance standards that states chose to implement. As a result, the states, under the auspices of the National Governors Association and the Council of Chief State School Officers, launched the Common Core State Standards Initiative, with the goal of higher, fewer, and deeper standards, and obviously, more uniformity across the states. At the very same time, at the height of the Great Recession, Congress passed the American Recovery and Reinvestment Act, which contained two very important provisions. One was the Race to the Top Assessment Program, which funded development of the Common Core state assessments. The second was the Race to the Top Fund, which put on the table $4.35 billion for states to individually pursue education reform. Of special note is that the Race to the Top Fund selection criteria gave particular advantage to states that committed to doing several things: developing and adopting common standards, developing and implementing common assessments, implementing evaluation systems for teachers and principals using student achievement growth as a significant factor, and using those evaluations for some highly consequential purposes, like compensating, promoting, retaining, granting tenure for, and removing educators.
Given their desperate financial circumstances—remember, this was the height of the Great Recession—most states changed law or policy to compete for these grants. This direction was pushed further by the Education Department's waiver program, which set aside some of the accountability provisions of NCLB in return for, among other things, locking in test-based teacher evaluation. It should come as no surprise, then, that by 2015, 43 states required or were transitioning to the use of student test scores for teacher evaluation.
That brings us to New York, for it was in New York that we see the Race to the Top Fund and the Common Core State Standards Initiative collide. In 2014, 96% of New York teachers were rated as effective or highly effective by the evaluation system then in place, but only 31% of students were proficient in English Language Arts and 36% in Math on New York's Common Core aligned tests.
Puzzled by these seemingly contradictory characterizations of the quality of New York's education system, Governor Cuomo, in March of 2015, asked the state legislature to do two things. First, reduce the role of principal judgment in teacher evaluation, and second, increase the weight of test-based growth indicators from 20% to 50%. Why? Because from Cuomo's point of view, the principals were, perhaps unknowingly, inflating teacher ratings. In his mind, New York's education system was shortchanging its kids, and this was his way of fixing it. Unfortunately for Governor Cuomo, the teachers' union saw it differently. It responded by urging parents to boycott the spring assessment, which they did in the large numbers I described earlier.
Not surprisingly, the dislike shown by the union for using tests to evaluate teachers was not particular to New York, nor was it particular to teachers. The PDK/Gallup poll that I mentioned earlier found that 63% of public school parents were against that use, with only 43% in favor. Why would public school parents oppose the idea of using student test scores to evaluate teachers? A North Bellmore, Long Island parent put it very simply: "The minute they tied teacher evaluations to those tests, they set up the classrooms to be about nothing except testing, so of course teachers are going to make kids spend all of their time preparing for the test. Their careers depend on it." That last sentence, I think, is highly significant because it implies that the test preparation cited here was likely to have been quite different from the learning activity described in the Teoh study. Otherwise, why would this parent be so strenuously opposed to it?
Ironically, the research community has consistently argued for caution in using students' test performance, including growth measures, for making career decisions about teachers, because other factors affect student achievement that are both beyond the control of the classroom teacher and beyond our ability to methodologically remove. Policy papers outlining such concerns have been published by the American Educational Research Association (AERA) and the National Academy of Education, the American Statistical Association, and the Economic Policy Institute. I think it's plausible to suggest that what happened in New York was a chain reaction. It began with a dramatic increase in the role of test results for educator evaluation, combined with the introduction of a much more rigorous test. That led to pressure on teachers and principals to perform, resulting in excessive and narrow test prep—that is, activities not well aligned to curriculum and standards—anxiety for kids from the highly emotionally charged nature of that preparation, and unhappiness for parents about that anxiety and about the extent and nature of the test preparation they were observing. In concert with one another, as well as independently, the union, parents, and principals mobilized, leading parents to opt their kids out, media to cover it, and advocates to lobby education officials and elected representatives, all of which attracted others with education-related complaints to join the movement, ultimately generating significant political reaction and, in some cases, overreaction.
What was different about California, which had a very low overall non-participation rate? Among other things, it had no state policy that linked teacher evaluation to student test scores. Other states with relatively low non-participation rates, including those cited by ED, didn't make test scores the preponderant evaluation criterion as New York did, got ED permission to delay implementation, or walked back their original Race to the Top-induced policies entirely.
I showed data earlier suggesting that the public doesn't appear to support Opt Out, but how much public support is there for K-12 testing more generally? Recent data on this question come from three surveys, each of which I've already mentioned. The Education Next survey found that 67% of respondents favored NCLB annual testing; only 21% were opposed. Parents' views were similar to those of the public, but teachers were evenly divided. In the PDK/Gallup national poll, 67% of respondents felt that using tests to measure learning was important for improving their public schools. However, an almost equal percentage felt that there was too much emphasis being placed on testing in their schools. Last, compared to other indicators, like teachers' grades, teachers' written observations, and examples of student work, tests were preferred as a progress measure by the smallest percentage of respondents.
On-screen: [Tests: 16% of respondents, Teacher grades: 21%, Teacher's written observations: 26%, Examples of student's work: 38%.]
The last survey was conducted by the Great City Schools among parents in selected member districts, a population you'll recall as having very low Opt Out rates. Not surprisingly, results indicated strong support for measurement and its role in accountability, but also strong support for what constitutes a better test. Large majorities of respondents agreed with the following statements: "Accountability for how well my child is being educated is important and it begins with accurate measurement," "Children should be required to take tests that ensure they're learning the standards; new tests should replace current tests," "Better tests would ask students to do more than provide an answer by filling in bubbles or picking multiple choice questions," and "Better tests would expect students to demonstrate their thinking—thinking critically and solving complex problems." In combination, these three surveys suggest that the public may have more favorable views toward testing than either the existence of the Opt Out movement or the media coverage of it would imply, but the results also indicate a perception that the tests respondents know are not the only method, that they're not necessarily the best available method, and that there are specific ways in which those assessments could be improved.
Let me recap what I've said so far. First, Opt Out matters because state assessments help us understand the effectiveness of education at the system level and because traditionally underserved groups depend on those assessments to document achievement gaps and advocate for resources to help close them. In contrast to these underserved groups, those who Opt Out appear to be a more privileged subgroup. These demographic differences in support and participation have led to Opt Out becoming a civil rights issue. Opt Out, obviously, becomes more problematic with incidence, which, in 2015, varied very widely in the largest of the U.S. ED-cited states, from 3% in California to 20% in New York. For New York, a prime motivator was a dramatic increase in the role of test results for educator evaluation, combined with the appearance of a much more rigorous test, causing a chain of related effects. Interestingly, national polls suggest that the public opposes Opt Out, but that it also opposes the use of student tests for teacher evaluation. Finally, the polls suggest general support for state testing, but also significant concerns, including about too much emphasis on assessment and about problems with tests as the public knows them.
What should the assessment community (that is, those who make tests and study testing in schools) do to respond to the concerns raised by the movement and by the public more generally? I'll suggest two classes of action, not because I think they're necessarily going to change opponents' minds, but because I think they're the right thing to do. First, we might communicate several key messages more actively and effectively to a wide variety of audiences: policy makers, state department staff, local educators, parents, students, and the public.
What messages? One message is that the competencies needed for success in college and careers are changing and their levels are increasing. As a consequence, the measures we use for evaluating school effectiveness must be broader and they must be more demanding. Second, we know we cannot measure these competencies effectively with only traditional item formats. We agree that "test" does not need to mean "fill in the bubbles."
Third, better assessment will take significant class time if it's to meaningfully represent learning. Short tests cannot effectively cover both the breadth and the depth of the standards and, therefore, run the risk of leading to even narrower test preparation than otherwise would be the case. Fourth, more efficient assessment is going to require investment in school technology infrastructure. We know anecdotally that perceptions of too much testing are exacerbated by long windows and by classroom disruption caused by having inadequate numbers of computers. An assessment that could have taken a few days to administer in a school may end up taking a month or more because too few machines were available. A fifth key message is that using student test scores to make teacher career decisions is a highly controversial practice even within the research community. Student achievement and teacher effectiveness are not the same thing. Just because we have evidence supporting test use for evaluating student achievement doesn't automatically make those scores valid for making decisions about teacher compensation, promotion, tenure, or renewal. The last message is that participation is essential if student competency and education effectiveness are to be evaluated fairly.
As the New York Times Editorial Board said, "Opting Out is not the answer. It harms the educational chances of those who need effective schooling the most." In conjunction with communicating those messages, the assessment community needs to be working hard to turn those words into reality. How do we do that? The data are pretty clear. Significant segments of the public and of educators don't want to spend more student time on the tests that they know because to them, those tests are distant from and irrelevant to learning and instruction. When they hear the word testing, they see this. We need to get them to see something that looks more like that.
On-screen: [Turning Words into Deeds. (Graphic of a man in black and white shaded in with test-scoring paper; graphic of a man in color shaded in with a guitar, a flower, a leaf, and a Shakespeare collage.)]
To change their perception, we will have to devise tests that look like, and are, valuable learning experiences as much as measuring devices—tests that are, in the words of President Obama, worth taking. One way to do that is to build tests that model good teaching and learning practice by, for example, structuring performance tasks so that they reflect the steps that might be found in a classroom project, placing the student into a process like the one that he or she should be internalizing as a habit of mind. As a second example, you might include knowledge representations similar to the ones that proficient performers tend to use, like planning tools for writing. The premise behind including these types of devices is that repeated encounters with planning tools and structured performance tasks on summative and formative assessments should encourage the routine use of those very same devices in teaching and learning practice, thereby helping to build better students and more effective teachers through the very act of assessment participation.
Related to making tests worth taking is making tests that are worth preparing for because test content and format represent the standards as completely as possible, evoking the very same knowledge, processes, strategies, and habits of mind as would be routinely exercised in the very best of classrooms. That type of broad and deep representation can help shift the focus of prep from the test to the standards, which is where that focus belongs, so that, as implied by Teoh, preparation becomes a valued learning activity that helps students develop fluency, acquire conceptual knowledge, consolidate their knowledge, and retain and transfer it.
We need to be working hard to build tests that encourage participation because they're engaging—by, for example, incorporating highly interactive tasks, like simulations, into summative assessment, and building formative assessments into educational games. Among other things, building formative assessments into games may help us try out and refine ideas in lower-stakes settings that we can then move into higher-stakes ones.
We need to create summative tests that offer more actionable results—for example, by placing students into learning-progression levels as a starting point for teachers' formative follow-up and by describing the processes students use in problem solving as a guide to helping teachers choose instructional next steps. We need to help states and districts create coherent systems of assessment, or to use Linda Darling-Hammond's phrase, "systems of educational support," where each assessment has a clear purpose; where different assessments—summative, interim, and formative—work together to facilitate teaching and learning; and where efficiency, clarity of purpose, synergy, and utility are likely to be higher because the testing program was purposefully designed instead of being assembled incrementally and incidentally, as is too often the case.
Finally, we might try using theories of action to create coherent systems and rationalize existing ones. Theories of action describe the assessment program's intended effects and how it proposes to achieve them in ways that all stakeholders can understand. For a state or district, the theory of action becomes the basis for communicating the intended value of the assessment program and for getting early indications as to its political viability and its scientific defensibility. Returning to New York, state officials clearly had an implicit theory of action for their assessment program, as did those who formulated the Race to the Top Fund. Perhaps those programs would've been more effective if officials had explicated their theories and used them to more thoroughly explore with their constituents and the assessment community the viability and defensibility of their ideas.
I'll close by coming back to where I started. Opt Out can be framed as a classic conflict between individual rights and the collective good, but that's probably too simple, because it also contains conflicts among and between federal, state, and local responsibilities for education; social classes; racial and ethnic groups; workers and higher-level management; and education reform philosophies, like top-down, sanctions-driven reform, bottom-up reform, and middle-out reform, among other things. Given the nature of these multiple interacting dimensions, one might wonder if Opt Out is, in part, a symptom of the divisive malaise that seems to have settled across our nation. I don't know, but it should be obvious that, even more narrowly construed, the assessment community cannot solve the Opt Out problem on its own. It needs to do its part, and it needs to do its part by communicating more effectively with critics to better understand their concerns and by working with constituents, especially policy makers, to promote more appropriate assessment practices.
In addition, the community needs to take its share of responsibility for creating engaging, useful assessment approaches that have positive impact on teaching and learning—something that the ESSA Innovation Pilots may well offer a new opportunity to do. If we each do our parts—assessment organizations, policy makers, educators, parents, students, and the public, we stand a much better chance of making progress toward solutions to this problem, as well as to the more pressing concerns that challenge our country. Thanks very much.
Speaker: Wade Henderson, President and CEO, The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund.
Speaker: Wade Henderson - Well, good afternoon everyone. Good afternoon. I feel like I'm a product of divisive malaise. [Laughter.] Come on, speak up. Speak up. Randy, that was a great presentation—a lovely paper, well researched—it's probably the first paper we've had that has taken a scientific look at empirical data related to Opt Out and you should be very proud of it. I think it went very, very well. Let me re-introduce myself, guys. I'm Wade Henderson. I am the President and CEO of the Leadership Conference on Civil and Human Rights and the Leadership Conference Education Fund. The Leadership Conference on Civil and Human Rights is really the nation's premier Civil and Human Rights Coalition with over 200 national organizations, as we say, "Working to build an America as good as its ideals." I'm also the Joseph L. Rauh, Jr. Professor of Public Interest Law at the David A. Clarke School of Law, University of the District of Columbia, and as Ida mentioned, I'm also the Vice Chair of the ETS Board of Directors—Board of Trustees, actually.
I want to introduce my colleague, Liz King. Liz is the Director of Education Policy at the Leadership Conference. Liz, delighted to have you here. Liz has agreed to stay behind and answer questions. I have to leave at 1:00 p.m. I have a hard stop meeting with the House Republican Leadership that I need to—guys, when they call, I have to show—so, what can I say? I have to be there this afternoon.
Randy has done an excellent job with his analysis and he put up—I think he quoted from a policy paper that the Leadership Conference issued on May 5th of last year. We'll make that available to anyone who would like to have it. You can get it from our website, which is www.civilrights.org and you can pull that down, but we'll send it to Tom and he can distribute it as well. Randy quoted from that paper and he gave you a couple of extensive paragraphs, which explained why the civil and human rights community opposes the Opt Out concept. It boils down to one sentence, which, Randy, you didn't quote, so I'll quote that. It says, "You really can't fix what you can't measure." The reality is that if you are interested in getting data that helps you better understand what your students or our students are experiencing in school, you really can't do that without some form of measurement.
I want to address a couple of things before I start and then put it in context. First, I want to address the perceived conflict of interest that comes from being a testing and assessment company that critiques the Opt Out movement and finds it wanting. Would anyone really be surprised that ETS would take a scientific look at Opt Out and find somehow that it doesn't work properly? That's what you do. I'm also on the Board of ETS, and so some may perceive my opposition to Opt Out as reflecting that conflict of interest, and I just want to address it squarely. My organization, the Leadership Conference on Civil Rights, was founded in 1950. I have been its President for 20 years. My position on Opt Out and educational measurement predated my tenure on the Board of Trustees at ETS. I will leave my position at the end of this year, having served for 20 years, and I can assure you my position on Opt Out is going to continue after my departure. It is not a position which reflects my association with ETS, and I do not think it adequately explains ETS's position in opposition to Opt Out. I think Randy did a good job himself, and I simply wanted to acknowledge that. I think it's really important to add some context to this discussion so that we can properly understand what the Opt Out movement is and where it fits into the broad debate about what we need to do as a nation to educate all children for a bright future.
First, it's important to remember where the 95% participation rule came from—that is, the rule that 95% of all students take the exams. It became law in 2002 with No Child Left Behind because there was evidence that schools were routinely excluding students, especially those with disabilities, from the annual assessments. They would send them home on test days or have them go into another room while other students were taking the test. This meant that we had incomplete data on how all students, and especially students with disabilities, were doing, and couldn't adequately provide support for them. This, despite the fact that the Individuals with Disabilities Education Act and Section 504 of the Rehabilitation Act of 1973 had been law for decades.
Second, communities of color's relationship with standardized testing has long been fraught, for reasons that are specific to them and different from the concerns of the white parents who make up the Opt Out movement. Randy, you talked about that in focusing on the more advantaged students whose parents opposed the use of these exams. Our nation has a history of using standardized tests to deny opportunity to people of color, and there is great concern about cultural bias in standardized tests that harms students of color. That said, these communities are also more likely to be skeptical of the fairness of the education system as a whole, and more interested in comparative data to show what's going on in schools.
Third, while Randy's report correctly notes that the racial makeup of the Opt Out movement is largely white, he doesn't fully explore why that may be. The Opt Out movement arises, as he pointed out, out of concerns about the colliding rollout of teacher evaluations that included student test scores, the Common Core State Standards, and new, more challenging assessments. Raising standards and assessing all children against those standards meant that scores would be down across the board. Everyone knew this, but for white parents, this was perceived as a threat to the success of their children. Never mind that colleges and universities had been saying for years that they were having to spend too much on remediation because students were arriving unprepared. Never mind that business leaders were saying that they couldn't always find qualified people to hire. No, for those in the Opt Out movement, it's more important to use concerns about stress and teaching to the test to obscure their real goal, which is protecting their longstanding access to resources. Parents of color were far more accustomed to being told that their child's school was falling short.
Now, this context is vital to a conversation about opting out in Communities of Color. The concerns in communities of color about what's going on in schools are much broader than testing and the outsized attention that the Opt Out movement attracts is just one more example of how concerns of white parents obscure the needs, desires, and expectations of communities of color.
Now, the Leadership Conference Education Fund recently conducted a poll of Black and Latino parents and families that we called the New Education Majority Survey, which we plan to conduct annually. We call it the New Education Majority Survey because, for the first time in American history, students of color are a majority of public school students, and so trying to understand how these parents perceive these issues is an important element in planning for the future and in engaging in the kind of reform that will have significant impact in helping to transform schools.
Now, we found that testing concerns barely register at all among these parents, and I think, Randy, your data suggests that as well. In an open-ended question where respondents had the opportunity to tell us what they think makes a great school, only 2% said less reliance on standardized tests, and that poll result is shown here. By the way, you can get copies of the actual survey that we did earlier this year if you're interested; we'll make that available. But it really underscores the fact that a very small percentage of parents, when asked to identify the areas of education that are most important to them, said less reliance on testing. What we found was that Black and Latino parents want their children to have great teachers who are challenging them to meet their potential. They want their children held to high expectations and they expect that the content they are learning is as rigorous as what white students are getting. And we found that they are well aware that these things are not happening—that their kids attend schools that receive less funding, fewer resources, and less-experienced teachers.
In focus groups—and that's the next slide, by the way—is my friend here to do that next slide? Don't worry about it. We have one other slide, which shows the focus group result. In focus groups we did last year, as a part of our Common Core State Standards work, we found that Black and Latino parents think testing is vitally important to helping them understand how well their children are doing. Oh, thanks my friend. I just had that one other slide. I appreciate it very much.
Speaker: Wade Henderson - We're not going to worry about it. Forget it. No, don't worry about it. It's okay. It's just another slide that came from—not as if you haven't seen a lot of them, right? We have one other slide that touches on focus group activity. Let's not sweat it. Okay.
As I said, in focus groups we did last year, as part of our Common Core State Standards work, we found that Black and Latino parents think testing is vitally important to helping them understand how well their children are doing. The concern that they have was more about not necessarily understanding the results from these tests, not being sure how well the tests measure their children's knowledge of the content, and not knowing exactly how they would inform teaching their child would receive.
The Opt Out movement poses a real threat to the work that we have been doing to ensure educational equity for all students. One, while the Opt Out conversations have been framed in terms of parental choice, we are worried that schools and districts will resort to being the ones making the decisions: there is an incentive to hide the results for historically marginalized students, and Opt Out provides the perfect cover to do just that. Second, the data is important because it reveals inequities. Without it, we have no way to advocate for the changes that we believe will improve outcomes. Now, it's true that we've had this data for decades and have failed to do what's necessary to rectify the problem, but that's a failure of public and political will, not of the data itself. Given the disproportionate opting out of white and wealthier parents from the tests, we won't have the comparative data that we need to argue for the resources and the great teaching that all children deserve.
From our perspective as civil rights advocates, we have an opportunity with regard to communities of color that we simply can't afford to ignore. We have to help them better understand the value of the data that they are getting and what they can do with it to get the things that they want and need for their children. Right now, we know that there is support for statewide annual assessments within these communities. That support is not inevitable and should not be taken for granted. Those of us who believe that tests can be a tool to make education better and fairer need to really make sure that that happens. This isn't to say that communities of color are necessarily more receptive than any others to the Opt Out messaging; right now that messaging is so disconnected from their concerns that they really don't see the relationship, but that may not last forever. If they come to think that the civil rights data is being used as cover, they're going to have a real problem.
Our job must be, in my view, what it has always been, and that is to empower communities of color, people with disabilities, and low-income people regardless of race—to empower them to fight for educational equity that will improve the education of their children and to build the political will it will take to make equity a reality.
As I said at the outset, you can't change—you can't fix what you can't measure. The data we are seeking, which is now required by federal law and, in many instances, by state law, is absolutely essential to measuring the progress that our students make in the educational system. Without that data, we shortchange them, we shortchange our ability to make empirical judgments, and, at the end of the day, it's a self-defeating effort to move away from the kind of information that informed parents, I would think, would basically want.
With that I will stop. I will say—Randy, I thought you did a great job. I thought the data was extremely helpful and very compelling. I liked the narrative that you pulled together to explain the role of New York in distorting the debate on Opt Out. I thought it was terrific and it turned out to be accurate. If you weren't a terrific researcher, you'd be a great forensic analyst. [Laughter.] We'll put you there for that. Thank you ladies and gentlemen. I appreciate it.
End of Opt Out: An Examination of Issues video.
Video duration: 1:08:27