Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: National Center for Education Evaluation and Regional Assistance.
This report summarizes an evaluation of school-based implementation of response to intervention (RTI) service delivery models by elementary schools (grades 1–3) for reading in 2011–2012. MDRC, as subcontractors to the National Center for Education Evaluation, conducted this study. The study had two samples. The first was a reference sample of representative schools in 13 states to evaluate the prevalence of RTI practices and outcomes. This sample was used to evaluate service intensity and whether more services were provided to poorer readers. The second was an impact sample, representing 146 schools that self-reported implementing an RTI framework for 3 or more years. With this impact sample, the researchers estimated the effectiveness of reading interventions for readers scoring slightly below these schools' eligibility threshold for reading intervention. For this analysis, the researchers used a regression discontinuity evaluation design. Regression discontinuity designs depend upon a threshold level (cut point) on some test that determines group assignment. In this study, students who scored just below the school-specified cut point for intervention services were considered to meet criteria for the intervention group and were compared to students who scored just above the cut point and were therefore considered the "no intervention" comparison group. The key evaluation question is whether the group scoring just below the eligibility threshold shows higher reading skills following intervention than students scoring just above the threshold value. In this study, the threshold was the reading score that qualified the child for Tier 2 services at his or her school. Because students who are just above these thresholds are usually very similar to children who score just below the thresholds, students in the intervention group should perform better following intervention than those in the "no intervention" comparison group if the intervention is indeed effective.
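The logic of a regression discontinuity design can be illustrated with a small simulation. The sketch below is purely hypothetical: all numbers (the cut point, the assumed treatment effect, the bandwidth) are invented for demonstration and do not come from the evaluation report. It shows how, when assignment to intervention is determined sharply by a screening score, the difference in outcomes between students just below and just above the cut point estimates the intervention's effect at the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sharp regression discontinuity: students scoring below a
# screening cut point receive intervention; all values are invented.
n = 2000
cut = 50.0
screen = rng.uniform(30, 70, n)      # screening scores
treated = screen < cut               # below the cut point -> intervention
true_effect = 5.0                    # assumed (simulated) intervention effect

# Outcome varies smoothly with the screening score, plus the treatment
# effect for students who received intervention, plus noise.
outcome = 0.8 * screen + true_effect * treated + rng.normal(0, 2.0, n)

# Local linear estimate: fit separate lines within a bandwidth on each
# side of the cut point and compare their intercepts at the cut.
bw = 5.0

def intercept_at_cut(mask):
    x = screen[mask] - cut
    y = outcome[mask]
    slope, intercept = np.polyfit(x, y, 1)  # degree-1 fit: slope first
    return intercept

left = intercept_at_cut((screen >= cut - bw) & (screen < cut))   # intervention side
right = intercept_at_cut((screen >= cut) & (screen <= cut + bw)) # comparison side
rd_estimate = left - right  # estimated impact at the threshold
print(round(rd_estimate, 1))
```

Because students on either side of the cut point are nearly identical apart from treatment status, the jump in the fitted lines at the threshold recovers the simulated effect; in the actual study, the absence of any such jump (and a small negative jump in grade 1) is the null finding at issue.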
The main findings were that (1) the prevalence of self-identified "full" implementation of an RTI framework for early reading was higher (86%) in the impact sample than in the reference sample (56%); (2) schools in the impact sample adjusted the delivery of reading programs and were more likely to serve students who met eligibility criteria than students who did not meet eligibility criteria; and (3) students who met eligibility criteria did not show improvement in reading skills and in grade 1 actually showed small, but negative impacts of intervention.
Not surprisingly, the third finding created considerable controversy in educational reporting publications and blogs, with many of these sources producing alarming headlines and interpreting the results as indicating that RTI frameworks are not efficacious and are potentially harmful—at least in first grade. Because this is a clear misinterpretation of the findings of the evaluation study, the remainder of this commentary will be devoted to explaining what the study actually showed.
The study did not show that RTI practices were ineffective. An Institute of Education Sciences guidance document on RTI (Gersten et al., 2009) reports that implementation of RTI in controlled settings with guidance from individuals with expertise external to the school (e.g., researchers) was effective for improving reading skills, especially in grade 1. This guidance document built upon previous research and concluded that "well-designed and closely monitored small-group reading interventions could be beneficial to early-grade readers in terms of improving their specific reading skills" (p. 2). The more recent evaluation report is really about the implementation of an RTI framework for early reading skills by schools that were not supported and that self-identified as having full implementation of an RTI framework. In fact, the schools may not have implemented effective RTI frameworks, suggesting that (1) implementation of RTI without expert assistance is difficult for many schools and (2) self-report of implementation practices is likely an unreliable mechanism for the identification of high-quality implementation. Thus, the results should not be interpreted as an efficacy or effectiveness study (i.e., a study of the impact of RTI on student outcomes); rather, it is at best an implementation study demonstrating that schools may have difficulty implementing an RTI framework without some form of support. However, because the study relied so much on self-report, it is not clear that it is even a strong implementation study. Regardless, finding that schools have difficulty implementing an RTI framework is sobering but not surprising because RTI frameworks require significant changes in how service delivery is conceptualized and implemented in schools, with an accompanying shift in resource allocation (Fletcher & Vaughn, 2009).
Some defenders of RTI frameworks are quick to identify the study design as a problem. In fact, a regression discontinuity design is a strong approach to evaluation and has the added advantage of allowing researchers to operate in natural environments where services are provided for all students in need—not as in designs based on random assignment where some students receive the targeted intervention and others do not (i.e., a randomized controlled trial). If the study is well designed and the assumptions underlying the design are met, the strength of the inferences that can be drawn from regression discontinuity designs is expected to be comparable to that from a randomized controlled trial. In this study, however, because so many students in the comparison group reportedly received intervention, the design assumptions were not met.
The researchers intentionally conducted the evaluation in the impact sample of schools reporting that they had an RTI framework for beginning reading and had implemented it without external support. Thus, the researchers had no control over the nature or quality of the reading interventions or the RTI framework. Although the researchers collected data on reading skills, they did not determine how students were screened and deemed eligible for reading intervention. For example, it is well known that progress monitoring using timed assessments of text reading is reliable and valid for determining who is at risk for reading difficulties, determining the level of intensity of service delivery, and moving students across the tiers (Kovaleski, VanDerHeyden, & Shapiro, 2013). Roughly half the schools in the impact sample reported using this type of progress monitoring. However, many other schools in the study reported use of less reliable methods such as error and miscue analysis through running records or other methods that do not have established reliability or validity.
As the researchers acknowledged, the design focused on only students at the school-designated threshold for service delivery and did not focus on students reading significantly below grade level—students we might think of as requiring intensive interventions. Nonetheless, if the schools' implementation of the RTI framework was effective and the self-reports were reliable, there should have been a shift in reading achievement following intervention reflecting improvement in the reading levels of the students in the intervention group. No such improvement was observed. However, the researchers did not evaluate or observe implementation of the RTI framework or the reading interventions provided; they administered surveys and relied on the schools' report. Thus, the study cannot resolve questions about the efficacy of RTI frameworks implemented in a manner consistent with best practices.
When the implementation data are examined, potential explanations for the null/negative results emerge. About 40% of the students qualified for Tier 2 instruction, which implies either that the core reading program was not strong or that the eligibility threshold was not appropriate. Slightly more than half of the students (59%) in the treatment group received adjusted reading services in Tier 1, which is general classroom instruction; relatively few received Tier 2 or Tier 3 instruction. This may have occurred because the number of students below the threshold was more than the schools could accommodate with Tier 2 intervention. However, almost half the schools provided intervention to all students regardless of the threshold, and almost two-thirds provided intervention during Tier 1. Thus, many schools violated the most important element of an RTI framework for providing intervention to students with reading difficulties: Supplement, don't supplant, the Tier 1 reading instruction (Fletcher & Vaughn, 2009; Kovaleski et al., 2013). The schools did not extend time in reading by providing a supplemental intervention (e.g., Tier 2).
In many respects, schools were not using their progress-monitoring data to determine who did and did not need additional and more differentiated instruction. The amount of extra instructional time provided to students reading below the schools' eligibility cut point was not statistically significant and averaged about 6 minutes. When students were assigned to Tier 2 or Tier 3, the impact on reading skills was small and generally negative. Group sizes were reportedly smaller for students reading below the eligibility cut point, but only by approximately one student compared to students reading above the threshold. Except for additional phonics instruction, there was little evidence that instruction was differentiated according to students' instructional needs. It was not apparent that the teachers delivering services had specialized training in reading intervention, and most schools relied on classroom teachers to deliver intervention. These practices differ from the intervention practices associated with improved outcomes in most systematic implementations of Tier 2 and Tier 3 intervention studies for students with reading difficulties. In these studies, trained interventionists provide small-group instruction that is differentiated and supplements Tier 1 instruction.
As the report acknowledges, these findings generalize only to students close to the threshold—not to students reading far below grade level who typically would be the participants in Tier 2 and Tier 3 services. The researchers were not able to identify explanations involving school composition, instruction, or other characteristics of schools often related to poor reading achievement. As unexplored factors, they suggested "(1) false or incorrect identification of students for intervention, (2) mismatch between reading intervention and the instructional needs of students near the cut point, and (3) poor alignment between reading intervention and core reading instruction" (p. 17). If any of these problems were present, it is difficult to argue that an effective RTI framework was implemented. Even with these limitations, the small but negative outcomes in grade 1 are surprising; no obvious explanation is apparent for this finding, which is in the opposite direction of the Institute of Education Sciences guidance findings (Gersten et al., 2009).
This study does not show that RTI frameworks are ineffective for improving reading in grades 1–3. It does show that many schools implement RTI inadequately, omitting critical elements of the framework: screening and progress monitoring using reliable and valid measures, research-based Tier 1 instruction, increasingly intensive tiers of intervention based on students' needs that supplement the core instruction, and highly qualified personnel to provide instruction and intervention. Successful implementation requires professional development, data-based decision-making, and changes in resource allocation away from traditional silos and toward more integrated use of resources that prioritize core instruction and reliable and valid data systems for screening and monitoring student progress. These findings should raise concerns about implementing RTI methods without external support from districts and states and without the shifting of resources needed to enhance core instruction and make effective supplemental instruction available. The fact that controlled studies of RTI show efficacy is promising, but RTI is not a short-term implementation that can be done out of a kit or box. The scaling issues remain significant and need to be taken seriously by administrators and policymakers. RTI is ultimately about increasing intensity. This study provides another example that slight increases in intensity and differentiation of instruction don't make much difference for children reading above or below grade level.
Setting aside issues specific to RTI, this study adds to the evidence that despite large federal efforts like Reading First, we have not scaled what we know from research. In the recent reauthorization of the Elementary and Secondary Education Act, the Every Student Succeeds Act, Congress authorized funds for reading intervention, but at a markedly reduced level compared to the previous authorization that created No Child Left Behind and Reading First. In addition, Congress did not prioritize funding where there is the most evidence that intervention would make the most difference: kindergarten and grades 1–2. Rather, Congress spread the funding across all grades (including preschool), ignoring considerable research documenting that early reading intervention is critical because it is very difficult to intervene and close achievement gaps after grade 3. Early access to print is critical for development of the neural systems that emerge through instruction to mediate reading (Dehaene, Cohen, Morais, & Kolinsky, 2015). Absent this early access to print, the child does not have sufficient exposure to text to develop automaticity of sight word reading and also loses access to reading as a way of building vocabulary and background knowledge, which are essential to reading comprehension. Policy must embrace the importance of early reading on a long-term basis and support teachers and schools as we try to improve beginning reading instruction.
Dehaene, S., Cohen, L., Morais, J., & Kolinsky, R. (2015). Illiterate to literate: Behavioural and cerebral changes induced by reading acquisition. Nature Reviews Neuroscience, 16, 234–244.
Fletcher, J. M., & Vaughn, S. (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3, 30–37.
Gersten, R. M., Compton, D. L., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., & Tilly, W. D. (2009). Assisting students struggling with reading: Response to intervention and multi-tier intervention in the primary grades. Washington, DC: National Center for Education Evaluation and Regional Assistance. Retrieved from http://ies.ed.gov/ncee/wwc/PracticeGuide.aspx?sid=3
Kovaleski, J. J., VanDerHeyden, A. M., & Shapiro, E. S. (2013). The RTI approach to evaluating learning disabilities. New York, NY: Guilford Press.