Reed, D. K., Aloe, A. M., Reeger, A. J., & Folsom, J. S. (2019). Defining summer gain among elementary students with or at risk for reading disabilities. Exceptional Children, 85(4). doi:10.1177/0014402918819426
Summer reading programs have been prevalent in the United States for a long time; however, there is limited high-quality research examining the efficacy of these programs. Few summer reading intervention studies have used random assignment (Kim & Quinn, 2013), a key design feature of high-quality efficacy studies. Additionally, previous summer reading intervention studies have experienced challenges related to sample bias. In school-based research, a sampling method is considered biased if it systematically favors some students for participation over others. When sample bias is present, the results for the specific sample may not generalize to the larger population. Because summer reading programs are voluntary, it is likely that students who participate in summer reading interventions are different from those who do not. In this case, we may overestimate or underestimate the effects of a summer reading intervention for the general population because we have tested the effects of the intervention with a sample that is not representative of the broader population. Reed and colleagues (2019) sought to fill the gap in high-quality research on summer reading interventions by using an advanced analytical technique called propensity score matching to study the effects of a summer reading intervention for K–4 students with or at risk for reading disabilities. Propensity score matching attempts to control for sample bias by controlling for differences between the treatment and control groups at pretest. In this way, the approach attempts to create equivalent groups at pretest.
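To make the logic of propensity score matching concrete, the sketch below simulates the kind of selection bias described above (students with lower pretest scores are more likely to participate), estimates each student's probability of participation from observed covariates, and then pairs each participant with the non-participant whose estimated probability is closest. All data, variable names, and the greedy nearest-neighbor matching rule are hypothetical illustrations; they are not the authors' actual procedure, which used additional covariates and weighted mixed models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a pretest reading score and a disability indicator
# for 200 eligible students.
n = 200
X = np.column_stack([rng.normal(50, 10, n), rng.integers(0, 2, n)])
# Simulated selection bias: participation is more likely for students
# with lower pretest scores.
logits = 1.5 - 0.05 * X[:, 0] + 0.5 * X[:, 1]
treated = rng.random(n) < 1 / (1 + np.exp(-logits))

def propensity_scores(X, t, lr=0.01, steps=5000):
    """Estimate P(participation | covariates) with a simple logistic
    regression fit by gradient descent on standardized features."""
    Xb = np.column_stack([np.ones(len(X)), (X - X.mean(0)) / X.std(0)])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - t) / len(t)
    return 1 / (1 + np.exp(-Xb @ w))

ps = propensity_scores(X, treated.astype(float))

def greedy_match(ps, treated):
    """Pair each treated student with the unmatched control student
    whose propensity score is closest (greedy 1:1 matching)."""
    controls = list(np.flatnonzero(~treated))
    pairs = []
    for i in np.flatnonzero(treated):
        j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
        controls.remove(j)
        pairs.append((i, j))
    return pairs

pairs = greedy_match(ps, treated)
t_idx = [i for i, _ in pairs]
c_idx = [j for _, j in pairs]
# After matching, the pretest gap between groups should shrink,
# approximating equivalent groups at pretest.
print(f"pre-match gap:  {X[treated, 0].mean() - X[~treated, 0].mean():+.2f}")
print(f"post-match gap: {X[t_idx, 0].mean() - X[c_idx, 0].mean():+.2f}")
```

The key design point is that matching can only balance the covariates that are measured; as the study's limitations section notes, unmeasured differences between volunteering and non-volunteering families can persist.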
Reed and colleagues (2019) sought to answer two research questions: (1) Did students who participated in the summer reading program make gains from pretest to posttest? and (2) How did participants' reading outcomes compare with those of a propensity score matched control group?
Reed and colleagues (2019) examined the effects of a summer reading program for K–4 students identified with or at risk for reading disabilities in one Midwestern school district. Students were identified and considered eligible for the summer program based on their oral reading fluency and accuracy scores on the FastBridge assessment, which was given three times a year. Administrators at participating schools were given a list of the 1,316 students deemed eligible for the summer reading program based on FastBridge scores. From that list, the schools identified 769 students who would most benefit from the summer program. The final treatment group reflected those families who agreed to participate and included 470 K–4 students. Between 50% and 75% of the participants in each grade level were identified with a disability. Eligible students who did not participate in the summer reading program formed the pool for the propensity score matched control condition.
The summer reading program took place over 28 days between June 19 and August 10, with a week-long break over the Fourth of July holiday. Students received reading instruction for 3 hours per day, for a total of 84 hours of instruction. Daily instruction was built around three primary components: a whole-group reading lesson with the program Wonders (McGraw-Hill Education, 2017a), a whole-group language arts lesson with Wonders, and two separate small-group differentiated lessons with WonderWorks (McGraw-Hill Education, 2017b). The small-group lessons targeted students' particular areas of weakness (e.g., phonics, close reading skills). Targeted skills were identified using pretest results and curriculum-embedded measures available through the WonderWorks program.
Students were administered the Reading Assessment for Prescriptive Instructional Data (RAPID) test as a pre- and posttest measure. The assessment was administered approximately three weeks before the start of the summer program and again approximately three weeks after the end of the program. The RAPID test provides students with an overall reading score (the Reading Success Probability; RSP), as well as scores on individual components of reading.
In answering the first research question, results showed growth from pretest to posttest overall but not across all RAPID subscales. Students in kindergarten, first grade, second grade, and fourth grade appeared to benefit more from the summer reading program than students in third grade; overall, students in second grade appeared to benefit the most. Although not all gains were statistically significant, all subscales showed improvement for kindergartners and first graders. For fourth graders, the largest effect sizes were observed in reading comprehension and overall reading performance. Third-grade participants experienced the least reading growth, with negative average effect sizes on three of the five measures, although none were statistically significant. These findings should be interpreted with caution because they evaluate gains from pretest to posttest within a single group. Single-group analyses provide minimal control for threats to internal and external validity.
The second research question addressed how students in the treatment and control groups compared, using partially clustered weighted linear mixed models. This question is key to understanding whether the summer reading intervention itself was the reason that participating students made gains. Due to a measurement issue, kindergarten was the only grade in which the authors were unable to compare treatment and control students. Although students in grades 1 to 4 mostly outperformed their peers in the control group on individual reading components, most effect sizes were small or near zero, and the only significant difference in overall reading improvement favoring the treatment condition was found for students in grade 1. The authors interpreted this finding as further support for the importance of early intervention for students with reading difficulties. Of note, some students in the control condition (who did not receive the summer reading intervention) showed significant improvements, which contributed to the null between-group findings. This finding suggests that summer loss was not present for many students in the control group. In fact, control students in grade 3 actually outperformed their treatment group peers who attended the summer reading intervention.
Although this study has many features of high-quality research, there are a few important limitations worth discussing. For one, conclusions about the efficacy of summer reading interventions are limited by low fidelity in teachers' implementation of the core lessons. This low fidelity is thought to stem from the design of the Wonders and WonderWorks programs, which were intended as 90-minute literacy blocks spread over a regular 9-month school year but were compressed into a 3-hour block over a 28-day period. Additionally, although propensity score matching is a valuable technique for controlling for sample bias, it remains possible that there were unmeasured differences between the students whose families decided to participate in the summer reading program (treatment) and those who did not (control) that could have influenced the study findings. Lastly,
as is common in other studies of summer programs, treatment group attrition was high across all grade levels (K = 20.0%, 1 = 20.4%, 2 = 21.4%, 3 = 22.2%, and 4 = 26.1%). Although analyses indicated no significant differences in demographic variables between the final treatment sample and the initial sample before attrition, attrition nonetheless reflects a threat to the validity of the study.
What does this study tell us about the effects of the summer school program under study? Students who received the summer reading intervention, on average, showed gains in reading. However, most observed gains were small or close to zero and were not statistically significant when compared with the control group. Given previous research on summer loss, one might wonder why there were not significant differences between students who received a 6-week summer reading intervention and those who did not. These findings may indicate that students with reading difficulties, including a large proportion of students with disabilities, require more intensive summer reading interventions to significantly outperform their untreated peers. Additionally, the reading achievement results for the untreated control condition showed that these students, on average, did not demonstrate significant summer loss and, in some cases, showed substantial growth over the summer. Lastly, the results reinforce prior research finding that younger students are most responsive to reading interventions, as students in grade 1 were the only students who demonstrated a statistically significant gain in overall reading improvement relative to the control condition.