Collins, A. A., Lindström, E. R., & Compton, D. L. (2018). Comparing students with and without reading difficulties on reading comprehension assessments: A meta-analysis. Journal of Learning Disabilities, 51, 108–123.
Assessing reading comprehension is a complex undertaking, in part due to the multiple cognitive processes required to make meaning from text and in part due to the difficulty of measuring the mix of observable and unobservable processes involved. Many different assessments have been developed, but recent research has shown that the correlation between them is not as large as would be expected for measures of the same construct. As a result, researchers have explored factors that might explain why these measures are not correlated more strongly. Two such factors are the types of items used to assess reading comprehension across different assessments and key differences in the characteristics of the students being assessed. Different item formats may tap slightly different sets of cognitive abilities. Differences in reader characteristics may contribute to variation in the gap between the average scores of typical readers and those with reading difficulties on different assessments.
Collins, Lindström, and Compton (2018) synthesized research comparing the performance of students with reading difficulties and typical readers on reading comprehension assessments to determine the impact of item response format and reader characteristics on the difference in scores between students who do and do not struggle with reading.
Among the differences between reading comprehension assessments that might explain the lower-than-expected correlations between measures, the format of the items used in each test is thought to be especially important. Collins et al. (2018) examined the following six most common item formats:
Collins et al. (2018) noted that previous research has found that the average difference in reading comprehension scores between struggling readers and typical readers varies across tests. They suggested that differences in the skills required to respond correctly to different types of items are at least partially responsible for variations in this achievement gap. Students with reading difficulties have been shown to perform more like typically developing peers on items requiring sentence-level comprehension, such as cloze tasks, than on those that require more complex cognitive processing, such as open-ended items.
In addition, other features of an assessment may contribute to variation in the size of the achievement gap. For example, some assessments have time limits (requiring faster cognitive processing). Some assessments require students to read a passage silently before responding to comprehension questions, and other assessments require oral reading. These differences may alter the comprehension task enough to change the magnitude of the gap in scores between students with and without reading difficulties. It is important to determine whether test characteristics contribute meaningfully to the difference in performance between typical and struggling readers because of the role that reading comprehension scores often play in determining whether a student has a reading disability and in evaluating the efficacy of interventions for students with reading difficulties.
The primary purpose of Collins et al.’s (2018) study was to determine whether the reading comprehension achievement gap between students with and without reading problems varies depending on the format of the items on the assessment. They also explored differences in this gap based on other test features (e.g., text genre, timed vs. untimed, standardized vs. unstandardized) and student characteristics (e.g., grade level, how students were identified as having reading difficulties).
To address their research questions, Collins et al. conducted a meta-analysis, which is a method for synthesizing the results of many studies within a topic area. Often referred to as a “study of studies,” a meta-analysis takes a systematic, comprehensive approach to combining data from a set of prior studies to determine what the research literature at large says about a particular issue.
In conducting their meta-analysis, Collins et al. (2018) included 82 studies conducted between 1975 and 2014. All studies met the following criteria:
Across the 82 studies that met these criteria, results from more than 5,000 students with reading difficulties were compared to nearly 6,500 typically developing students.
Collins et al. (2018) quantified the size of the achievement gap between typical and struggling readers in each study using the Hedges' g effect size statistic. Hedges' g represents the number of standard deviation units that separate the two groups. It provides a common metric that can be averaged across the set of studies in a meta-analysis to determine whether factors such as item format, test features, and student characteristics are associated with larger or smaller gaps between students with reading difficulties and typically developing students. This average gives more weight to larger studies because they typically produce more precise results. In reporting the average effect size in a meta-analysis, the 95% confidence interval is also reported to give the range of values that likely includes the true effect size.
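The computation described above can be sketched briefly. The code below is a minimal illustration (not the authors' actual analysis) of how Hedges' g and a precision-weighted average effect size with a 95% confidence interval might be computed; the study summaries shown are hypothetical numbers invented for the example.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample correction.
    Negative values mean group 1 (e.g., students with reading
    difficulties) scored below group 2 (typical readers)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' small-sample correction
    return j * d

def g_variance(g, n1, n2):
    """Approximate sampling variance of Hedges' g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

def weighted_mean_effect(effects):
    """Inverse-variance weighted average with a 95% confidence interval.
    Larger studies get more weight because their variance is smaller."""
    weights = [1 / v for _, v in effects]
    mean = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical study summaries: (mean, SD, n) for each group
studies = [
    ((92.0, 14.0, 30), (101.0, 15.0, 35)),
    ((88.0, 13.0, 60), (100.0, 14.0, 55)),
]
effects = []
for (m1, s1, n1), (m2, s2, n2) in studies:
    g = hedges_g(m1, s1, n1, m2, s2, n2)
    effects.append((g, g_variance(g, n1, n2)))
mean_g, ci = weighted_mean_effect(effects)
print(f"average g = {mean_g:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

In this sketch, a negative average g (with a confidence interval that excludes zero) would correspond to a statistically significant gap favoring typical readers, which is how the results reported below should be read.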
The results of the meta-analysis indicated that students with reading difficulties scored lower on average than typically developing students on all reading comprehension item types (meaning that the effect sizes were negative). The difference was statistically significant for all item formats except for sentence verification. The average number of standard deviations separating the two groups of students differed by item response format. The authors found the following average effect sizes by item format (the 95% confidence intervals are listed in parentheses):
For two item types, other characteristics of the assessment were associated with differences in the magnitude of the effect size for the difference between struggling and typical readers. Multiple-choice assessments with time limits showed significantly smaller differences between groups, and tests in which the passage was removed before the students answered the questions resulted in significantly larger differences. For open-ended items, reading comprehension tests showed significantly larger gaps between groups if the tests used only expository texts, were administered in a group setting, and were administered by researchers. However, tests with open-ended items were associated with significantly smaller gaps when the tests included items that gradually increased in difficulty and used basal and ceiling rules for administration and scoring. No student characteristics were associated with significant differences in the magnitude of the effect size for the difference between struggling and typical readers (including grade level, how students had been identified as having reading difficulties, and whether they had been identified as having learning disabilities or reading disabilities).
Results of the meta-analysis shed additional light on the nature of the reading comprehension deficits seen in students with reading difficulties. In particular, these students performed closer to average on retell, sentence-verification, and cloze assessments and further below average on open-ended, multiple-choice, and picture-selection assessments. According to Collins et al. (2018), this finding indicates that reading comprehension item types that tap into skills such as decoding and sentence-level comprehension seem to be less difficult for struggling readers than those requiring higher-level cognitive processing. Further, the authors indicated that their results show that students with reading difficulties have specific deficits in constructing complete and accurate mental models of the meaning of a text, which is reflected in poorer performance on certain item types.
Protocols for identifying students who have reading difficulties must account for the finding that the size of the achievement gap between typical and struggling readers depends on the format of the items used to assess reading comprehension. Given the variation found across different assessment formats, using multiple measures of reading comprehension merits consideration. Assessing a student for a reading disability with items that measure sentence-level comprehension (i.e., cloze and sentence verification) and items that measure the ability to form more complex mental representations of text, such as open-ended items, would inform educators about the nature and extent of the student’s reading difficulty. Additionally, doing so would help educators make the best determination about the student’s need for special education services.
The findings of this meta-analysis support those of other researchers who reported lower-than-expected correlations between different measures of reading comprehension. As a result, it appears that different item formats measure either somewhat different reading constructs or different aspects of the construct of reading comprehension. Therefore, those who develop reading comprehension assessments (whether for classroom, state accountability, or research purposes) should consider the impact of item format and make careful choices about the type or types of items to include. Further research is needed to gain additional insight into the constructs measured by existing reading comprehension assessments. Such research should involve both students with reading difficulties and typical readers to shed additional light on the differences between these groups in their performance on reading comprehension assessments.