
Validated Forms of Progress Monitoring in Reading and Mathematics


As discussed in our companion piece on progress monitoring in a multilevel prevention system, research on progress monitoring has been conducted for many years, and validated forms of progress monitoring have been developed over that span. This article discusses one of these forms, curriculum-based measurement (CBM), and presents background on CBM-based progress-monitoring measures in reading in the elementary grades and in mathematics.

Curriculum-Based Measurement (CBM)

 

Progress-monitoring tools must provide schools with reliable and valid indicators of academic competence with which to index student improvement across time (Deno, 1985). Also, because frequent data collection is required, progress-monitoring tools must be efficient: the data must be easy to collect without excessive demands on instructional time. CBM is the approach to progress monitoring with by far the largest research base; it has been validated in many studies conducted by many different researchers. CBM differs from most approaches to classroom assessment in two important ways (Fuchs & Deno, 1991). First, CBM is standardized: the behaviors to be measured and the procedures for measuring them are prescribed and have been shown to be reliable and valid. Second, with CBM, each weekly test is of equivalent difficulty and represents what the teacher wants the student to be able to do well at the end of the year.

 

To illustrate how CBM is used, let's say that a teacher sets competent second-grade performance as a student's year-end reading goal. The teacher identifies enough passages of equivalent, second-grade difficulty to conduct weekly assessments across the school year. Each week, the teacher administers one test by having the student read aloud from a different passage for 1 minute; the score is the number of words read correctly. Each simple, brief assessment produces an indicator of overall reading competence (Fuchs, Fuchs, Hosp, & Jenkins, 2001).

 

Because each CBM test at a given grade level is of equivalent difficulty, a teacher can graph a student's weekly scores and directly compare test scores collected at different times during the year. Also, a line of best fit can be drawn through the scores to show the rate of reading improvement. Studies show that CBM progress monitoring enhances teachers' capacity to plan programs for, and produce better achievement among, students with learning disabilities. The methods by which CBM informs instructional planning rely on the graphed scores. If the line of best fit through a student's CBM data points suggests strong improvement, the teacher increases the student's goal for year-end performance; if not, the teacher revises the instructional program. Research shows that with CBM, teachers design more varied instructional programs that are more responsive to individual needs (Fuchs, Fuchs, & Hamlett, 1989b), that incorporate more ambitious student goals (Fuchs, Fuchs, & Hamlett, 1989a), and that result in stronger end-of-year scores on commercial, standardized reading tests (e.g., Fuchs, Deno, & Mirkin, 1984), including high-stakes state tests.
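To make the graphing and decision process concrete, here is a minimal sketch in Python of how weekly scores could be summarized with a line of best fit. The scores, the year-end goal, and the decision threshold are hypothetical values chosen for illustration, not published standards.

```python
import numpy as np

# Hypothetical weekly CBM scores (words read correctly in 1 minute),
# one score per week of progress monitoring.
weeks = np.arange(1, 11)
words_correct = np.array([42, 45, 44, 48, 51, 50, 55, 57, 58, 61])

# Line of best fit through the data points; the slope is the rate of
# improvement in words read correctly per week.
slope, intercept = np.polyfit(weeks, words_correct, 1)
print(f"Rate of improvement: {slope:.2f} words correct per week")

# Illustrative decision rule (assumed, not a published standard): if the
# observed slope exceeds the slope needed to reach the year-end goal,
# raise the goal; otherwise revise the instructional program.
GOAL = 90          # hypothetical year-end goal (words correct per minute)
WEEKS_LEFT = 20    # hypothetical number of weeks remaining in the year
needed_slope = (GOAL - words_correct[-1]) / WEEKS_LEFT
if slope > needed_slope:
    print("Growth outpaces the goal: raise the year-end goal.")
else:
    print("Growth falls short of the goal: revise the instructional program.")
```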

 

Progress-Monitoring Measures in Reading


Kindergarten

 

At kindergarten, the major options for CBM reading measures are phoneme segmentation fluency, rapid letter naming, and letter-sound fluency. With phoneme segmentation fluency, the tester says a word and the student says the sounds that constitute the word. The examiner presents as many words within 1 minute as the rate of the child's response permits. With rapid letter naming, the examiner presents a page of lower- and uppercase letters randomly ordered; the student says as many letter names as he or she can in 1 minute. With letter-sound fluency, the examiner also presents a page with lower- and uppercase letters randomly ordered; this time, however, the student says sounds for 1 minute. Compared to phoneme segmentation fluency, rapid letter naming and letter-sound fluency are easier for teachers to learn to administer, and accuracy of test administration therefore tends to be higher. On the other hand, compared to rapid letter naming, phoneme segmentation fluency and letter-sound fluency provide better targets for instruction, because they relate more clearly to what children need to master when learning to read. For this reason, phoneme segmentation fluency and letter-sound fluency may guide the kindergarten teacher's instructional behavior more effectively (although such studies have not been conducted).

 

First Grade

 

At first grade, two approaches to CBM in reading have been studied. With one approach, students begin the year on nonsense word fluency and switch to passage reading fluency in January. With nonsense word fluency, students are presented with a page of consonant–vowel–consonant (and some vowel–consonant) nonwords (like bap) and have 1 minute to read as many as they can. With passage reading fluency, students are presented with grade-level text (each alternate form is a passage of roughly equivalent, first-grade difficulty) and read aloud for 1 minute. With the second, alternative approach, schools use the same progress-monitoring measure, word identification fluency, across all of first grade. With word identification fluency, students are presented with a page showing 50 high-frequency words (each alternate test samples words from a list of 100 words and presents the 50 in random order); students read as many words as they can in 1 minute. The advantage of nonsense word fluency is that it may help teachers determine which sounds students do and do not know. The downside of the nonsense word fluency/passage reading fluency combination is that scores from the first half of the year cannot be compared with scores from the second half. By contrast, word identification fluency can be used with strong reliability, validity, and instructional utility across the entire first-grade year (Fuchs, Fuchs, & Compton, 2004).
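As a rough sketch of how alternate word identification fluency forms could be generated, the following Python fragment samples 50 items from a 100-word pool and presents them in random order each week; the placeholder tokens stand in for the actual high-frequency word list, which is specified by the measure's developers.

```python
import random

# Placeholder tokens standing in for the 100 high-frequency words the
# measure samples from; the real list comes from the measure's developers.
word_pool = [f"word{i:03d}" for i in range(100)]

def make_alternate_form(pool, n_items=50, seed=None):
    """Sample n_items words from the pool in random order, producing one
    alternate form of the word identification fluency probe."""
    rng = random.Random(seed)
    return rng.sample(pool, n_items)

# Two alternate forms for two different weeks of progress monitoring.
week_1_form = make_alternate_form(word_pool, seed=1)
week_2_form = make_alternate_form(word_pool, seed=2)
```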

 

Grades 2–5

 

At grades 2 and 3, the CBM passage reading fluency measure provides the strongest source of information on reading development. Each week, one test is administered. The student reads a passage at his or her grade level; each week a different passage of equivalent difficulty is used. The student reads aloud for 1 minute; the tester counts the number of words read correctly. The reliability, validity, and instructional utility of this measure have been demonstrated repeatedly (see Fuchs & Fuchs, 1998, for a summary). At grades 4–5, however, some studies suggest that the validity of the CBM passage reading fluency task decreases (Espin, 2006). So, beginning at fourth or fifth grade, we recommend a different measure that taps some aspects of comprehension more directly: CBM maze fluency.

 

With CBM maze fluency, students are presented with a passage from which every seventh word has been deleted and replaced with three choices, only one of which makes sense. The student has 3 minutes to read the passage and select replacements, and the score is the number of correct replacements. Most recently, Espin (2006) provided evidence that maze fluency demonstrates strong reliability and validity and models reading development well, beginning at fourth grade.
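The maze construction and scoring rules lend themselves to a short sketch. The Python below deletes every seventh word and offers three choices (the original word plus two distractors); real maze tasks select distractors far more carefully, so the passage and distractor pool here are purely illustrative.

```python
import random

def build_maze(passage, distractor_pool, seed=0):
    """Replace every seventh word with three choices: the original word
    plus two distractors drawn naively from a pool (illustration only)."""
    rng = random.Random(seed)
    words = passage.split()
    items, answer_key = [], []
    for i, word in enumerate(words, start=1):
        if i % 7 == 0:
            distractors = rng.sample([d for d in distractor_pool if d != word], 2)
            choices = [word] + distractors
            rng.shuffle(choices)
            items.append("[" + "/".join(choices) + "]")
            answer_key.append(word)
        else:
            items.append(word)
    return " ".join(items), answer_key

def score_maze(responses, answer_key):
    """The score is the number of correct replacements made within the 3-minute limit."""
    return sum(r == a for r, a in zip(responses, answer_key))
```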

 

Progress-Monitoring Measures in Mathematics

 

With CBM, a teacher indexes a student's competence with the grade-level curriculum. A key challenge in the development of CBM has been to identify measurement tasks that simultaneously integrate the various skills required for competent year-end performance. Two approaches have been used. One involves identifying a task that correlates robustly (and better than potentially competing tasks) with the various component skills constituting the academic domain. For example, Deno (1985) first identified passage reading fluency (often termed oral reading fluency) as a key CBM task by showing that its correlations with valued criterion measures exceeded correlations for other possible CBM tasks. Conceptually, it makes sense that passage reading fluency is a robust indicator of overall reading competence. This approach is illustrated in the CBM reading tasks just described.
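A small numerical illustration of this first approach: the candidate task whose scores correlate more strongly with a valued criterion measure is the better overall indicator. The scores and task names below are assumptions made for demonstration, not data from the studies cited.

```python
import numpy as np

# Made-up scores for eight students on a criterion reading measure and on
# two candidate CBM tasks (illustration only, not data from any study).
criterion = np.array([31, 45, 52, 60, 64, 71, 80, 88])
passage_reading_fluency = np.array([40, 55, 58, 72, 75, 83, 95, 104])
competing_task = np.array([12, 30, 25, 41, 33, 50, 47, 62])

# Pearson correlations with the criterion; the task with the stronger
# correlation is the better candidate indicator of overall competence.
r_prf = np.corrcoef(criterion, passage_reading_fluency)[0, 1]
r_other = np.corrcoef(criterion, competing_task)[0, 1]
print(f"passage reading fluency r = {r_prf:.2f}, competing task r = {r_other:.2f}")
```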

 

There is, however, a second approach to designing CBM tasks. This second approach involves systematic sampling of the skills constituting the annual curriculum to ensure that each weekly CBM test represents the curriculum equivalently. Curricular-sampling approaches to CBM in reading exist. However, the most widely known and used curricular-sampling approaches are the math CBM systems developed in the 1980s by Fuchs and colleagues (e.g., Fuchs, Hamlett, & Fuchs, 1990). With these math computation and math concepts/applications CBM systems, each weekly test incorporates the same problem types in the same proportion: for computation, addition, subtraction, multiplication, and division of whole numbers and fractions; for concepts/applications, concepts, numeration, applied computation, word problems, geometry, money, and measurement. In either system, the total test score, which is the indicator of overall math competence in the annual curriculum, is graphed to depict slope (i.e., rate of learning). This second approach to identifying a CBM task also produces strong correlations with valued criterion measures. It offers the added benefit of informing instruction by providing descriptions of individual skill mastery, because each skill in the annual curriculum is systematically assessed on every weekly test (e.g., Fuchs, Fuchs, Hamlett, & Allinder, 1991; Fuchs, Fuchs, Hamlett, & Stecker, 1990).
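As a sketch of what curricular sampling could look like in code, the following Python assembles each weekly computation form from the same fixed number of items per problem type and summarizes item-level results as a per-skill mastery profile. The blueprint counts and skill names are hypothetical, not the proportions used in the published CBM systems.

```python
import random

# Hypothetical blueprint: the number of items of each problem type on every
# weekly form (the real proportions come from the annual curriculum).
BLUEPRINT = {
    "whole-number addition": 5,
    "whole-number subtraction": 5,
    "multiplication": 4,
    "division": 4,
    "fractions": 7,
}

def build_weekly_form(item_banks, blueprint=BLUEPRINT, seed=None):
    """Sample the same number of items of each problem type from that type's
    item bank, so every weekly test represents the curriculum equivalently."""
    rng = random.Random(seed)
    form = []
    for skill, n_items in blueprint.items():
        form.extend(rng.sample(item_banks[skill], n_items))
    return form

def skills_profile(scored_items):
    """Turn item-level results (skill, correct) into percent correct per skill,
    the kind of mastery description that can inform instruction."""
    totals, correct = {}, {}
    for skill, is_correct in scored_items:
        totals[skill] = totals.get(skill, 0) + 1
        correct[skill] = correct.get(skill, 0) + int(is_correct)
    return {skill: correct[skill] / totals[skill] for skill in totals}
```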

 

References

 

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.

 

Espin, C. (2006, February). The technical features of reading measures. Paper presented at the annual meeting of the Pacific Coast Research Conference.

 

Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488–501.

 

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449–460.

 

Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204–219.

 

Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification fluency versus nonsense word fluency. Exceptional Children, 71, 7–21.

 

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989a). Effects of alternative goal structures within curriculum-based measurement. Exceptional Children, 55, 429–438.

 

Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989b). Effects of instrumental use of curriculum-based measurement to enhance instructional programs. Remedial and Special Education, 10(2), 43–52.

 

Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Allinder, R. M. (1991). Effects of expert system advice within curriculum-based measurement on teacher planning and student achievement in spelling. School Psychology Review, 20, 49–66.

 

Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1990). The role of skills analysis in curriculum-based measurement in math. School Psychology Review, 19, 6–22.

 

Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.

 

Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1990). Curriculum-based measurement in math. For information, contact L. S. Fuchs, 228 Peabody, Vanderbilt University, Nashville, TN 37203.

 
