Panel #2: “Meeting the Needs of Struggling Learners: How Is RTI Addressing the Needs of Students with Disabilities, Other Students Who Need Differentiated Instruction, and Students Who Need Adequate Instruction?”

Innovations in Implementation Integrity

By Amanda M. VanDerHeyden, Ph.D., Education Research & Consulting, Inc.

RTI Leadership Forum
Washington, DC
December 8, 2010

By far, one of the most common errors in Response-to-Intervention (RTI) implementation is failure to implement interventions with integrity. Given the research describing what it takes to get interventions implemented well in classrooms, it is amazing to see that most systems continue to overemphasize intervention selection and underemphasize intervention management. Believing that selecting the “right” intervention is all that it takes to ensure effective implementation is magical thinking. Research tells us that intervention failures ought to be exceedingly rare events. Experience tells us failures are much more probable than they ought to be. Sustained, effective implementation requires a decision maker who will track the right indicators and make data-informed implementation adjustments to ensure the desired outcomes are attained. RTI is particularly vulnerable to decision errors because the validity of a final decision is influenced not just by the integrity of the intervention but also by the integrity and accuracy of every assessment, intervention, and decision that preceded the final judgment.

Innovations are needed in RTI to limit “operator error” in the intervention, assessment, and decision procedures. Other fields limit operator error in many ways. Think of your car. The cruise control on my car automatically reduces speed when a car is in front of me at a certain distance. The lights automatically turn on and off when needed, and the control panel tells me when it is time for maintenance. In medicine, for example, we see single-dose medications, technology that allows implanted devices to deliver time-released doses of medication, and other strategies designed to reduce patient error. In fact, the likelihood of operator error can be weighed against the costs of available intervention protocols in reaching a decision about which intervention to begin. This type of relative decision making is part and parcel of evidence-based medicine but has yet to make its way into evidence-based education. Evidence-based education provides a necessary but insufficient focus on the degree to which an educational intervention is scientifically supported. Selecting a scientifically based intervention is only part of the process. Many judgments factor into when to intervene, who to intervene with, which intervention to select, and when to conclude that something different or more intensive is needed. In other words, there are many opportunities for operator error in RTI.

Lately, my inspiration for building sustainable, low-error RTI implementation has come from dartboards and prostate cancer. Let me explain. When deciding whether to conduct an assessment, users should consider the accuracy of the assessment in the context in which it will be used, the costs or side effects of the assessment, and whether it will lead to some change in instruction that will meaningfully advance learning. When deciding whether to implement a treatment, decision makers must evaluate the probability of positive and negative outcomes resulting from use of the treatment versus the probability of those outcomes resulting from not using the treatment.

Hoffman, Wilkes, Day, Bell, and Higa (2006) developed a decision-making aid for determining relative risk when choosing what assessments to conduct and what interventions to begin. In the figure below, we can see that conducting a prostate screening each year slightly lowers the risk of death, but there is a cost. Patients exposed to the screening experience incontinence or impotence at much higher rates as a result of the screening, presumably because of false-positive errors that lead to unnecessary treatment carrying those associated risks.


Figure 1: Reprinted with permission from Hoffman, Wilkes, Day, Bell, & Higa (2006). The roulette wheel: An aid to informed decision making. PLoS Med, 3(6): e137. doi:10.1371/journal.pmed.0030137.

Decision makers can look at the following dartboards and choose which one they would throw a dart at to make a judgment about whether to have an annual prostate screening. If they aim at the dartboard on the left, the chances of death are greater, but only slightly so. If they aim at the one on the right, their chances of death are lower, but the risk of negative side effects is much higher.


Figure 2: Reprinted with permission from Hoffman, Wilkes, Day, Bell, & Higa (2006). The roulette wheel: An aid to informed decision making. PLoS Med, 3(6): e137. doi:10.1371/journal.pmed.0030137.


The same approach works for a decision maker choosing whether to implement a treatment. In the example below, knowing that a disease has 40% mortality that can be cut in half with treatment, and that the treatment carries minor side effects in 10% of patients, decision makers can readily see that treatment ought to be the favored option based on probability.
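The arithmetic behind this comparison is simple enough to sketch directly. The short Python sketch below uses only the probabilities stated above (40% mortality, halved by treatment, minor side effects in 10% of treated patients); the function name and parameters are illustrative, not from Hoffman et al.

```python
# Sketch of the expected-outcome arithmetic behind the treatment example:
# 40% mortality untreated, halved by treatment, with minor side effects
# in 10% of treated patients.

def expected_outcomes(p_death_untreated=0.40,
                      treatment_risk_reduction=0.50,
                      p_side_effects=0.10):
    """Return (death risk untreated, death risk treated, side-effect risk)."""
    p_death_treated = p_death_untreated * (1 - treatment_risk_reduction)
    return p_death_untreated, p_death_treated, p_side_effects

untreated, treated, side_effects = expected_outcomes()
print(f"Chance of death without treatment: {untreated:.0%}")    # 40%
print(f"Chance of death with treatment:    {treated:.0%}")      # 20%
print(f"Chance of minor side effects:      {side_effects:.0%}") # 10%
```

Laid out this way, the trade is explicit: treatment converts a 20-percentage-point reduction in death risk into a 10% chance of a minor side effect, which is why the treated dartboard is the better target.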



Figure 3: Reprinted with permission from Hoffman, Wilkes, Day, Bell, & Higa (2006). The roulette wheel: An aid to informed decision making. PLoS Med, 3(6): e137. doi:10.1371/journal.pmed.0030137.

This kind of probabilistic thinking is absent in decisions about screening and intervention in schools. In RTI, there has been a presumption that more intervention is always better, with a resulting emphasis on avoiding false-negative errors at screening. There has also been an assumption, in adopting and using certain decision rules, that assessment accuracy is stable across contexts. Research data tell us these assumptions are error-prone. In education and in RTI, we could do a better job of specifying the probability of positive outcomes relative to the risk of worsening outcomes as a result of a given assessment or treatment in a particular setting. Expending resources to assess and intervene only when those expenditures are likely to improve the odds of a positive change for students and decrease the odds of negative outcomes (or at least not increase them) ought to be the standard by which systems decide whether to use a given assessment or intervention strategy.

In education, our red outcomes might be failing to learn to read, dropping out of school, and exclusion from school. Yellow outcomes might be loss of instructional time, separation from peer groups, loss of free time or enrichment, and the dollar costs of unnecessary assessment and intervention supplies. Green outcomes might be passing the high-stakes test of reading or mathematics or performing above functional benchmark scores on periodic school-wide screenings. The amount of green and red on the graph would depend heavily on the school context. Before beginning any assessment or intervention, there would be much more red on the dartboard for students enrolled in low-achieving schools (because there is a higher prevalence, or higher probability, of failure for students in that context). Prevalence of failure systematically affects the accuracy of an assessment tool designed to detect risk, for example, and changes the cost–benefit equation of doing nothing versus providing intervention. If a child is enrolled in a low-achieving school, that child’s probability of failure is high simply as a result of being enrolled in that school. Intervention in that context may be favored even if it comes at a high cost because the cost of doing nothing is substantial: likely failure. Further, a screening assessment may perform no better than chance in detecting individual students at risk if greater than 50% of the children are actually at risk. Technological innovations make it possible to quantify the relative effects of assessments and interventions on outcomes so that we can make the most efficient, most accurate decisions about when to intervene, who to intervene with, and which interventions to begin. In turn, more accurate decision making allows us to be more efficient and more effective at changing the odds of the outcomes we care about, like learning to read or becoming proficient in mathematics.
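The effect of prevalence on a screener's usefulness can be made concrete with Bayes' rule. The sketch below is illustrative, not drawn from the article: the sensitivity and specificity values are hypothetical, and the point is simply that the same instrument yields very different positive and negative predictive values, and a very different bar to beat, as the base rate of risk changes.

```python
# Illustrative sketch: how prevalence (base rate of risk) changes the
# predictive value of a screening measure with fixed sensitivity and
# specificity. All numbers here are hypothetical.

def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV) for a screener via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    true_neg = (1 - prevalence) * specificity
    false_neg = prevalence * (1 - sensitivity)
    ppv = true_pos / (true_pos + false_pos)   # P(at risk | flagged)
    npv = true_neg / (true_neg + false_neg)   # P(not at risk | not flagged)
    return ppv, npv

for prevalence in (0.10, 0.30, 0.60):
    ppv, npv = predictive_values(prevalence, sensitivity=0.80, specificity=0.80)
    # Accuracy achievable by ignoring the screener and guessing the majority:
    majority_guess = max(prevalence, 1 - prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}, "
          f"majority-guess accuracy {majority_guess:.0%}")
```

At low prevalence the screener's positive calls are mostly wrong (low PPV); at high prevalence, simply treating every child as at risk is already right most of the time, so the screener must clear a much higher bar to add information.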
Strategies like the one presented by Hoffman and colleagues could greatly reduce operator error in RTI decision making by helping us aim at the right target and attain better educational outcomes as a result of our decisions.

Some folks reading this commentary may think this is pie-in-the-sky thinking, but as a frontline implementer, I can attest that more efficient and accurate decision making would substantially change the quality of schooling experiences for students and their families. Currently, schools have no filter for determining how much assessment is enough or when one assessment is better than another. The same goes for intervention. I see this all the time, and the prevailing logic is that “more is better.” This type of thinking is reflected in the recent media attention around the major gift to the Newark city schools. If I had 5 minutes with those funders and decision makers, I would say pay attention to efficiency and pay attention to operator error. More is not always better, and it is time to consider the consequences of unnecessary assessment and intervention and to become more precise in our allocation of instructional resources. RTI has fundamentally shifted the way we think about instructional resource allocation in schools, with a resulting improvement in student and system outcomes. “More is better” thinking is primitive thinking that is too costly. Using technology to guide decision making at the ground level will reduce operator error and give implementers feedback about whether their efforts change the odds of failure for the better (or not), which is what it is all about anyway.


Hoffman, J. R., Wilkes, M. S., Day, F. C., Bell, D. S., & Higa, J. K. (2006). The roulette wheel: An aid to informed decision making. PLoS Medicine, 3(6), e137. doi:10.1371/journal.pmed.0030137


Copyright © 2010 National Center for Learning Disabilities, Inc. All Rights Reserved.
