What’s Your Plan? Accurate Decision Making within a Multi-Tier System of Supports: Critical Areas in Tier 2

How effective is your Tier 2 system of support? Are the interventions working? How do you know? The success of a multi-tier system of supports (MTSS) is dependent upon accurate decisions made at all tiers of the framework. This article is the second in a series of two articles on MTSS decision making. The first article introduced Tier 1 screening decisions, the effectiveness of core instruction, and the use of checklists to support implementation. This article addresses the complexity of the MTSS process and identifies decision points within critical areas of Tier 2 at the building level. Additional tools and resources are also offered to further assist teams in sharpening the accuracy of their Tier 2 decisions to have a positive impact on student outcomes.


Critical Tier 2 MTSS Decision Points

Tier 2 decisions at the building level are critical and complex. After gathering accurate screening data on all students, schools must analyze the data, validate student needs, and match students that need support with an effective intervention. In other words, schools need the right interventions in place, the interventions must be intensive enough to accelerate student learning, and each intervention must be implemented with fidelity. Not an easy task! If your Tier 2 is not working as well as you would like, consider the following five critical areas as a starting point for improving your Tier 2 supports.

Intervention Selection

How do you decide on adding an intervention to your building or district’s stock of intervention resources? When asked that question, a local district superintendent chuckled and responded, “We are poorly informed. We are heavily influenced by the opinions of outside experts or consultants. I have noticed that my principals tend to choose the first thing that might make the problem go away.” He is not alone. Many schools acquire new interventions almost by chance. For example, a vendor might give the central office something to pilot; a regional education cooperative might offer professional development for the intervention and purchase the materials; or a teacher might choose to attend a workshop. However interventions are selected, we know that to be effective they need to be intensive, focused on targeted skills, and efficient enough for students to catch up. Effective interventions also require more explicit instruction, such as instruction that is focused on critical content, is highly organized, and provides frequent opportunities for student responses and practice (see Archer & Hughes, 2011). The challenges involved in meeting all of these requirements are why many schools choose a published commercial product.

If Tier 2 is not as effective as you would like, one area to consider is how your building or district selects interventions or programs to use when intervening with students. This process can be overwhelming; however, you can sharpen your decision making in intervention selection by taking a deeper look at the following factors. When choosing a research-based intervention, start with a predictable problem, identified by grade level or a cluster of a few grades, by examining your reading intervention data over time. For example, teachers and teams in 4th and 5th grade may have evidence that some students continue to struggle with decoding multi-syllabic words, which has a significant impact on their comprehension.

Once you have identified a predictable priority skill area, begin the search for an intervention that addresses your identified problem and fits with your current Tier 1 curriculum, that is available and affordable for your school, and that is supported by rigorous evidence. (If you need assistance in determining what counts as “rigorous” evidence, the U.S. Department of Education, Institute of Education Sciences has published a user-friendly guide with a helpful checklist; see Appendix B of Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide, December 2003.)

Last, but certainly not least, to maximize the effectiveness and efficiency of resources, it is important to select an intervention that your school has the capacity to implement and that you are able to replicate in more than one grade level or classroom.

The National Implementation Research Network (NIRN) has developed a tool that is helpful in analyzing the components involved in selecting a new intervention (or practice). The tool is shaped like a hexagon to highlight the six critical components: 1) need, 2) fit, 3) resource availability, 4) evidence, 5) readiness to implement, and 6) capacity to replicate. Each critical component shown in the hexagon is accompanied by a few questions that will help schools and districts determine whether the new intervention, program, or practice is a worthwhile investment given the school’s limited time and money (for more information, see NIRN’s Exploration Stage: Assessing Readiness page).

Figure 1: The Hexagon Tool for Assessing Readiness. SOURCE: National Implementation Research Network (2009). Copyright 2013 Karen Blase, Laurel Kiser and Melissa Van Dyke and the National Implementation Research Network. Used with permission.

Matching Students to Interventions

A second critical decision point for the success of Tier 2 MTSS is how students are matched to interventions. For most districts, this practice has evolved over time and is done through a grade-level team that analyzes screening data to initially group students; validates student needs through additional classroom or diagnostic assessments, classroom performance, and teacher professional judgment; and then rearranges groups as needed. Error patterns in the screening protocols, as well as intervention pre-placement tests, are additional tools that are used in this process.

There are several important considerations within this stage to avoid decision errors. First, teams should use both data points and professional judgment to make decisions. Professional judgment alone may not be adequate, because it can be highly variable (VanDerHeyden & Burns, 2010, p. 37). Teachers are primarily focused on the performance of the students in their own classroom rather than a broad sample of 4th-grade students. On the other hand, leaving out professional judgment may lead to mistakes in placement. One school almost placed a student in an intensive intervention based on screening data alone, only to find that there was a typo in the screening data (a score of 145 was listed as 45). Situations like this illustrate why individual student data, validated and deepened by the teacher’s professional judgment about the student’s current performance, are critical to the success of matching students to interventions.
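
A simple cross-check between two data sources can catch errors like the 145-keyed-as-45 typo before placement decisions are made. The sketch below is purely illustrative (the field names, percentile scale, and 40-point threshold are assumptions, not part of any published protocol): it flags students whose screening percentile and classroom-measure percentile disagree sharply, so the team reviews those records with professional judgment rather than placing on screening data alone.

```python
# Hypothetical sketch: flag screening records for team review before placement.
# Assumes each student has a screening percentile and a classroom-measure
# percentile; names, scale, and threshold are illustrative only.

def flag_for_review(students, max_gap=40):
    """Return students whose two percentiles disagree by more than `max_gap`
    points -- candidates for data-entry errors (e.g., 145 keyed as 45) or
    for closer review with teacher professional judgment."""
    flagged = []
    for s in students:
        gap = abs(s["screening_pct"] - s["classroom_pct"])
        if gap > max_gap:
            flagged.append((s["name"], gap))
    return flagged

roster = [
    {"name": "Peter", "screening_pct": 12, "classroom_pct": 18},
    {"name": "Maya",  "screening_pct": 8,  "classroom_pct": 62},  # large disagreement
]
print(flag_for_review(roster))  # [('Maya', 54)]
```

A check like this does not replace the grade-level team’s judgment; it only routes questionable records to the team for a second look.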

A second factor that can lead to errors in this process is the human nature of decision-making. Daniel Kahneman and Amos Tversky were instrumental in exploring how humans make judgments (Kahneman, 2003). They labeled some of these tendencies as making decisions based upon availability, anchoring, and insufficient adjustment. Availability is choosing the first thing that comes to mind, such as choosing an intervention for a student simply because the intervention is already in use. For example, a team assigns Peter to a Read Naturally group that is already occurring; however, Peter’s skill deficits do not match the priority skills that the intervention is designed to teach. Anchoring is limiting the goals set for the student based on preconceived notions or perceptions of the student’s abilities. Comments such as “Oh, she won’t be able to do that, it’s too hard. Let’s put her here” are often examples of anchoring. Insufficient adjustment is maintaining initial biases about a student’s performance, even with evidence to the contrary. Statements such as “Well, I know the progress-monitoring data shows she is on track, but I just want to keep her in there just to be sure” can indicate insufficient adjustment.

Matching students to the correct intervention is a difficult task with a lot of room for error. Becoming more aware of our natural tendencies to jump to the easiest solution, keep students where they are, or fail to make adjustments when needed will help teams shift the focus from adult or system preferences back to student need. The complexity of this process also highlights how important the overall MTSS framework is to effective Tier 2 functioning, as it takes creative leadership, flexible resources, and skilled staff to ensure this process works smoothly.

Monitoring Student Progress

How long do you keep a student in an intervention? What is “adequate progress”? This is a critical decision point that requires accurate and timely decisions by the teachers or team.

Historically, this topic has generated a lot of attention in the literature (McMaster, Fuchs, Fuchs, & Compton, 2002; Wayman, Wallace, Wiley, Tichá, & Espin, 2007). In practice, schools typically choose from a variety of methods, such as exit criteria based on reaching a benchmark goal; percentile rates; student rate of growth; student rate of growth as compared to the median rate of growth for all students in the grade or intervention group; or mastery tests. A common method is to review and analyze a graph of progress-monitoring data points. If three or more data points are below the aim line (the line between the baseline and the target goal), the team considers a change in intervention.
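
The aim-line rule above is simple enough to make concrete. The sketch below is a minimal illustration, not a recommended decision tool: it draws a straight line from a (hypothetical) baseline of 40 words correct per minute to a 12-week goal of 64, then checks whether the three most recent weekly scores all fall below that line.

```python
# Minimal sketch of the "three data points below the aim line" rule.
# Baseline, goal, and scores are illustrative numbers, not norms.

def aim_line(baseline, goal, n_weeks):
    """Expected score each week along a straight line from baseline to goal."""
    step = (goal - baseline) / n_weeks
    return [baseline + step * (week + 1) for week in range(n_weeks)]

def consider_change(scores, baseline, goal, n_weeks, rule=3):
    """True if the most recent `rule` observed scores all fall below the aim line."""
    expected = aim_line(baseline, goal, n_weeks)
    recent = list(zip(scores, expected))[-rule:]
    return len(scores) >= rule and all(obs < exp for obs, exp in recent)

# Student starts at 40 words correct per minute; 12-week goal of 64 (+2 per week).
scores = [43, 45, 44, 45, 46]  # weeks 1-5
print(consider_change(scores, baseline=40, goal=64, n_weeks=12))  # True
```

With the sample data, weeks 3 through 5 fall below the expected 46, 48, and 50, so the rule signals that the team should consider a change. As the next paragraph notes, this rule alone ignores important context (number of data points, collection frequency, initial skills, materials), which is exactly why it can mislead.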

Unfortunately, these methods may not result in the best decisions because they do not take other important factors into account. For example, how many data points have been collected? How frequently was the data collected (e.g., once a month or once a week)? What were the student’s initial skills? What materials were used? With so many factors to consider, teachers and teams may lose confidence in their ability to interpret progress-monitoring data, which can lead to missed opportunities for these students. Inadequate goals, students lingering too long in interventions, and intervention changes not happening quickly enough can all interfere with student progress and success.

An emerging body of research is looking at student rate of progress in comparison to a normative sample of student progress percentiles. DIBELS Pathways of Progress are based on the scores of thousands of students and allow educators to track their individual student’s growth alongside the trajectories of other students with similar initial reading skills (Good, Powell-Smith, & Dewey, 2013).

Researchers continue to investigate this topic, but educators already know that students cannot linger in Tier 2 instruction if it is not working. The general consensus seems to be that if all of the components of MTSS are functioning and you have weekly progress monitoring data, eight to twelve weeks may provide enough data to determine if the intensity of instruction is making a difference or if the student needs a change. In some cases, six weeks may be enough. But if students are not making progress, the bigger question is what are the barriers to the student’s learning? What instructional supports can be provided to break down those barriers? Additional diagnostic assessment, which may or may not include evaluation for a disability, may be needed to assist the team in problem solving through the student’s learning difficulties.

Managing Interventions

Have you ever set a goal or made a plan to change a behavior, such as exercising more or eating more vegetables? Did you do it? As it turns out, adults do not always do what they say they will do (VanDerHeyden & Burns, 2010). Many times, Tier 2 interventions are carefully planned at the beginning but then lack follow-up and monitoring, which could explain why students fail to make progress in Tier 2 interventions. For example, one team thought an intervention was implemented as designed, with one lesson completed per intervention time period. But the team discovered much later in the semester that the group was only on Lesson 3, when according to the original plan it should have been on Lesson 10.

Failing to monitor intervention data over time is another example of a predictable problem. Because these problems are predictable they can be anticipated and addressed before they impact student learning. Along with student progress-monitoring data, Tier 2 interventions should be monitored with a checklist procedure and, at times, direct observation by a coach or coordinator. The checklist can be as simple as the following example:

  • Student attendance
  • Materials used
  • Actual time engaged in intervention
  • Pacing or lesson steps accomplished

The school MTSS leadership team or a subcommittee, including the coach and the principal, should systematically review your checklist to ensure that the Tier 2 intervention is operating as intended by the research design. An additional resource for checklists for many common interventions can be found at Heartland Area Education Agency 11’s website, on its Treatment Integrity Checklist page.
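
One way teams keep such checklist data reviewable is to score each session. The sketch below is a hypothetical illustration of the four checklist items above; the field names, the equal weighting of the four components, and the percentage scale are all assumptions, not part of any published fidelity tool.

```python
# Hypothetical per-session scoring of the simple monitoring checklist above.
# Field names and the equal-weight average are illustrative assumptions.

session = {
    "students_present": 4, "group_size": 5,            # student attendance
    "materials_used": True,                            # materials used
    "minutes_engaged": 25, "minutes_scheduled": 30,    # actual time engaged
    "lesson_steps_done": 7, "lesson_steps_planned": 8  # pacing / lesson steps
}

def session_fidelity(s):
    """Average of the four component ratios, expressed as a whole percentage."""
    parts = [
        s["students_present"] / s["group_size"],
        1.0 if s["materials_used"] else 0.0,
        s["minutes_engaged"] / s["minutes_scheduled"],
        s["lesson_steps_done"] / s["lesson_steps_planned"],
    ]
    return round(100 * sum(parts) / len(parts))

print(session_fidelity(session))  # 88
```

A running record of session scores like this gives the leadership team something concrete to review, and makes drift (such as the Lesson 3 versus Lesson 10 example above) visible early instead of at the end of the semester.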

Tracking Intervention Effectiveness

What if you have evidence that your intervention is being implemented as intended but a number of students in your building are still not making adequate progress? The last critical decision point within Tier 2 is gathering information on the effectiveness of Tier 2 interventions as a whole, in addition to individual student progress. If an intervention is not producing positive effects, schools should examine the integrity of the implementation of the intervention as well as the match to student need. Tier 2 intervention time is too precious to waste on an intervention that is not consistently accelerating growth for the majority of the students.

Michigan’s Integrated Behavior and Learning Support Initiative (MiBLSi) has developed a Tier 2-3 Intervention Tracking Tool, a three-part resource designed to assist teams in systematically evaluating Tier 2 and 3 interventions. It consists of three parts: 1) Student Detail; 2) Intervention Effectiveness; and 3) Schoolwide Access. The Intervention Effectiveness part gives a monthly update on the overall effectiveness of the intervention using a combination of student performance data and an implementation fidelity rating. Teams can use this tool to monitor how groups of students are performing over time with the intervention.

| Intervention | Sept. | Oct. | Nov. | Dec. | Jan. | Feb. | Mar. | April | May |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| # of Students Participating in the Intervention | | | | | | | | | |
| # of Students Responding/Making Adequate Progress toward their Goal | | | | | | | | | |
| % of Students Responding/Making Adequate Progress toward their Goal | | | | | | | | | |
| Average Implementation Fidelity Rating | | | | | | | | | |
Figure 2: Tier 2-3 Intervention Tracking Tool SOURCE: Michigan’s Integrated Behavior and Learning Support Initiative (MiBLSi)
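
The monthly arithmetic behind a tracking sheet like Figure 2 is straightforward. The sketch below is an assumed, simplified layout (the dictionary structure and the sample numbers are illustrative, not from the MiBLSi tool): for each month it pairs the percentage of students responding with the fidelity rating, which is the combination the team reviews together.

```python
# Illustrative monthly rollup in the spirit of the tracking tool in Figure 2.
# Data layout and numbers are hypothetical.

monthly = {
    "Sept": {"participating": 12, "responding": 7, "fidelity": 90},
    "Oct":  {"participating": 12, "responding": 9, "fidelity": 85},
}

def percent_responding(row):
    """Percent of participating students making adequate progress, rounded."""
    return round(100 * row["responding"] / row["participating"])

for month, row in monthly.items():
    print(f'{month}: {percent_responding(row)}% responding, '
          f'fidelity {row["fidelity"]}%')
```

Reading the two columns together matters: a low response rate with high fidelity points to a poor match between the intervention and student need, while a low response rate with low fidelity points first to an implementation problem.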

Troubleshooting Guide for MTSS Decisions

Finally, an additional tool that can assist MTSS leadership teams in sharpening their decision-making process is the Troubleshooting Guide for MTSS Decisions: Building Level. This guide was designed to provide ideas, resources, and additional questions for critical decision points within MTSS, such as screening, instruction, supplemental intervention, evaluation, and implementation. When teams are faced with a difficulty within one of the above focus areas, they can use the suggested resources as a guide to prompt their thinking on what may be causing the problem.

Putting It All Together

How effective is your Tier 2 system of support? Are the interventions and the systems you have carefully put in place effective and efficient? How do you know? Taking a deeper look at the five critical Tier 2 decision points, namely 1) intervention selection; 2) the process of matching students to interventions; 3) monitoring student progress; 4) managing interventions; and 5) tracking intervention effectiveness is an essential part of a healthy MTSS. When these Tier 2 system pieces are working, combined with effective leadership, core instruction, and resource allocation, students and staff have the best opportunity to reach their fullest potential.


References

Archer, A., & Hughes, C. (2011). Explicit instruction: Effective and efficient teaching. New York, NY: Guilford Press.

Coalition for Evidence-Based Policy. (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, DC: Author.

Good, R. H., Powell-Smith, K. A., & Dewey, E. N. (2013). DIBELS Pathways of Progress: Setting ambitious, meaningful and attainable goals in grade level material.

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720.

McMaster, K., Fuchs, D., Fuchs, L. S., & Compton, D. L. (2002). Monitoring the academic progress of children who are not responsive to generally effective early reading intervention. Assessment for Effective Intervention, 27(4), 23–33.

VanDerHeyden, A. M., & Burns, M. K. (2010). Essentials of response to intervention. Hoboken, NJ: Wiley.

Wayman, M. W., Wallace, T., Wiley, H., Tichá, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41(2), 85–120.
