Moving From Good Intentions to Good Outcomes: Implementation of Educational Programs
Life is filled with commitments that are made in good faith. When we say we are going to exercise regularly, spend more time with our children, and save for our retirement, we typically mean it. The challenge is that life is a great deal more complicated than reasonable plans and good intentions. Change is hard, schedules are tight, urgent needs arise, and we are held accountable for other things. As a result, many well-intentioned commitments are never actualized.
The same forces that shape our private lives also act on us and our colleagues in schools. When educators commit to implementing programs for students, it is reasonable to assume that they mean to implement them at the time they make the commitment. Unfortunately, both research data and common observation in practice suggest that implementation does not necessarily follow from planning alone (Noell et al., 2005). In reality, getting programs implemented for students is frequently far more difficult than either identifying promising practices or obtaining educators' commitments to implement them.
A cynical observer might conclude from this reality that educators are resistant or unreliable. Although that may be true of the occasional bad apple, anyone who has worked closely with many teachers across years and settings would have a hard time endorsing those characterizations as generally true of teachers. I would argue that they are people just like the reader and the author. Like the rest of us, teachers respond to the demands of their environment: the salient emergency gets attended to first, then the things for which they are held accountable, then the activities necessary for maintaining a manageable daily routine, and so on. The reality is that intervention plans are all too often very far down the list because they are not typically an emergency, teachers are not held accountable for implementing them, and they are not necessary for managing the group.
It is important to recognize that parents and education administrators are incredibly influential in establishing what is important in schools. If these two groups attend to turning in lesson plans, completing forms (e.g., individualized education programs [IEPs]), bus duty, and maintaining order, then those are the things that are going to get done first. It is quite understandable that this is the case—teachers are eager to maintain their employment and their positive working relationships with parents and administrators. The problem in the current context is that the things in the education environment that often demand the most attention—the administrative and managerial tasks in many cases—are not necessarily the most important things in terms of the education of students. Tasks like giving a clear and compelling lesson in algebra or making sure that a student's intervention to develop phonemic awareness is actually implemented often do not make it to the top of administrators' and parents' list of things to follow up on.
The potential tragedy of a focus on the administrative and managerial duties in education is that parents, administrators, and educators may begin to behave as if those are the critical outcomes. If a student has been evaluated, all of the relevant forms have been filled out, and the student is placed in a program that is labeled as appropriate, the participants may perceive that their responsibilities have been met. However, nothing could be further from the truth. The reality is that a decision has just been made to label a student, and may have been made to segregate that student from his peers for some part of his instruction. Segregation and labeling can have adverse consequences for students. The societal justification for them is that they will lead to better services and better outcomes for the student than if the action were not taken. The question is, will it?
Moving from Form to Substance: Intervention Implementation
Intervention plan integrity (IPI) is the degree to which an educational intervention or service is implemented as planned. IPI can be used to describe the degree to which an IEP, a Section 504 accommodation plan, a classwide behavior management plan, or an individual student intervention is implemented as it was planned. At a very basic level, IPI is the single most fundamental element underlying all services to children with exceptional needs. If schools complete forms and place students in programs without actually implementing the intended services, the entire system of services to children with exceptional needs becomes an administratively supported mirage. The issue has both substantive ethical and legal dimensions (Noell & Gansle, 2006). For example, in many states, students are required to be provided with interventions in general education prior to consideration for a full and individual evaluation, as a buffer against overidentification of students as disabled. In the most dramatic examples of this, Response to Intervention (RTI) itself is proposed as the means by which students are identified as having disabilities (Vaughn & Fuchs, 2003). But what function does such a system serve if plans are devised and never implemented in harried schools where no one attends to IPI?
Educational systems must document the provision of individualized services to students who are disabled or suspected of having a disability; however, to do so at the expense of actually demonstrating implementation of those services would put the educational system in the unenviable position of having created a process with no substance. Moreover, such a failure to provide services would be, at a fundamental level, a failure to protect students' civil rights. It is also important to recognize that such a misdirected focus leading to such outcomes is not solely a teacher issue, but is a parent, administrator, teacher, and lawmaker issue. To the extent that all parties to the educational enterprise recognize IPI as fundamental to the legitimacy of the educational enterprise and attend to it, implementation will be improved. To the extent that the contributors to education focus on forms and management tasks, those things will be predominant.
Assessing Intervention Implementation
Considering all of the professional issues in assessing IPI would require a much longer and far more technical discussion than space here permits (see Noell, 2008, for further discussion). However, the research available thus far does suggest two fundamental principles that can guide educators and parents in assessing IPI. Most importantly, look rather than ask. The available data suggest that teacher self-reports of implementation do not provide the kind of specific, accurate information needed for an accurate assessment of IPI. The most clearly promising alternative is to review the products of the intervention and to observe implementation. Brief, "drop by" observations of implementation can be incredibly informative, but they are hard to do. Reviewing products is an attractive approach because it is flexible as to when it is done and can reflect large spans of time. This approach simply involves reviewing the physical products of the intervention (e.g., self-monitoring logs, academic practice work products, and/or progress-monitoring data) to see whether the intervention is being done and which elements, if any, are being omitted.
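The product review described above can be quantified quite simply. The following sketch is purely illustrative: it assumes implementation is recorded as a daily checklist of intervention steps (the step names and data here are hypothetical, not taken from the article), and it summarizes overall integrity as the percentage of planned steps completed, along with which steps are most often omitted.

```python
# Hypothetical sketch of summarizing a permanent-product review of
# intervention implementation. Step names and data are illustrative.

def integrity_summary(daily_logs):
    """Return overall percent of planned steps completed and a count of
    omissions per step, from daily checklist records."""
    step_names = sorted({step for log in daily_logs for step in log})
    completed, total = 0, 0
    omissions = {name: 0 for name in step_names}
    for log in daily_logs:
        for name in step_names:
            total += 1
            if log.get(name, False):
                completed += 1
            else:
                omissions[name] += 1
    overall = 100.0 * completed / total if total else 0.0
    return overall, omissions

# Example: three days of a hypothetical four-step reading intervention.
logs = [
    {"model_skill": True, "guided_practice": True,
     "independent_practice": False, "progress_probe": True},
    {"model_skill": True, "guided_practice": True,
     "independent_practice": True, "progress_probe": True},
    {"model_skill": True, "guided_practice": False,
     "independent_practice": False, "progress_probe": True},
]
overall, omissions = integrity_summary(logs)
print(f"Overall integrity: {overall:.0f}%")  # 9 of 12 steps -> 75%
print("Most-omitted step:", max(omissions, key=omissions.get))
```

A summary like this makes the meeting-based review described below concrete: the team sees not just that implementation is incomplete, but which specific elements are being dropped.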
Evidence suggests (Noell et al., 2005) that a second key principle is the importance of brief, regular meetings with the person or persons implementing the intervention to review the data. These meetings should include review of the student outcome data and the implementation data, and discussion of the problems that have emerged. In any intervention, problems will eventually emerge. Such problems can include student improvements that make the current plan irrelevant, problems with the implementation process, realization that the plan is impractical, and/or inadequate improvement. The key is to recognize that the plan will have to be revised and that keeping on track will inevitably require objective data about student outcomes and plan implementation.
One of the questions that will inevitably arise when policy makers, educators, and parents become seriously engaged in assessing implementation is how much is enough. Years of research and practice in this area suggest to me that perfect implementation in education may in rare cases be an attainable goal, but is rarely a sustainable goal. People make mistakes, schools have early dismissals, and disruptive events happen in schools. It also is clear that deciding beforehand how much implementation will be enough is not achievable with the current state of knowledge. However, individual students' data may provide the key on a case-by-case basis. Obviously, if a student is making good progress, then there are few reasons to worry about improving implementation. If student progress is poor, reviewing implementation data may suggest that the intervention does not meet the student's needs even when implementation is very good. Alternatively, if progress is poor and implementation is weak, the team involved in the intervention can use the data to develop strategies for improving implementation in hopes of improving student outcomes.
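The case-by-case logic above can be sketched as a simple decision rule. This is a hypothetical illustration, not a validated protocol: the integrity threshold is an arbitrary placeholder, since the article notes that no defensible a priori standard for "enough" implementation currently exists.

```python
# Hypothetical decision sketch combining student progress with
# implementation-integrity data. The 80% threshold is an illustrative
# assumption, not a standard from the research literature.

def next_step(progress_good: bool, integrity_pct: float,
              integrity_threshold: float = 80.0) -> str:
    """Suggest a team focus given progress and integrity data."""
    if progress_good:
        return ("Continue: outcomes are adequate; "
                "little reason to worry about improving implementation.")
    if integrity_pct >= integrity_threshold:
        return ("Revisit the plan: implementation is strong, so the "
                "intervention itself may not meet the student's needs.")
    return ("Support implementation: use the integrity data to plan "
            "strategies for improving implementation before changing "
            "the intervention.")

print(next_step(progress_good=False, integrity_pct=55.0))
```

The point of the sketch is simply that neither progress data nor integrity data alone is sufficient; the interpretation of one depends on the other.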
Educators, parents, and policy makers who choose to seriously examine the implementation of educational interventions will face substantial challenges. These will include both practical barriers, such as resistance to change, and technical barriers, such as deciding how to assess implementation. However, general and specialized programs in education are important commitments to the students we serve and to our own future as a nation. Sustained implementation of educational programs will require the sustained focus of educators, parents, and policy makers. Hopefully, the challenges will not deter us from being substantively engaged in assessing, talking about, and improving the implementation of programs in education. Improving student outcomes requires moving beyond planning for effective educational programs; it depends on actually implementing them.
Noell, G. H. (2008). Research examining the relationships among consultation process, treatment integrity, and outcomes. In W. P. Erchul & S. M. Sheridan (Eds.), Handbook of research in school consultation: Empirical foundations for the field (pp. 323–342). Mahwah, NJ: Erlbaum.
Noell, G. H., & Gansle, K. A. (2006). Assuring the form has substance: Treatment plan implementation as the foundation of assessing response to intervention. Assessment for Effective Intervention, 32, 32–39.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., et al. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87–106.
Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137–146.