Panel 3 Speaker: Jack Fletcher, Ph.D. - University of Houston (TX)
RTI Leadership Forum
December 8, 2010
Good afternoon. I hope everybody enjoyed their soup. I certainly did. I wanted to call this talk Much Ado About Nothing, but I thought that was probably too irreverent, so I didn’t go there. But I want to preface my talk by saying that I really do think a lot of emotion is stirred up over this particular issue of what constitutes a comprehensive evaluation, and the point of my paper is that it’s not really arbitrary or a matter of choice. There are laws, and the laws are really clear that every child needs a comprehensive evaluation; they also spell out what the comprehensive evaluation has to include. Most of the issues are not about whether to do a comprehensive evaluation or what a comprehensive evaluation looks like under RTI; they don’t have much to do with RTI at all. On the other hand, if you’re working in the context of an RTI model, and notice how careful I am to phrase this, in the context of an RTI model, which is a service delivery model in which identification is really a byproduct of participating in RTI, your conception of what a learning disability might be is different. So these, for example, are four different kinds of models for identifying children with learning disabilities. What I’ve done is highlight what I think people would regard as the essential characteristic of each model. With IQ-achievement discrepancy models, it’s the discrepancy between IQ and achievement, which people say is basically a marker for the core construct of a learning disability: unexpected underachievement. Here you also see a low achievement model, a patterns-of-strengths-and-weaknesses model, which is a cognitive model, and RTI.
RTI is an attempt to move beyond an even older neurological or perceptual model of learning disabilities, and away from what IQ discrepancy instituted, which is basically a cognitive model, toward a model that really looks at instruction and at a child’s intractability to instruction, so that a child with a learning disability becomes a child who’s really hard to teach. People may not agree that we should make those shifts, but I think it’s important to recognize that that’s what’s inherent in these sorts of discussions: it’s what our underlying construct is for learning disabilities.
The learning disabilities summit that Lou Danielson and Renee Bradley organized some time ago talked about what the core aspects of a learning disability were. They identified three. The first is that you evaluate a child’s instructional response, which is inclusionary. And notice how I said that: I didn’t say evaluate RTI, I said evaluate instructional response. People have conflated the evaluation of instructional response with the RTI process itself. Evaluating instructional response is a very important part of the issues that pertain to eligibility for special education, but it cannot be the only criterion, both for psychometric reasons and for legal reasons: the law doesn’t allow it. Then the LD summit said establish low achievement, because low achievement is an inherent part of learning disabilities, and then it said apply the exclusions: make sure the child is not underachieving because of contextual factors, other disabilities, things of that sort. When we talk about what needs to go into a comprehensive evaluation, this is what we need to be able to assess.
There are misconceptions about RTI; I put only a few of them up. The first is that the goal of RTI is to identify students as learning disabled. That’s not true. It’s a service delivery model; identification is a byproduct. The second is that inadequate instructional response equates to special education eligibility. Not only is that illegal, but anybody who advocates it should go to special education jail, like George Basche in Florida, Ed Steinberg in Colorado, and Ed Shapiro, because these are examples of states where I’ve heard there is an RTI stand-alone model.
Another myth is that the evaluation procedures are fundamentally different. That is not the case. Most aspects of the evaluation procedures, such as parental consent, reliability and validity of data, and a comprehensive data-gathering process, are universal and apply no matter what sort of identification model you use, including some form of evaluation of instructional response, because the law requires evidence of adequacy of instruction in general education in reading and math. So no matter what you do, that’s part of the evaluation. What’s different are mostly parental issues. This is what RTI adds: if you’re in an RTI model, you’re required to document parental notification and the right to request an evaluation at any time. You have to specify the learning strategies used to accelerate progress; this goes into the IEP and is also given to the family. And then some states add additional criteria that might involve the number of interventions, their duration, and fidelity, and that varies from state to state. But that’s what RTI adds. The rest is universal.
So, what is a comprehensive evaluation? Well, this is a lot of words, and I’m a pretty wordy person; it’s how I think. It’s a data-gathering process that includes child observation and may or may not use standardized tests. In the context of RTI, the goal is not only eligibility: the purpose of doing a comprehensive evaluation is to understand why the child has not responded to instruction. In an RTI context instructional response is routinely evaluated; it has to be added if you have another identification model. The exclusionary criteria require consideration of other factors and may involve additional evaluations. There is absolutely nothing about an RTI model that says a child can’t be evaluated for speech and language issues, an intellectual disability, autism, or anything else the interdisciplinary team deems fundamental. And I agree with that; I think that’s important. I’ll even add that children being evaluated for learning disabilities ought to have routine behavior rating screenings for co-morbid learning, attention, and emotional problems. That should be done with every kid; it’s easy to do. And we ought to include norm-referenced achievement tests, because how you do on norm-referenced achievement tests can be directly tied to intervention.
There’s a big question and a lot of controversy about what cognitive assessments add. I know how to do cognitive assessments; I’ve done them for years and years. I cannot find data showing that cognitive assessments, that is, strengths and weaknesses in cognitive skills, are related to intervention outcomes. It’s very hard to find. I’m personally still doing research on it, and I wish other people would. But it’s hard to find.
A bigger issue is that there is little evidence of additional value-added information from an evaluation of cognitive skills if you’ve carefully evaluated achievement levels. This is a very important issue. They’re correlated, so if you measure achievement, what’s left for a cognitive assessment to add?
And then one that really drives me nuts is the idea that cognitive deficits somehow indicate that the child has a biological problem as opposed to an environmental problem. If you’re in a Title 1 environment and you can’t read words, on average you’re going to have phonological awareness difficulties; the cognitive deficit doesn’t rule out an environmental cause. And a learning disability is very much an interaction of biological and environmental factors.
IQ is an issue only if there’s a question about IQ, like the presence of an intellectual disability. And one of the explicit purposes of IDEA 2004 is to move people away from knee-jerk, gatekeeping kinds of evaluations where every child gets the same evaluation.
There are big issues in identification and RTI models, but they are basically many of the same issues we’ve always had to deal with. There are no qualitative markers that tell us whether somebody has LD. LD exists on a dimension; it’s part of a normal continuum. It’s like other kinds of problems that kids have, like ADHD, or obesity and hypertension in adults; there’s nothing unique about these kinds of problems. People still do not take measurement error into account, and I’m dismayed by the number of states that have gone in and again adopted absolute criteria, like having to read below the 5th percentile. There’s no difference between that and saying you have to have a 16-point difference between IQ and achievement. We are repeating the same mistakes, and nobody’s talking about, for example, confidence intervals and measurement error, which we address routinely if we’re going to evaluate intellectual disabilities.
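To make the measurement-error point concrete, here is a minimal sketch (my own illustration, not from the talk; the SD and reliability values are hypothetical) of putting a confidence interval around an observed score instead of treating a percentile cut as absolute:

```python
# A minimal sketch: a 95% confidence interval around an observed standard
# score using the standard error of measurement (SEM). The SD and
# reliability values are hypothetical.
import math

def score_ci(observed, sd=15.0, reliability=0.90, z=1.96):
    """Return a (low, high) confidence interval for an observed score."""
    sem = sd * math.sqrt(1.0 - reliability)  # SEM = SD * sqrt(1 - reliability)
    return observed - z * sem, observed + z * sem

# A child scoring 76 on a scale with mean 100 and SD 15 (the 5th
# percentile cut is roughly 75): the interval straddles the cut point.
low, high = score_ci(76)
print(round(low, 1), round(high, 1))  # 66.7 85.3
```

The point of the sketch is that a child a point above or below an absolute cut score is, within measurement error, indistinguishable from one on the other side of it.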
Instructional response itself is a continuum, probably a continuum of severity. And we don’t have evidence of qualitative markers that discriminate adequate from inadequate responders and don’t exist on a continuum of severity. The specific issues in RTI are more than just cut points, and they don’t equate to the adequacy of the measurement of instructional response. And I’m right with Amanda. I think the big question is how the field moves away from the preoccupation with the rigid kinds of psychometric markers we have used for 30 years toward more informed ways of making decisions about kids that involve multiple criteria, and even, as Daryl said, the use of things like Bayesian models that would talk about the probability of certain outcomes given certain experiences. I talked about it in my paper, but I’m not going to talk about it now.
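As a sketch of the Bayesian framing mentioned here (the base rate, sensitivity, and specificity below are hypothetical numbers of my own, not from any study), Bayes' rule shows how the probability that a child has LD given a positive indicator depends on the base rate as much as on the indicator itself:

```python
# A sketch of the Bayesian framing with hypothetical numbers: the
# posterior probability of LD given a positive indicator.
def posterior_given_positive(base_rate, sensitivity, specificity):
    """P(LD | positive result) by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# With a 5% base rate, even 90% sensitivity and 90% specificity give a
# positive result only about a one-in-three chance of being correct.
print(round(posterior_given_positive(0.05, 0.90, 0.90), 2))  # 0.32
```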
So I was asked to talk about alternative views, and the alternative to the viewpoint I would present is the one in the White Paper that the Learning Disabilities Association recently released. In my paper I raised a lot of questions about the White Paper, because I don’t feel there’s an evidence base that supports some of the assertions made in it. The first thing I really want to know is: where are the data on processing strengths and weaknesses models that tell us about decision errors? Specificity, sensitivity. How accurate are these models? I cannot find it. I’d also like some clear evidence of aptitude-by-treatment interactions, group-by-treatment interactions, because I can’t find it. Many of the arguments are really predicated on a straw-person view of RTI as a stand-alone model; a comprehensive evaluation is always required. Statutes. I am so tired of seeing the claim that the statutes mandate a cognitive assessment. The statute says that you need to measure the manifestations of the disorder of psychological processes, and then goes into the eight domains of achievement in which a person may have some sort of impairment. It does not mandate assessment of cognitive skills. And then the other one, the idea that RTI models don’t generate true positives, meaning that if you go through the RTI process, there’s no way to evaluate the diagnostic accuracy of RTI models because you don’t know who has LD. If that is true, then there are no true positives in LD, because the issue is always the same: it always depends on how we choose to operationalize the model and what sorts of measurements we take, and there are absolutely true positives in any model we decide to use.
My Center, which Brett Miller and NICHD support, evaluates these sorts of things. We take them very seriously. And I’m going to finish up by telling you about three studies we’ve done recently that address some of these issues. We can’t find data on the sensitivity and specificity of processing strengths and weaknesses models, so we’ve done a simulation study: we took three PSW models, created a latent data set where the number of kids that have specific learning disabilities is created and known, and then asked how well these three PSW models can recapture those data. To summarize a very complicated study: all three are accurate in one respect. They don’t identify very many children as SLD, two to three percent, depending on the assumptions you make about the size of the discrepancy in cognitive skills and what counts as an important difference. If the models say a kid is not LD, they’re very accurate; they don’t make very many false negative errors. But the positive predictive value is very low, which is a big issue for any kind of clinical assessment model. What that means is that if we tested 10,000 kids with a CDM model, 1,558 would be identified as positive. Only 25 are correctly identified, and 1,533 are false positives, and if you’re in an ATI model you get the wrong intervention. That would be iatrogenic.
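Working through the numbers just quoted (10,000 children screened, 1,558 flagged positive, 25 of those truly SLD), the positive predictive value falls out directly:

```python
# The arithmetic behind the quoted simulation result: 10,000 children
# screened, 1,558 flagged positive, 25 of those truly SLD.
tested = 10_000
flagged = 1_558
true_positives = 25
false_positives = flagged - true_positives  # 1,533, as in the talk
ppv = true_positives / flagged              # positive predictive value
flag_rate = flagged / tested                # share of children flagged
print(false_positives, round(ppv, 3), round(flag_rate, 3))  # 1533 0.016 0.156
```

A PPV of about 1.6% means that out of every hundred children the model flags, fewer than two are correctly identified.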
We need to study these models more carefully. I talked about a continuum of severity. These are cognitive profiles of children who are inadequate responders, at the bottom, using different kinds of criteria. The bottom two lines use two different criteria for defining inadequate response, the ones in the middle are adequate responders, and the ones at the top are typically achieving children. These are different sorts of cognitive measures: phonological awareness, rapid naming, different language measures, working memory measures. The simple point is that these are parallel profiles. Right? They’re parallel. You’re not seeing qualitative differences. And the best predictor of intervention response across all these different measures is the very first one, which is phonological awareness.
And then finally I talked about value-added models. This is really complicated, and I apologize, but I just want people to know that people are studying these sorts of things; I’m sure you’re going to get more data from Doug later on. If you try to predict the cognitive data, putting the reading scores into the model first and then adding the contrast of adequate versus inadequate response, there is basically no additional variance added, meaning that the cognitive skills are fully explained by the level of reading ability.
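The value-added logic can be sketched like this (a toy illustration with simulated data of my own, not the study's data or measures): regress a cognitive score on reading level, then add the responder-status contrast and ask how much R-squared improves.

```python
# A toy value-added illustration with simulated data: compare R-squared
# from reading alone against reading plus a responder-status contrast.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
reading = rng.normal(100, 15, n)
# Simulate the reported finding: cognition tracks reading level only.
cognitive = 0.6 * reading + rng.normal(0, 8, n)
responder = (reading > 90).astype(float)  # responder status tied to reading

def r_squared(predictors, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_reading = r_squared([reading], cognitive)
r2_both = r_squared([reading, responder], cognitive)
print(round(r2_both - r2_reading, 4))  # near zero: no added variance
```

Under these assumptions the responder contrast adds essentially nothing once reading level is in the model, which is the shape of the result described above.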
So I’m going to stop. RTI is absolutely not a panacea. I have a laundry list of things here that potentially represent problems with RTI models. They need to be taken very seriously, and people like Doug and Don Deshler have done a great job of reminding us what these issues are. But let’s remember that RTI provides an alternative to cognitive and even older neurological conceptualizations of LD, one that is directly linked to instruction, which I think is important. I have not said that cognition is not related to LD. I have not said that there aren’t neurobiological factors in LD. I study these things myself. I publish brain imaging studies. I work with people who do genetic studies. But I cannot sit here and tell you that this knowledge facilitates intervention. I have good brain imaging capabilities, and I don’t send kids that I think might have a learning disability for a brain imaging study. I think that RTI makes learning disabilities a real construct. We can argue about how to measure learning disabilities, but the underlying constructs are absolutely real, and they survive the definitional variability and the disputes that we have. Thank you very much. (applause)