October 26, 2009
I missed Secretary of Education Arne Duncan’s speech at Teachers College on Thursday because I was working on his behalf in Washington. I was one of about 17 researchers on a panel evaluating a batch of research proposals on school reform for the Institute of Education Sciences (IES), the research arm of the federal Department of Education. IES seeks to identify malleable factors (e.g., education programs, policies and practices) that can improve education outcomes. To do so, IES has developed a progressive goal structure for research projects.

Goal One projects are exploratory, intended to inform the development of interventions by examining existing relationships between policies and practices and educational outcomes. Goal Two projects are intended to develop innovative educational interventions that can be implemented in school settings, and to collect some preliminary data on the educational outcomes observed in a pilot implementation of the intervention. Goal Three projects use rigorous methods to examine the efficacy of fully-developed interventions, as well as the feasibility of implementation, in at least one local site. And finally, Goal Four projects attempt to evaluate whether interventions proven to be successful in a local site, with help from the program developers, can be scaled up to be effective under different conditions, and without the direct involvement of the program developers. (There’s also a Goal Five, for research on measurement, but that’s a different animal.)

Over the years that IES has had this goal structure, more than 70% of the projects funded under Goals One through Four have been Goal One or Goal Two projects; about one-quarter have been Goal Three projects, and only 3% have been Goal Four projects.
The reasons for this are pretty clear. To be a good prospect for scaling up in a Goal Four project, an intervention must previously have been shown to be effective in at least one site, using rigorous methods for assessing cause-and-effect relationships. Relatively few interventions meet this threshold, because most policies and programs don’t have educationally meaningful effects, even if it seems like they ought to. Similarly, projects that are good candidates for Goal Three funding must previously have shown at least some evidence of effects on student outcomes in pilot studies in which the intervention received a tentative tryout, but not a full-blown test using rigorous experimental or quasi-experimental research methods.
I was struck by a thought experiment: what if my panel of distinguished researchers (the other members, at least) had been presented with a proposal based on the Race to the Top criteria that Secretary Duncan talked about at Teachers College—criteria that have been acclaimed by opinion writers such as Nick Kristof and David Brooks, as well as by the editorial page writers for major newspapers in New York City and around the country? The draft Race to the Top criteria for funding state proposals provide incentives for linking teachers to their students’ standardized test scores, and in his remarks on Thursday, Secretary Duncan drew attention to Race to the Top incentives for states and districts to link student performance to the teacher preparation programs from which students’ teachers had emerged. Only Louisiana currently does this, the Secretary said. What if a scale-up proposal for this intervention had been presented to a panel charged with applying the IES criteria to evaluate its fundability?
It would have been laughed out of the room.
Not literally, of course; the panel members take their work very seriously, and seek to provide feedback to applicants as well as to advise the staff of IES about the merit of the proposals. But a key criterion for the viability of Goal Three and especially Goal Four proposals is evidence that the intervention has had a positive effect on student outcomes. There is to date no evidence that the implementation of longitudinal data systems linking teachers and teacher preparation programs to student achievement outcomes has actually improved student performance.
Could such data systems result in improved outcomes for students? Sure. However, I have yet to see a full-blown theory of change specifying exactly how the implementation of longitudinal data systems would result in better outcomes, and even theories that seem quite plausible often don’t pan out. Developing a theory of change would be an essential feature of a Goal Two development and innovation project.
And that, in my view, is where longitudinal data systems linking teachers and teacher preparation programs to student outcomes belong. As innovative pilot projects developed and refined in local settings over a few years—not as projects rushed to scale in states across the country despite the complete lack of evidence that they will improve student achievement.