Numerous examples have been identified in history and mathematics, as well as in science (National Research Council, forthcoming). Such “relevant” knowledge regarding the tenacity of everyday understandings becomes usable for a teacher only when it is applied to the subject and topic being taught, and when it is incorporated into instructional activities that draw out and effectively work with students’ preconceptions. These activities must lend themselves to incorporation into existing instructional practice, or serve as an acceptable replacement for that practice, if they are to have an effect. Only the most extraordinary teachers will be able to undertake such a task on their own. Furthermore, only the most inefficient profession would require the design of individual solutions to a general problem.

To effectively “bridge” research and practice, then, a research and development program must generate new knowledge and draw on existing robust, relevant knowledge from a variety of disciplines, elaborate that knowledge so that it is usable in instructional practice, and then incorporate it into carefully tested tools and programs directed both at student learning and at teacher learning. Research and development must be closely intertwined, so that program features are designed in response to research knowledge (derived either from disciplinary research or from the study of educational practice), and so that knowledge is continuously revised through iterative cycles of design, study, and redesign. This necessarily means that the relationship between research and practice is neither linear nor unidirectional. Instead, researchers and practitioners must interact in meaningful, progressively more sophisticated ways. Research is not neatly packaged and sent out to teachers to be implemented. Rather, researchers and teachers are mutually engaged in research and development in the context of practice.

Critical to the notion of follow-through is that when research findings are compelling, sustained attention is required to ensure independent replication of research and evaluation results across the range of environments in which an educational intervention is intended to be used. Because education is a complex enterprise in which any outcome is influenced by a variety of factors, the conditions that support success in one setting may not be understood until the intervention is attempted in other settings in which conditions differ. Moreover, evaluations are often conducted only by the designers of the intervention or their “critical friends.”


