This chapter is not intended to provide step-by-step guidance on how to conduct evaluations. Rather, we describe the major stages in the evaluation process and discuss how NASA could improve its efforts at each of those stages. Following an initial discussion of evaluation issues with some reference to NASA, the chapter is organized by the major components involved in evaluating programs, from design to evaluation of impact. The chapter draws in part on a paper the committee commissioned from Frances Lawrenz, which reviewed a set of ten external evaluations of NASA’s K-12 projects, including the Aerospace Education Services Project (AESP), NASA Explorer Schools (NES), a module of the Digital Learning Network (DLN), and EarthKAM (Lawrenz, 2007). Lawrenz also reviewed evaluations of two programs that are outside the headquarters Office of Education: GLOBE and the Sun-Earth Day event. Table 5-1 summarizes key aspects of the evaluations, including the questions and the design or methods.

The evaluation of education programs is a well-codified practice. There is a professional organization of evaluators, several related journals, and a code of ethics. There are established methods for framing evaluation questions; for hypothesizing the theories of change or of action by which a program expects to reach its goals; for developing measures of the extent to which the stages of a theory are realized; and for crafting an evaluation design, collecting data, analyzing the data, and reaching conclusions about the import of the investigation. Although there are disputes in the field about such issues as the best design to use for particular kinds of questions, the practices are widely understood and accepted.

In carrying out a specific program evaluation, it is important to be clear about the intended goals and objectives of the program, as well as to distinguish the purposes of the evaluation itself, in order to frame questions appropriately and design the evaluation to address those questions. The key to an effective evaluation is a design that answers the specific questions that are relevant for decisions at a given time. Sometimes quantitative data may be necessary; at other times, rich qualitative data are more responsive to the specific questions.

One way to arrive at priority questions for an evaluation is to consider the major audience for the evaluation and how its results will be used. It is important to recognize that a single evaluation may not be able to provide the information needed to serve different audiences or to inform the decision at hand. For example, program or project developers might want information on how to improve a program; congressional aides might want to know if the program improves student
