Preface

Late in 1978 the National Research Council, with support from the Carnegie Corporation, established the Panel on Outcome Measurement in Early Childhood Demonstration Programs, to operate under the aegis of its Committee on Child Development Research and Public Policy. The panel was established in response to a widely perceived need to review and reshape the evaluation of demonstration programs offering educational, diagnostic, and other services to young children and their families. The panel's mandate was to examine the objectives of contemporary demonstration programs; to appraise the measures currently available for assessing achievement of those objectives, particularly in light of their relevance for public policy; and to recommend new approaches to evaluation and outcome measurement.

The members of the panel construed their mandate broadly. Recognizing the increasing diversity of programs aimed at young children and their families, we examined programs providing a wide range of services--not just preschool education (probably the predominant focus of demonstrations in the past) but also day care, health care, bilingual and bicultural education, services to the handicapped, and various family support services. Because we wanted to contribute to the future of evaluation more than to comment on its past, we deliberately included services and issues that have not been heavily studied but are likely to be salient in the 1980s and beyond. Rather than confine our attention to relatively small-scale, carefully controlled demonstrations, such as the preschool programs that were precursors of Head Start in the 1960s, we also examined larger, less controlled, policy-oriented demonstrations of novel service delivery systems. We paid explicit attention to the problem of implementing successful demonstrations on a large (state or national) scale. While we tended to focus on publicly funded programs for children from low-income families, we also examined privately funded programs and programs that serve children without regard to income.

The panel examined questions that went considerably beyond "outcome measurement" as that term is usually conceived. We paid relatively little attention to the metric properties of particular instruments, concentrating instead on the broader context of outcome measurement--on the kinds of information that would be most useful in shaping policies and program practices. This inquiry led to consideration not only of outcomes but also of the services delivered by programs, of day-to-day transactions between program staff and clients, and of interactions between programs and their surrounding communities. Finally, we found it impossible to discuss outcome measures without also considering the kinds of research designs and evaluation processes in which measures might most usefully be embedded.

The panel itself was a diverse group, including persons trained in psychology, sociology, anthropology, economics, medicine, and statistics--some of them from the academic community, some from state and federal governments, and some from private research organizations. Although there were, of course, differences in emphasis and differences of opinion about specific points, it is significant that these diverse members agreed on the panel's basic message.

An important part of the panel's message involves programs themselves: the diversity of services they render, the clients they serve, and the policy issues they raise. As members of the panel pooled their knowledge about particular programs, we began to see that systematic examination of the characteristics of contemporary demonstration programs, and of their attendant policy issues, would go a long way toward pinpointing the inadequacies of existing measures and designs as well as pointing toward needed improvements. Our emphasis on program realities and policy concerns is not intended as advocacy for specific programs or policies; it is intended solely to highlight issues of design and measurement. In this connection, we attempted to balance attention to the benefits of children's programs with attention to measurement of their costs, administrative burdens, and unintended consequences. We by no means want to imply that evaluators must confine themselves to questions posed by program managers and policy makers. On the contrary, one of the most important functions of evaluation is to raise new questions, and one of its major responsibilities is to reflect the concerns and interests of children, parents, and others affected by programs. Nonetheless, sensitivity to issues of public policy and program management, in addition to professional expertise in child development, family functioning, or research methodology, will probably increase the evaluator's ability to identify significant questions that have previously escaped notice.

Existing evaluations have tended to focus on how programs influence the development of individual children. Although the underlying concern of many programs has been long-term effects, in practice most evaluations have had to measure immediate impact--the "short, sharp shock," as one member of the panel put it--often by means of standardized measures of cognitive ability and achievement. A panel composed primarily of researchers might be expected to urge a search for new measures in the "socioemotional" domain and to recommend design and funding of long-term, longitudinal studies of program effects. Although we recognize the value of such measures and studies for addressing certain scientific and practical questions, we see them as part of a larger mosaic of potential measures and designs, addressing a much wider range of questions.

No single evaluation can examine every aspect of a program's functioning. On the contrary, resource constraints and the burden that evaluation imposes on programs and clients necessitate careful selection of questions to be answered and methods to be used. However, the choice of measures and of research designs should be based on rational assessment of the full range of possibilities, in light of the goals and circumstances of the particular program and evaluation in question--not on grounds of convention or expediency. To this end the panel urges that evaluators give careful consideration to several types of information that lie outside the domain of developmental effects but that can potentially illuminate the working of programs as well as program outcomes in the broadest sense. Specifically we call attention to the importance of:

· characterizing the immediate quality of life of children in demonstration programs, particularly day care and preschool education, in which they spend a large part of the day;

· describing how programs interact with and change the broader social environment in which a child grows or a family functions--the web of formal and informal institutions (extended families, schools, child welfare agencies, and the like) that can potentially sustain, enhance, or thwart growth and change; and

· documenting the services received by children and families and describing the transactions between clients and program staff. This information is essential for determining whether programs are operating in accordance with their own principles and guidelines and those of their funding agencies and sponsors. It is also essential for understanding variations in effectiveness within and across programs.

More generally, we believe that the most useful evaluations are those that show how and why a program worked or failed to work. To understand which aspects of a demonstration program can be applied in wider contexts, tracing the interactions among programs, clients, and community institutions is more valuable than merely providing a scorecard of effects. For this purpose, a mix of research strategies may be needed--qualitative as well as quantitative, naturalistic as well as experimental.

This report bears the burden of amplifying and justifying the position outlined above. In preparing the report the panel drew on a group of papers on outcome measurement for specific types of programs, prepared by panel members and consultants. Although the papers stimulated our thought and discussion, the report does not simply summarize the papers, nor are its conclusions a compilation of conclusions presented in the papers. Rather, the report identifies common themes and overarching ideas that do not necessarily appear in any single background paper.

The papers vary widely in scope and emphasis. The paper on health programs, by Melvin Levine and Judith Palfrey, covers a range of issues in health measurement that have arisen from the authors' experiences with a particular program, the Brookline Early Education Project. The paper by Jeffrey Travers, Rochelle Beck, and Joan Bissell offers a taxonomy of measurement approaches to day care. The paper on family service programs, by Kathryn Hewett and Dennis Deloria, concentrates on special issues raised by the unique and comprehensive characters of several federal and private programs. The paper on compensatory preschool education, by David Weikart, discusses the short- and long-term effects of some of the earliest and most important demonstration projects, concentrating particularly on the High/Scope Foundation's Ypsilanti Perry Preschool project. The paper on programs for the handicapped, by Mary Kennedy and Garry McDaniels, focuses on the concerns of federal policy makers. Finally, the paper on communication and dissemination of research results, by Dennis Deloria and Geraldine Brookins, discusses a cross-cutting issue outside the domain of outcome measurement per se, but one that is highly relevant for the use of evaluation results.

Several people were particularly helpful in the preparation of this report, and I would like to acknowledge their contributions. Barbara Finberg of the Carnegie Corporation made constructive suggestions throughout our work. Early drafts of the report were reviewed in detail by Robert Boruch and Alison Clarke-Stewart as well as by members of the Committee on Child Development Research and Public Policy. John A. Butler developed the original plan for this panel, helped organize the study, and was study director at the beginning of the project. Janie Stokes, administrative secretary for the project, typed drafts of a number of the papers and kept things generally in order.

I am fortunate to be associated with a panel that was both hardworking and enthusiastic. Many members worked beyond the call of duty, and the individual papers that panel members volunteered to coauthor were helpful in guiding our discussion and presenting issues. Finally, my special thanks go to Jeffrey Travers, who wrote the report. Originally a panel member, then study director for the project, he produced draft after draft with both grace and humor. This report has benefited enormously from his substantive insights about children's programs and his ability to organize a complex mass of information.

Richard J. Light, Chair
Panel on Outcome Measurement in Early Childhood Demonstration Programs
