The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
LEARNING FROM EXPERIENCE: Evaluating Early Childhood Demonstration Programs

Jeffrey R. Travers and Richard J. Light, Editors

Panel on Outcome Measurement in Early Childhood Demonstration Programs
Committee on Child Development Research and Public Policy
Assembly of Behavioral and Social Sciences
National Research Council

NATIONAL ACADEMY PRESS
Washington, D.C. 1982

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the Councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the Committee responsible for the report were chosen for their special competences and with regard for appropriate balance.

This report has been reviewed by a group other than the authors according to procedures approved by a Report Review Committee consisting of members of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine.

The National Research Council was established by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and of advising the federal government. The Council operates in accordance with general policies determined by the Academy under the authority of its congressional charter of 1863, which establishes the Academy as a private, nonprofit, self-governing membership corporation. The Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in the conduct of their services to the government, the public, and the scientific and engineering communities. It is administered jointly by both Academies and the Institute of Medicine. The National Academy of Engineering and the Institute of Medicine were established in 1964 and 1970, respectively, under the charter of the National Academy of Sciences.

Library of Congress Cataloging in Publication Data

Main entry under title:
Learning from experience.
Includes bibliographical references.
1. Child development--United States--Addresses, essays, lectures. 2. Education, Preschool--United States--Addresses, essays, lectures. I. Travers, Jeffrey R. II. Light, Richard J. III. National Research Council (U.S.). Panel on Outcome Measurement in Early Childhood Demonstration Programs.
LB1115.L33  370.15'2  81-22595
ISBN 0-309-03232-6  AACR2

Available from
NATIONAL ACADEMY PRESS
2101 Constitution Avenue, N.W.
Washington, D.C. 20418

Printed in the United States of America

Panel on Outcome Measurement in Early Childhood Demonstration Programs

Richard J. Light (Chair), Graduate School of Education and J.F.K. School of Government, Harvard University
Rochelle Beck, Children's Defense Fund, Washington, D.C.
Joan S. Bissell, Employment Development Department, Sacramento, California
Urie Bronfenbrenner, Department of Human Development and Family Studies, Cornell University (member until 1980)
Geraldine Kearse Brookins, Department of Psychology, Jackson State University
Anthony S. Bryk, Graduate School of Education, Harvard University
Dennis J. Deloria, Administration for Children, Youth, and Families, U.S. Department of Health and Human Services
William S. Hall, Center for the Study of Reading, University of Illinois
Robert W. Hartman, The Brookings Institution, Washington, D.C.
Pablo Navarro-Hernandez, Department of Anthropology, Inter-American University of Puerto Rico (member until 1980)
Barbara Heyns, The Center for Applied Social Science Research, New York University
Melvin D. Levine, Department of Pediatrics, Children's Hospital Medical Center, Boston, Massachusetts
Garry L. McDaniels, General Accounting Office, Washington, D.C.
Samuel Messick, Educational Testing Service, Princeton, New Jersey
David P. Weikart, High/Scope Educational Research Foundation, Ypsilanti, Michigan
Lee J. Cronbach (ex officio), Member, Committee on Ability Testing, School of Education, Stanford University

Staff

Jeffrey R. Travers, Consultant/Study Director
Janie Stokes, Administrative Secretary

Committee on Child Development Research and Public Policy

Alfred J. Kahn (Chair), School of Social Work, Columbia University
Eleanor E. Maccoby (Vice Chair), Department of Psychology, Stanford University
Urie Bronfenbrenner, Department of Human Development and Family Studies, Cornell University
John P. Demos, Department of History, Brandeis University
Rochel Gelman, Department of Psychology, The University of Pennsylvania
Joel F. Handler, School of Law, University of Wisconsin
Eileen Mavis Hetherington, Department of Psychology, University of Virginia
Robert B. Hill, National Urban League, Inc., Washington, D.C.
John H. Kennell, School of Medicine, Case Western Reserve University
Frank Levy, The Urban Institute, Washington, D.C.
Richard J. Light, Graduate School of Education and J.F.K. School of Government, Harvard University
Laurence E. Lynn, Jr., J.F.K. School of Government, Harvard University
Robert H. Mnookin, School of Law, Stanford University
William A. Morrill, Mathematica Policy Research, Inc., Princeton, New Jersey
Richard R. Nelson, Department of Economics, Yale University
Constance B. Newman, Newman and Hermanson Company, Washington, D.C.
John U. Ogbu, Department of Anthropology, University of California, Berkeley
Arthur H. Parmelee, Department of Pediatrics, University of California, Los Angeles
Harold A. Richman, School of Social Service Administration, University of Chicago
Roberta Simmons, Department of Sociology, University of Minnesota
Jack L. Walker, Institute of Public Policy Studies, University of Michigan
Robin M. Williams, Jr., Department of Sociology, Cornell University

Wayne Holtzman (ex officio), Chair, Panel on Selection and Placement of Students in Programs for the Mentally Retarded; Hogg Foundation for Mental Health, University of Texas
Sheila B. Kamerman (ex officio), Chair, Panel on Work, Family, and Community; School of Social Work, Columbia University

Contents

Preface ix

PART 1: REPORT OF THE PANEL 1

Evaluating Early Childhood Demonstration Programs 3
  Introduction, 3
  Programs for Children and Families, 1960-1975, 7
  The Program and Policy Context of the 1980s, 11
  Implications for Outcome Measurement and Evaluation Design, 19
  Implications for the Evaluation Process, 43
  References, 49

PART 2: PAPERS 55

The Health Impact of Early Childhood Programs: Perspectives from the Brookline Early Education Project
  Melvin D. Levine and Judith S. Palfrey 57

Measuring the Outcomes of Day Care
  Jeffrey R. Travers, Rochelle Beck, and Joan Bissell 109

Informing Policy Makers About Programs for Handicapped Children
  Mary M. Kennedy and Garry L. McDaniels 163

Preschool Education for Disadvantaged Children
  David P. Weikart 187

Comprehensive Family Service Programs: Special Features and Associated Measurement Problems
  Kathryn Hewett, with the assistance of Dennis Deloria 203

The Evaluation Report: A Weak Link to Policy
  Dennis Deloria and Geraldine Kearse Brookins 254

Preface

Late in 1978 the National Research Council, with support from the Carnegie Corporation, established the Panel on Outcome Measurement in Early Childhood Demonstration Programs, to operate under the aegis of its Committee on Child Development Research and Public Policy. The panel was established in response to a widely perceived need to review and reshape the evaluation of demonstration programs offering educational, diagnostic, and other services to young children and their families. The panel's mandate was to examine the objectives of contemporary demonstration programs; to appraise the measures currently available for assessing achievement of those objectives, particularly in light of their relevance for public policy; and to recommend new approaches to evaluation and outcome measurement.

The members of the panel construed their mandate broadly. Recognizing the increasing diversity of programs aimed at young children and their families, we examined programs providing a wide range of services--not just preschool education (probably the predominant focus of demonstrations in the past) but also day care, health care, bilingual and bicultural education, services to the handicapped, and various family support services. Because we wanted to contribute to the future of evaluation more than to comment on its past, we deliberately included services and issues that have not been heavily studied but are likely to be salient in the 1980s and beyond. Rather than confine our attention to relatively small-scale, carefully controlled demonstrations, such as the preschool programs that were precursors of Head Start in the 1960s, we also examined larger, less controlled, policy-oriented demonstrations of novel service delivery systems. We paid explicit attention to the problem of

implementing successful demonstrations on a large (state or national) scale. While we tended to focus on publicly funded programs for children from low-income families, we also examined privately funded programs and programs that serve children without regard to income.

The panel examined questions that went considerably beyond "outcome measurement" as that term is usually conceived. We paid relatively little attention to the metric properties of particular instruments, concentrating instead on the broader context of outcome measurement--on the kinds of information that would be most useful in shaping policies and program practices. This inquiry led to consideration not only of outcomes but also of the services delivered by programs, of day-to-day transactions between program staff and clients, and of interactions between programs and their surrounding communities. Finally, we found it impossible to discuss outcome measures without also considering the kinds of research designs and evaluation processes in which measures might most usefully be embedded.

The panel itself was a diverse group, including persons trained in psychology, sociology, anthropology, economics, medicine, and statistics--some of them from the academic community, some from state and federal governments, and some from private research organizations. Although there were, of course, differences in emphasis and differences of opinion about specific points, it is significant that these diverse members agreed on the panel's basic message.

An important part of the panel's message involves programs themselves: the diversity of services they render, the clients they serve, and the policy issues they raise. As members of the panel pooled their knowledge about particular programs, we began to see that systematic examination of the characteristics of contemporary demonstration programs, and of their attendant policy issues, would go a long way toward pinpointing the inadequacies of existing measures and designs as well as point toward needed improvements. Our emphasis on program realities and policy concerns is not intended as advocacy for specific programs or policies; it is intended solely to highlight issues of design and measurement. In this connection, we attempted to balance attention to the benefits of children's programs with attention to measurement of their costs, administrative burdens, and unintended consequences.

We by no means want to imply that evaluators must confine themselves to questions posed by program managers and

policy makers. On the contrary, one of the most important functions of evaluation is to raise new questions, and one of its major responsibilities is to reflect the concerns and interests of children, parents, and others affected by programs. Nonetheless, sensitivity to issues of public policy and program management, in addition to professional expertise in child development, family functioning, or research methodology, will probably increase the evaluator's ability to identify significant questions that have previously escaped notice.

Existing evaluations have tended to focus on how programs influence the development of individual children. Although the underlying concern of many programs has been long-term effects, in practice most evaluations have had to measure immediate impact--the "short, sharp shock," as one member of the panel put it--often by means of standardized measures of cognitive ability and achievement. A panel composed primarily of researchers might be expected to urge a search for new measures in the "socioemotional" domain and to recommend design and funding of long-term, longitudinal studies of program effects. Although we recognize the value of such measures and studies for addressing certain scientific and practical questions, we see them as part of a larger mosaic of potential measures and designs, addressing a much wider range of questions. No single evaluation can examine every aspect of a program's functioning. On the contrary, resource constraints and the burden that evaluation imposes on programs and clients necessitate careful selection of questions to be answered and methods to be used. However, the choice of measures and of research designs should be based on rational assessment of the full range of possibilities, in light of the goals and circumstances of the particular program and evaluation in question--not on grounds of convention or expediency.

To this end the panel urges that evaluators give careful consideration to several types of information that lie outside the domain of developmental effects but that can potentially illuminate the working of programs as well as program outcomes in the broadest sense. Specifically, we call attention to the importance of:

characterizing the immediate quality of life of children in demonstration programs, particularly day care and preschool education, in which they spend a large part of the day;

describing how programs interact with and change the broader social environment in which a child grows or a family functions--the web of formal and informal institutions (extended families, schools, child welfare agencies, and the like) that can potentially sustain, enhance, or thwart growth and change; and

documenting the services received by children and families and describing the transactions between clients and program staff.

This information is essential for determining whether programs are operating in accordance with their own principles and guidelines and those of their funding agencies and sponsors. It is also essential for understanding variations in effectiveness within and across programs.

More generally, we believe that the most useful evaluations are those that show how and why a program worked or failed to work. To understand which aspects of a demonstration program can be applied in wider contexts, tracing the interactions among programs, clients, and community institutions is more valuable than merely providing a scorecard of effects. For this purpose, a mix of research strategies may be needed--qualitative as well as quantitative, naturalistic as well as experimental.

This report bears the burden of amplifying and justifying the position outlined above. In preparing the report the panel drew on a group of papers on outcome measurement for specific types of programs, prepared by panel members and consultants. Although the papers stimulated our thought and discussion, the report does not simply summarize the papers nor are its conclusions a compilation of conclusions presented in the papers. Rather the report identifies common themes and overarching ideas that do not necessarily appear in any single background paper.

The papers vary widely in scope and emphasis. The paper on health programs, by Melvin Levine and Judith Palfrey, covers a range of issues in health measurement that have arisen from the authors' experiences with a particular program, the Brookline Early Education Project. The paper by Jeffrey Travers, Rochelle Beck, and Joan Bissell offers a taxonomy of measurement approaches to day care. The paper on family service programs, by Kathryn Hewett and Dennis Deloria, concentrates on special issues raised by the unique and comprehensive characters of several federal and private programs. The paper on compensatory preschool education, by David Weikart, discusses the short- and long-term effects of some of the earliest

and most important demonstration projects, concentrating particularly on the High/Scope Foundation's Ypsilanti Perry Preschool project. The paper on programs for the handicapped, by Mary Kennedy and Garry McDaniels, focuses on the concerns of federal policy makers. Finally, the paper on communication and dissemination of research results, by Dennis Deloria and Geraldine Brookins, discusses a cross-cutting issue outside the domain of outcome measurement per se, but one that is highly relevant for the use of evaluation results.

Several people were particularly helpful in the preparation of this report, and I would like to acknowledge their contributions. Barbara Finberg of the Carnegie Corporation made constructive suggestions throughout our work. Early drafts of the report were reviewed in detail by Robert Boruch and Alison Clarke-Stewart as well as by members of the Committee on Child Development Research and Public Policy. John A. Butler developed the original plan for this panel, helped organize the study, and was study director at the beginning of the project. Janie Stokes, administrative secretary for the project, typed drafts of a number of the papers and kept things generally in order.

I am fortunate to be associated with a panel that was both hard working and enthusiastic. Many members worked beyond the call of duty, and the individual papers that panel members volunteered to coauthor were helpful in guiding our discussion and presenting issues. Finally, my special thanks go to Jeffrey Travers, who wrote the report. Originally a panel member, then study director for the project, he produced draft after draft with both grace and humor. This report has benefited enormously from his substantive insights about children's programs and his ability to organize a complex mass of information.

Richard J. Light, Chair
Panel on Outcome Measurement in Early Childhood Demonstration Programs