
Learning from Experience: Evaluating Early Childhood Demonstration Programs (1982)

Chapter: The Evaluation Report: A Weak Link to Policy


The Evaluation Report: A Weak Link to Policy

Dennis Deloria and Geraldine Kearse Brookins

As secretary of the U.S. Department of Health, Education, and Welfare (HEW) from 1977 to 1979, Joseph Califano personally requested many of the evaluations that were carried out by the HEW Office of the Inspector General. Among the hundreds of department priorities, issues commanding Califano's direct attention were of greater than usual importance. Following his request, the evaluation staff of the Office of the Inspector General would spend six or eight months gathering data, often traveling to many regional offices and local projects across the country. When data collection and analyses were completed, the inspector general and his staff reported the findings directly to Califano. Califano stipulated that the findings be summarized in a written report no longer than 15 pages and summarized orally in 20 minutes, followed by 40 minutes for his questions. From this brief interchange he decided what action, if any, should result from the months of evaluation.

Some dearly held evaluation practices are called into question when the secretary of a major department permits but 15 pages and 20 minutes for reporting important findings, while evaluation reports about federal programs and policies often run 100 to 300 pages. Given this discrepancy, it seems necessary to reexamine the contents and organization of such reports. By doing so we may find ways to refocus them to better meet the needs of policy makers such as Califano.

Here we first discuss the work of policy makers and some reasons why evaluation reports tend to be long. We then examine three policy reports to determine their similarities in meeting the needs of policy makers. Finally, we summarize 10 features that appear to make evaluation reports more useful.

POLICY MAKERS: PEOPLE IN A RUSH

Managers' activities are generally characterized by brevity, variety, and fragmentation, claimed Mintzberg (1973) in a broad review of studies examining the nature of managerial work. He pointed out that managers' jobs are remarkably alike, whether the managers are senior or middle managers in business, U.S. presidents, government administrators, production supervisors, foremen, or chief executives. He found the brevity of managers' activities surprising: telephone calls averaged 6 minutes, unscheduled meetings averaged 12 minutes, and work sessions averaged 15 minutes. Brevity was also reflected in the treatment of mail: executives expressed dislike for long memos and skimmed most long reports and periodicals quickly. Most surprising, significant activity was interspersed with the trivial in no particular order, so managers had to be prepared to shift moods quickly and frequently.

Mintzberg found strong indications that managers preferred the more active elements of their work: activities that are current, specific, and well defined. Among written communications, they seemed to prefer those dealing with active, concrete, live situations. The managers he studied typically received about 20 periodicals and many reports per week. "Most were skimmed (often at the rate of 2 per minute), and an average of only 1 in 25 elicited a reaction," stated Mintzberg (1973:39).

From this it would appear that to be effective, or even to be thoughtfully considered, evaluation reports written for policy makers must make some carefully thought-out concessions to such a frenzy of executive activity.

EVALUATORS: PEOPLE CONCERNED WITH METHODS

Evaluators are typically social scientists, with extensive training in the scientific method. Central to that training is the notion that any statement of evaluation or research findings must be accompanied by a careful description of the precise methods used, so other scientists can replicate them to verify the findings. By training and scientific necessity, evaluators devote a substantial part of most reports to detailed descriptions

of the methods used. Such reports typically follow the classical "dissertation" style, with chapters on background, purpose, hypotheses, subjects, design, measures, data collection, statistical analysis, findings, and discussion. The many variations of this style share one essential characteristic: Their fundamental organization emerges from the scientific method. Practically, this dictates that the overall report format be organized around the methods used, with findings embedded as a subsection within.

The dissertation-style report may contain facts needed by policy makers, but they are usually fragmented because of the need to respect the conventions of science. For example, the details needed to answer a single policy question may be scattered across several chapters--some in the chapter describing the subjects, some in the discussion of child measure outcomes, some in the discussion of parent measure outcomes, some in the discussion of staff interview outcomes, and some in the chapter presenting overall findings. The burden falls on the policy maker to locate the fragments and piece them together to answer complex questions.

TWO REPORTS ARE NEEDED: ONE SCIENTIFIC, ONE POLICY

The methods-oriented evaluation report is necessary to uphold the conventions of science, but a policy-oriented report seems necessary to reach policy makers. Coleman (1972) elegantly described the relationship: The original policy questions must be translated into questions that can be addressed by the methods of science; at the conclusion of the scientific process the findings must be translated back into the world of policy. Viewed in this way, most evaluations stop short of completion if the final report is a conventional, methods-oriented one. Only a rare policy maker would spend the time and effort needed to extract policy information from a methods-oriented report while being bombarded by the dizzying activity described by Mintzberg.

An alternative would be a brief, policy-oriented report that describes concrete action items in language understandable to policy makers. Passages detailing the methods used to conduct the evaluation would be removed so the policy maker would not have to sift through them to locate passages with findings of interest. Policy questions and their answers would form the major organizing theme of the report.

The jargon of evaluation would be avoided. Policy makers might well consult such a report in making important decisions--at present a too-rare occurrence.

Three Sample Policy Reports

To explore our hunches we examine three policy reports that embody many of the features needed by policy makers. All three were written to directly inform or influence policy, and they advocate specific policy actions. The authors appear familiar with matters of policy and policy reporting. They are situated differently in relation to the policy makers they attempt to inform: some work in a federal agency responsible for administering programs, some in a private research consulting firm, and some in a child advocacy group.

The reports differ in important ways. One report presents original data only, another presents findings from other studies only, and one presents some of each. One looks only at the process of implementing a major piece of legislation, another at the effects on children of existing school enrollment practices, and another looks at both program process and effects on children. One project had a budget of more than $7 million, another less than 5 percent of that, and one used existing staff in a federal agency. One was requested by Congress, another by a program administration agency, and one was undertaken solely through private initiative. This diversity makes their similarities even more significant.

Although the three reports have certain exemplary features, they are not without faults, some of which may be serious. Whatever faults they possess, however, do not detract significantly from the policy-oriented characteristics we are interested in. This paper examines and emphasizes the strengths of these reports, rather than their faults, in the belief that this strategy can more directly contribute to future improvements.

This paper does not attempt to assess the actual policy impacts that these reports have already had, nor does it lay out a sequence of events to increase policy impact. Past experience suggests that policy reports, no matter how well written, will not have much influence without deliberately organized support of one kind or another. Such a topic lies outside the intent of this paper.

Our examination is based on simple inspection rather than quantitative analysis. It should be considered a search for hypotheses to be confirmed, rather than a confirmation itself. To the extent our conclusions appeal to common sense, we consider them sufficient. To orient our examination we looked to the reports for answers to four questions:

1. What policy perspective did the authors adopt?
2. What policy questions did they address?
3. What methods did they use to answer the questions?
4. What format of presentation did they use?

There are many smaller questions buried in each of these; the answers are implicit in the narrative. From this examination have evolved some guidelines that may be of use to others preparing policy reports.

Report 1: Progress Toward a Free Appropriate Public Education

Policy Perspective

This report (U.S. Office of Education, 1979) is the first of a series of annual reports to Congress on progress in the implementation of P.L. 94-142, the Education for All Handicapped Children Act of 1975. The act requires reports to be delivered to Congress each January. The Bureau of Education for the Handicapped (BEH, now located in the U.S. Department of Education), which prepared the report, is the agency responsible for carrying out provisions of the act. This, of course, gives the authors a vested interest in the findings, since their purpose is to report BEH's success or lack of success in implementing the act. Despite the potential for a conflict of interest, the report maintains an objective tone throughout; problems as well as successes in implementation are highlighted. The report does not stress future policy actions, but its discussions of problems often include descriptions of corrective actions initiated by BEH or references to the need for additional money or work.

Although BEH wrote the report mainly for Congress, the authors explicitly kept in mind many others who might use the findings, such as federal administrators in HEW, the Office of Education, and BEH; state directors of special education and state evaluators; leaders of professional associations and advocacy groups; and members of the academic community (U.S. Office of Education, 1979:77).

The report addresses issues of importance to federal policy by virtue of the source of its mandate, the position of its authors, and its stated audiences. Depending on the nature and seriousness of its findings, the report could influence many kinds of decisions: federal legislative authorizations and appropriations, federal regulations and guidelines, federal program implementation practices, training and technical assistance, and similar state (and local, where appropriate) decisions. Moreover, massive funds are involved in implementing the act. For fiscal 1979 the federal appropriation was $408 million, and the states projected outlays up to 30 times as great, for a possible total of $24 billion nationwide (U.S. Office of Education, 1979:113). The act affects every state and every local school district, involving thousands of educators and millions of children.

Policy Questions

Six policy questions are addressed in the report:

· Are the intended beneficiaries being served?
· In what settings are the beneficiaries being served?
· What services are being provided?
· What administrative mechanisms are in place?
· What are the consequences of implementing the act?
· To what extent is the intent of the act being met?

All six are closely tied to the concerns of Congress and the requirements of the act. Their final wording was arrived at by a task force, which invited consultation and review from all persons directly concerned with administration of the act. None of the questions explicitly inquires about the changes in children resulting from implementation of the act; instead, they explore the process of providing required services and whether the intended children are being served.

Each of these questions implies a host of subordinate questions, which are discussed either directly or indirectly in the narrative. For example, under the question "Are the intended beneficiaries being served?" the main issue appears to be "How many eligible children are not being served?" Another subordinate question examines inconsistencies among states in the percentages

of children served and the reasons for the differences. Another asks if only eligible children are being served. None of the major questions directly mentions costs, although costs are prominently discussed in many of the subordinate questions.

Methodology

This report summarizes data from other sources rather than presenting original data. Sixteen sources are cited, although the body of the report says little about the studies or their methods. Readers wishing more information are referred to notes, appendixes, or the studies themselves; references to them are made mainly through footnotes or credits under tables and figures. By thus removing most discussion of the supporting sources, the full emphasis of the report is placed on substantive issues, producing a high ratio of substantive findings to supporting explanation.

The policy questions are stated in general terms, but each section of the report begins by clarifying the intent of its question. The clarifications are taken directly from language in the act or related committee print, and the authors provide additional interpretation when needed. They cite findings from previous studies or court rulings when specific problem areas need to be emphasized. This results in a thorough contextual description for readers, setting clear expectations for the kinds of findings needed to answer the questions. The authors then present and discuss data from the appropriate sources. The report often points out discrepancies or conflicting findings and isolates these areas for examination in future studies.

Throughout the report the methodology is subordinated to policy considerations. For example, historical narrative and case examples are interwoven with statistical tabulations in answering a single question. This is an improvement on the frequent practice of grouping statistical results in one part of the report, historical background in another, and case examples in a third; such fragmentation forces the reader into several disconnected sections of the report for partial answers to a single question. The BEH report avoids this problem.

Format

The BEH report addresses six policy questions; the questions are used as chapter headings to organize the entire report. This permits the reader to go directly to the questions of interest and find all the needed information in one place.

An executive summary, which can be read in about 15 minutes, provides an overview of the report. A reader wishing to follow up one of the statements in the executive summary can find the corresponding sections of the report fairly easily. Two improvements would have made it even easier to locate them: page references following statements in the summary and a more complete table of contents. Policy-related subheadings are used throughout the report and could easily have been listed in the table of contents.

Most topics in the report are presented in self-contained, well-labeled sections that are readable in 15 minutes or less. This permits rapid access to the authors' conclusions in any area of the report, eliminating the need to read the report sequentially from cover to cover for answers to specific subordinate questions. This vastly improves the accessibility of information compared with more traditional evaluation reports and saves much time and work for the reader.

The readability of the report is lower than anticipated, measuring near the "very difficult" score of Flesch's (1949) readability formula. A close look at the language in the report shows that there is just as much jargon as in the typical evaluation report, but with one important difference: The jargon is that of policy makers, not of evaluators. Much of the language derives from the act itself and from related legislative processes; some originates in the discipline of special education; the rest originates in the federal and state processes for implementing the act. Most of this jargon, unlike evaluation jargon, is likely to be familiar to the policy makers who will read the report or its summary. The report could nonetheless benefit from more deliberate use of plain English.

Statistical presentations were kept simple throughout, and graphic displays were used frequently. No special training is required of the reader to interpret the statistical data. Only the most elementary statistics were presented: counts, percentages, ranks, and costs. Any backup materials that did not directly assist in answering the policy questions were relegated to appendixes or referenced in other sources. Throughout the report, however, sufficient information was included to eliminate almost all need for reference to the appendixes or sources in order to understand the report.
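(A note for readers unfamiliar with the measure cited above: Flesch's reading-ease score, as published, is computed from average sentence length and average word length in syllables. The formula below restates it; the "very difficult" label corresponds to scores of roughly 0 to 30 on the 100-point scale, typical of academic and scientific prose, while plain English scores above about 60.)

$$ \text{Reading Ease} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}} $$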

Report 2: Children at the Center

Policy Perspective

Children at the Center (Abt Associates, Inc., 1979) is the final report of the National Day Care Study (NDCS), a large-scale study of the costs and effects of day care. NDCS was initiated in 1974 by the Office of Child Development, now the Administration for Children, Youth, and Families (ACYF). This large-scale research project was designed to "investigate the costs and effects associated with variations of regulatable characteristics of center day care--especially care giver/child ratio, group size, and care givers qualifications" (Abt Associates, Inc., 1979:xxv). These three characteristics are generally considered to be central determinants of quality in center day care and are key factors in state and federal regulations.

One of the central issues of federal policy in subsidized day care is the relationship of day care costs to its effects on children. Undergirding this issue are a number of assumptions regarding the characteristics of center care, the quality of care, and the developmental well-being of children in day care settings. ACYF was particularly committed to the assumption that ". . . developmental well-being and growth of children (could) be fostered in a day care setting" (Abt Associates, Inc., 1979:xxvi). Hence it seems the NDCS was implemented to determine whether federal regulations could be developed to incorporate ACYF's commitment to quality without nullifying the indirect economic benefits that have motivated day care legislation.

Although ACYF was the primary influence on the structure of the study, there were other concerns and issues as well. The Federal Interagency Day Care Requirements lacked empirical evidence to support the assumptions on which the requirements were based, and this lack to a large degree motivated the structure of the NDCS. Few data were available on a large-scale basis regarding such characteristics as group size, staff/child ratio, and care-giver qualifications, their effects on children, and the relationship of costs to effects--all of which are policy issues. The NDCS combined some of the concerns of ACYF and the needs of the Federal Interagency Day Care Requirements into one study by examining the effectiveness of varying center day care arrangements while taking into consideration such demographic variables as region, state, and socioeconomic group. At least with respect to center care, it was thought that the results of such a

study could provide essential information for policy reform regarding standards and regulations.

The report speaks to several policy audiences. It is explicitly addressed to administrators within ACYF and to those preparing the Federal Interagency Day Care Requirements. It is also addressed implicitly to state and local governments that regulate day care licensing, monitoring, and standards. In addition, the report can be viewed as being addressed to Congress, which approves the appropriations for federally funded day care.

Policy Questions

Three major policy questions were addressed in this report (Abt Associates, Inc., 1979:13):

· How is the development of preschool children in federally subsidized day care centers affected by variations in staff/child ratio, care-giver qualifications, group size, and other regulatable center characteristics?
· How is the per child cost of federally subsidized, center-based day care affected by variations in staff/child ratio, care-giver qualifications, group size, and other regulatable center characteristics?
· How does the cost-effectiveness of federally subsidized, center-based day care change when adjustments are made in staff/child ratio, care-giver qualifications, group size, and other regulatable center characteristics?

The answers to these questions were intended to play a major role in decisions about current regulations and practices affecting day care centers serving federally subsidized preschool children. Adequate answers require that the policy variables have a direct relationship to the major policy issues and questions. Staff/child ratio and care-giver qualifications were assumed to affect children's cognitive and social development. These two characteristics of day care were also known to have a significant impact on the cost per child of day care. Group size was specified in the Federal Interagency Day Care Requirements and therefore was of interest. Given the variety of issues regarding day care, federal involvement, and regulation, an attempt to deal with more than three major policy questions would merely have diluted the report's policy effectiveness.

The policy issues are clearly identified and, notably, so are issues that are not a focus of the study. The authors' disclaimers are significant because they further delimit the research

being considered and restrict the readers' attention to the proper context. By calling attention to issues that are not a focus, the authors demonstrate a recognition that there are other important questions that could be addressed.

Methodology

One of the major challenges of a study with national policy significance is the selection of a sample. To this end the evaluators carefully and deliberately selected a sample with appropriate classroom composition, care-giver qualifications, and racial composition. Fifty-seven centers with such diversity were selected within three sites. Selection of sites was based on four general criteria: The sites had to have a sufficient number of eligible centers, represent different geographic regions of the country, show different demographic and socioeconomic characteristics, and exhibit regulatory diversity. The actual selection of sites resulted from an analysis that grouped urbanized areas according to measures of socioeconomic status. The analysis yielded six prototypical cities within three regions--South, North, and West. On the basis of feasibility of study implementation, the final choice of sites was Atlanta, Detroit, and Seattle.

In one phase of the study, a quasi experiment was executed to compare three groups of centers: treated high-ratio centers, matched low-ratio centers, and unmatched high-ratio centers. The authors point out that the staff/child ratio was selected for manipulation because of its critical policy relevance. The quasi experiment included only 49 of the 57 centers in the total sample.

Given the policy questions involved, it was important to employ measures of classroom composition and staff qualifications that were reliable and valid. Classroom composition was defined in terms of number of care givers per classroom, group size, and staff/child ratio. These particular variables were measured by both direct observation and schedule-based measures; however, only measures based on direct observation were used in the effects analyses. Information regarding care-giver qualifications was gathered through interviews with care givers. Measures based on direct observation were also used to determine teacher behavior and child behavior. In addition, standardized tests were used to measure the impact of center characteristics on aspects of school

readiness. Parent interviews were also conducted to obtain information on parental involvement and family use of center services. These measures were used primarily to assess quality of care at the centers--the outcomes.

The data were subjected to multivariate statistical analyses, but the findings that link classroom characteristics to measures of quality and measures of costs are correlational. The statistical strengths of the reported relationships are sufficient for them to be used as significant indicators of both quality and costs. The researchers in the NDCS used methodological procedures that were sophisticated and appropriate to the study's goals and mandate.

Format

The authors present the policy-relevant findings at the beginning of the volume, allowing the reader to become aware of the major findings immediately. Policy recommendations, which stem directly from the findings, are concretely stated and provide a contextual framework that encourages the policy maker to consider actual policy decisions. The recommendations are grouped by area, providing the reader with a logical progression. For example, the authors present first the findings for preschool children, then the findings for infants and toddlers. After the findings, the authors recommend regulations and guidelines for both groups. The summary gives suggestions for fiscal policy.

Unlike the authors of many research and evaluation reports, the authors of Children at the Center do not assume that all readers are familiar with key terms used in the study; they therefore provide a glossary at the beginning of the volume. This feature guards against misinterpretation of terms and results and, hence, of implications on the part of the reader. Since the glossary precedes the executive summary, the reader does not have to turn to a specific section of the volume to determine how the variables were defined in order to place the findings and recommendations within the proper context; thus, time is saved for the policy-making reader.

All information is presented in discrete chunks, each of which represents a whole in itself. Specifically, a reader can glean from the executive summary the major findings regarding day care and federal policy. Or, to gain some insight into the manner in which regulatory language should be constructed, the reader could turn to that section and obtain information in a few minutes.

Just as written information is presented in discrete chunks, most of the data are presented in bivariate tables that are concrete presentations of statistical relationships. This kind of uncomplicated presentation seems more likely to be retained by the reader than are complex multivariate tabular presentations.

Report 3: Children Out of School in America

Policy Perspective

Children Out of School in America (Children's Defense Fund, 1974) is a comprehensive national study of the nonenrollment of school-age children, conducted in 1973 and 1974 by the Children's Defense Fund, a child advocacy organization. Inspired by a similar study conducted by the Massachusetts Task Force on Children Out of School, the study was initiated by the Children's Defense Fund rather than by any particular federal or state agency. It was principally addressed to HEW's Office for Civil Rights but has wide applicability to other federal agencies, state and local governments, school districts, and parent advocacy groups. The findings are presented in three categories: barriers to attendance, children with special needs and misclassification, and school discipline. Specific recommendations are set forth for the federal government, state and local governments, and parents and children. Inherent in the recommendations is a strong advocacy position: The authors advocate that specific actions be taken within the federal government, within state and local governments, and among parents and children regarding the exclusion of children from school.

Policy Questions

The major issue in this report is the denial of a basic education to any child by schools, through either overt or covert practices and procedures. While the policy questions are not explicit in the report, one can identify at least one major policy question and three subsidiary ones:

· How do exclusionary practices (overt and covert) of schools and school systems affect the education of a significant proportion of school-aged children?
· How does the lack of specific procedures for individual assessment and placement affect the education of all children?
· What is the relationship between school attendance and various school charges for essential educational services and materials?

· How are suspensions and other disciplinary actions of schools mediated by the race, ethnicity, and socioeconomic status of school-aged children?

The exploration of these questions provided a rich data base for policy makers at the federal, state, and local levels. Indeed, such exploration fostered more specific questions to be answered by a number of agencies at these levels of government. The study also provided a basis for active advocacy on behalf of children being excluded from school.

Methodology

This report uses both 1970 census data on school nonenrollment and survey data obtained via a questionnaire developed by the Children's Defense Fund. The survey instrument was used to augment the census data as well as to address issues of special policy concern to the researchers. More than 6,500 households were represented in the study. The data were collected in 30 areas of the country within various geographic regions that encompassed 8 states and the District of Columbia. In addition, school principals and superintendents were interviewed about nonenrollment, classification procedures, suspensions, and other disciplinary actions.

The data analyses include frequency counts and percentages, with comparisons being drawn between census data and the Children's Defense Fund data. These comparisons are presented in single, straightforward tables. Descriptions of specific methodological procedures appear in an appendix.

Format

The major findings of this study are reported at the beginning of the volume. This allows the reader to become immediately aware of the major issues and the scope of the work required to remedy the problems at issue. Most of the information is organized in short chapters that can be read quickly. In the case of longer chapters, the subordinate sections can be read within a short time, facilitating access to particular issues. For example, to understand the ways in which children are misclassified for special programs, the reader could turn to that section in the chapter on exclusion of children with special needs and thereby quickly become familiar with the subject.

The document is written in simple, nontechnical language and is basically organized around the three main issues: barriers to school attendance, exclusion of

children with special needs, and school discipline and its exclusionary impact on students. The role of statistics is minimal; the technical information is placed in appendixes. The interspersal of case history and anecdotal data with survey and census data is a particularly effective mechanism for holding the reader's attention and focusing it on specific issues.

MEETING POLICY MAKERS' NEEDS

These three reports share a few features that set them apart from methods-oriented reports. The similarities are not fully consistent across reports, but for purposes of discussion there appear to be about 10 from which we can learn.

1. The questions addressed are clearly linked to real policy decisions. In each report the principal questions arose from a policy context: debates about day care regulations, progress toward implementation of new legislation, or inequities keeping children out of school. Policy makers and people affected by these issues were directly involved in formulating the questions in each case. They participated in meetings to explore and define the questions, and the questions determined the evaluation methods used.

2. At least some questions in each report consider the costs affecting policy. Nearly all policy decisions involve cost (or other resource) trade-offs, either directly or indirectly. When appropriate cost data are presented in a policy report, its possible influence is greatly increased. The cost data can be obtained in different ways: In the National Day Care Study, cost data were collected concurrently with the process and outcome data; in the BEH report to Congress, cost data were estimated from several outside sources.

3. Policy questions form the central organizing theme of the report. The overall organization of these reports contrasts markedly with methods-oriented reports. A glance at the three tables of contents makes the policy orientation immediately apparent. They list the policy questions examined in a reasonably direct fashion, immediately immersing the reader in the substantive issues. This reflects the fact that each chapter typically discusses a single policy question or a small related subset of questions.

4. The reports describe enough of the policy context to permit informed interpretation without outside sources. All three reports went to great lengths to present readers with the broad policy perspectives surrounding specific questions. This permits ready interpretation of the findings by readers who are not already familiar with the policy or decision-making context.

5. Evaluation methodology is played down. The evaluation methods used to answer the questions are scarcely mentioned in the three reports. This is not to say that the studies were not built on solidly crafted methods, for by and large they were; rather, the authors chose not to present details of methodology in these reports, which were intended for policy makers. Quite likely the omission is insignificant, considering the purposes of the three reports, since few policy makers possess the training to interpret technical methods. Moreover, the reports provide adequate references to other sources (often appendixes or other volumes accompanying the report) that detail the methods, so readers who wish to can learn more.

6. Reports begin with a brief summary of essential findings. Usually called an executive summary, it permits readers to quickly learn the essential conclusions of the report and to decide which other parts they want to read. It seems important for the summary to be brief (10 pages or less). Brickell et al. (1974) interviewed top-level officials from several government agencies and found they preferred 1- to 10-page reports to longer ones. They commonly requested a short report for themselves and a longer one for their subordinate staff; their subordinate staff in turn requested short reports for themselves and longer reports for their subordinates, and so on down the hierarchy.

7. Backup narrative for the executive summary is "chunked" into easily locatable brief segments throughout the body of the report. The reports are generally organized such that a reader who wants to learn more about something in the executive summary can find the backup narrative easily and read it quickly. Throughout most of the reports, information is organized into self-contained, short chunks. This lets a reader quickly follow up on one or two findings of particular interest, without requiring cover-to-cover reading. Authors can usually assume that no policy maker will read their report from cover to cover; rather, policy makers will be selective, reading the executive summary and little else

unless it is of high interest, easy to find, and quick to read. Every incremental improvement in accessibility and readability increases the amount of the report likely to be read by the policy maker and, hence, increases the likelihood of policy impact.

8. Only simple statistics are presented. For the most part, statistical presentations in the three reports included only counts, percentages, ranks, averages, ranges, costs, and bivariate tables or graphs. If complex statistical findings cannot be reduced to these simpler forms, they probably will have little meaning to policy makers. Few of them are trained in advanced statistics, and the elegance of advanced techniques may escape them. Moreover, liberal use of statistics will often obscure other information in the report because of the demands it places on the reader.

9. Where jargon is used, it is the jargon of policy makers, not of evaluators. We thought the three reports would minimize jargon to achieve maximum clarity in presenting findings, but to our surprise they did not--they were cluttered with jargon throughout. In contrast to methods-oriented evaluation reports, however, their jargon was taken from policy makers' language, not evaluators' language. Policy makers are likely to comprehend it easily. The use of policy jargon may even enhance the credibility of these reports for many policy makers, by implying that the evaluators understand the issues well enough to have become familiar with the appropriate language.

10. Concrete recommendations for action are based on specific findings. The reports encourage policy action by presenting specific recommendations. These recommendations tend to be down to earth and specific, avoiding abstract platitudes. This translation from findings to recommendations not only relieves the reader of the burden of interpretation, but also helps ensure that the authors' intended interpretation will not be misunderstood. The concreteness of the recommendations coincides with the preferences Mintzberg observed among executives for activities that were specific and well defined.

Our 10 observations are little more than hypotheses at this time, but they begin to provide a framework for distinguishing policy-oriented reports from the methods-oriented reports that underlie them. To the extent they are incorporated in future policy-oriented reports, we feel the policy impact of evaluations will increase, even without the further improvements in methodology that we feel are also needed.
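Point 8 lends itself to a brief illustration. The following sketch is hypothetical and not drawn from any of the three reports; the region names and enrollment records are invented solely to show how raw evaluation data can be reduced to the simple counts and percentages that policy readers favor:

```python
# Hypothetical illustration of point 8: reduce raw records to a simple
# bivariate table of counts and percentages -- the only statistics a
# policy reader is assumed to need.
from collections import Counter

# Invented enrollment records: (region, enrolled?) pairs.
records = [
    ("South", True), ("South", False), ("South", True),
    ("North", True), ("North", True), ("North", False),
    ("West", True), ("West", False), ("West", False),
]

# Count enrolled and not-enrolled children per region.
counts = Counter((region, enrolled) for region, enrolled in records)
regions = sorted({region for region, _ in records})

# Print a single, straightforward bivariate table.
print(f"{'Region':<8}{'Enrolled':>10}{'Not enrolled':>14}{'% not enrolled':>16}")
for region in regions:
    yes = counts[(region, True)]
    no = counts[(region, False)]
    pct = 100.0 * no / (yes + no)
    print(f"{region:<8}{yes:>10}{no:>14}{pct:>15.1f}%")
```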

REFERENCES

Abt Associates, Inc. (1979) Children at the Center: Volume 1, Summary Findings and Their Implications. Cambridge, Mass.: Abt Associates, Inc.

Brickell, H. M., Aslanian, C. B., and Spak, L. J. (1974) Data for Decisions: An Analysis of Evaluation Data Needed by Decision Makers in Educational Programs. New York: Educational Research Council of America.

Children's Defense Fund (1974) Children Out of School in America. Washington, D.C.: Children's Defense Fund.

Coleman, J. S. (1972) Policy Research in the Social Sciences. Morristown, N.J.: General Learning Press.

Flesch, R. (1949) The Art of Readable Writing. New York: Collier Books.

Mintzberg, H. (1973) The Nature of Managerial Work. New York: Harper & Row.

U.S. Office of Education (1979) Progress Toward a Free Appropriate Public Education. DHEW Publication No. (OE) 79-05003. Bureau of Education for the Handicapped, Office of Education, U.S. Department of Health, Education, and Welfare.
