This proceedings summarizes the presentations and discussions at the 1-day public workshop on Principles and Practices for Federal Program Evaluation, held in Washington, DC, in October 2016. The workshop was organized as part of an effort to assist several agencies: in the U.S. Department of Health and Human Services, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) and the Administration for Children and Families (ACF); in the U.S. Department of Labor, the Office of the Chief Evaluation Officer (CEO); and in the U.S. Department of Education, the Institute of Education Sciences (IES). The purpose of the workshop was to consider ways to bolster the integrity and protect the objectivity of the evaluation function in federal agencies—a process that is essential for evidence-based policy making. The scope of the workshop covered evaluations of interventions, programs, and practices intended to affect human behavior that are carried out by the federal government or its contractual agents, result in public reports sponsored by the federal government, and are intended to provide information on impacts, cost, and implementation.
The federal government has taken several steps over the past two decades to bolster the credibility of scientific evidence. The Information
Quality Act of 2001¹ and the Office of Management and Budget’s (OMB) Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies² both advise agencies on preserving the quality of data from collection through dissemination and on developing the appropriate administrative mechanisms to carry out these standards. OMB’s Guidelines also provide guidance on remaining true to the intended users and uses of the data (utility) while presenting the data in a clear, transparent, and unbiased manner (objectivity) that is free from corruption or undue influence (integrity).
In its chapter entitled “Building the Capacity to Produce and Use Evidence,” the Analytical Perspectives volume of the Budget of the United States Government, Fiscal Year 2017 emphasized the importance of establishing centralized or chief evaluation offices in federal agencies and supported the development of guidelines for federal program evaluations, stating that “Many Federal evaluators believe that establishing a common set of government-wide principles and practices for evaluation offices could help to ensure that Federal program evaluations meet scientific standards, are designed to be useful, and are conducted and the results are disseminated without bias or undue influence.” The document went on to highlight five fundamental principles for developing evaluation standards: rigor, relevance, transparency, independence, and ethics.
ACF and CEO have both issued evaluation policy statements for their organizations that address those principles,³,⁴ and the Department of Education issued a 9-page departmental directive in 2014 that addressed scientific integrity in research activities departmentwide.⁵ These three policies were among the documents provided to participants for review in advance of the workshop.
In the past several years, the heads of many federal agencies have articulated a long-range goal to build an infrastructure to strengthen and guide federal evaluations—one that would promote continuity and support for certain high-level principles and practices across agencies and changing federal administrations. The attention paid to evaluation, and the level of the agency responsible for such activities, vary considerably across the federal government.

1 See Section 515 of the Treasury and General Government Appropriations Act, 2001 (Pub. L. No. 106-554, 44 U.S.C. § 3516 note).
4 See https://www.dol.gov/asp/evaluation/EvaluationPolicy.htm [May 2017].
5 See https://www.state.gov/s/d/rm/rls/evaluation/2015/236970.htm [May 2017].
As with federal evaluation, the U.S. federal statistical system itself is highly decentralized; although statistical activities are conducted in more than 100 agencies, only a few have producing statistics as part of their primary mission. In her introductory remarks, Constance Citro (Committee on National Statistics [CNSTAT]) discussed the history of CNSTAT and its joint efforts with OMB to facilitate coordination and collaboration across the statistical system. To provide advice to Congress and the Executive Branch on establishing a new statistical agency and to describe foundational principles for its activities, CNSTAT published Principles and Practices for a Federal Statistical Agency (National Research Council, 1992); since 2001, the volume has been updated every 4 years, at the beginning of a new administration or second term. Citro said the volume is widely recognized as having helped preserve the independence of federal statistical agencies by protecting statistical practices from undue partisan or political influence, and she noted how that goal aligned with the sponsors’ goal for this workshop.
To further the long-range goal of strengthening federal evaluations, heads of several federal evaluation offices arranged for CNSTAT to convene a 1-day planning meeting, which was held in September 2015, to assess the usefulness of developing a document for federal evaluation programs modeled after Principles and Practices for a Federal Statistical Agency: a high-level set of guidelines that would help evaluation offices maintain standards for their programs across administrations and changes in political-level personnel. At the planning meeting, the cognizant federal agencies decided that a public workshop, with full discussion of existing policies for federal program evaluations and consideration of issues in building on these policies, would be a useful next step. The workshop was to be designed so as not to prejudge the value of a volume along the lines of Principles and Practices for a Federal Statistical Agency.
The charge to CNSTAT was to organize a workshop to consider ways to strengthen existing federal evaluation policies and to institutionalize the principles: see Box 1-1 for the full statement of task. To address the charge, CNSTAT worked with ACF, ASPE, CEO, and IES to form the Steering Committee on Principles and Practices for Federal Program Evaluation. The goal of the workshop was to review and comment on existing federal policies, which generally reference such principles as rigor, relevance, transparency, independence, and ethics, as well as objectivity, clarity, reproducibility, and usefulness, and to discuss the potential for developing a broader policy document.
This proceedings describes the workshop presentations and the discussions that followed each topic: the workshop agenda is in Appendix A. Chapter 2 presents the history of federal program evaluation and its successes and challenges from a variety of perspectives. Chapter 3 explores several prominent evaluation shops and their approaches to protecting the integrity and objectivity of evaluation work, with presentations from ACF, CEO, IES, and the Millennium Challenge Corporation. Chapter 4 covers the session in which workshop participants were invited to share their insights on the policies and policy-making processes at their respective agencies and organizations. The discussion in Chapter 5 focuses on the components necessary to advance high-quality evaluations and protect the infrastructure that supports them. In Chapter 6, a former statistical agency head and a former key player at OMB share their experiences in institutionalizing the federal statistical system and its implications for developing a similar structure for evaluation. Chapter 7 addresses the need to develop guidance that supports objective evaluation while mitigating potential resistance to the development of this type of policy. Finally, Chapter 8 discusses key themes and possible next steps in bolstering the principles and practices for federal program evaluation.
This proceedings has been prepared by the workshop rapporteur as a factual summary of what occurred at the workshop. The steering committee’s role was limited to planning and convening the workshop. The views contained in the report are those of individual workshop participants and do not necessarily represent the views of all workshop participants, the steering committee, or the National Academies of Sciences, Engineering, and Medicine.