
6

Making Them Stick: Institutionalizing the Principles

William Sabol (member, steering committee) discussed his experience leading a federal statistical agency (the Bureau of Justice Statistics) and helping to develop the second edition of Principles and Practices for a Federal Statistical Agency (National Research Council, 2001). He asserted that those principles—relevance, credibility, trust, and a strong position of independence—are very similar to those being discussed for federal program evaluation. Sabol gave examples of how that volume addressed independence, including:

  • separation of the statistical agency from the parts of the department that are responsible for policy making and for law enforcement activities;
  • control over professional actions, especially the selection and appointment of qualified and professional staff;
  • authority to release information without prior clearance, along with adherence to a predetermined schedule of release; and
  • the ability to control information technology systems, tied largely to protection of data.

Sabol stressed that institutionalizing principles is not a one-and-done process: he has seen situations, both in and outside his former agency, that show how the principles, and agency heads’ capacity to uphold them and maintain independent, objective data, can be challenged in many ways. As such, he said, the principles need to be continuously negotiated and renegotiated to address both new and ongoing issues. He disagreed with the point made by Jean Grossman (Princeton University and MDRC) that the Paperwork Reduction Act had been a hindrance, primarily because the act gives the U.S. Office of Management and Budget (OMB) the authority to coordinate and develop the principles and policies for the 13 principal federal statistical agencies. In addition, he said, OMB’s creation of the Interagency Council on Statistical Policy and the 2002 Confidential Information Protection and Statistical Efficiency Act (CIPSEA) were very important developments in terms of refining policy and promoting governmentwide data quality standards.

Bethanne Barnes (Washington State Institute for Public Policy), speaking about her former role as head of the OMB evidence team, noted that the statistical system is one of many government functions that have a formalized structure for information sharing, policy feedback, and best practices. This structure has key components, including a council to facilitate collaboration with OMB, a designated office within OMB to set broad policy guidance, and staff to support the council’s work. She noted that in 2014 OMB’s Office of Information and Regulatory Affairs issued Statistical Policy Directive No. 1 (also referred to as the trust directive),1 which essentially codifies the information in Principles and Practices for a Federal Statistical Agency. OMB has recently begun providing consultation to several evaluation offices on their evaluation practices, based on its experience with federal statistical infrastructure and with providing guidance on evidence-based policy. Barnes attributed the success and widespread acceptance of the statistical principles to strong interagency collaboration and emphasized the importance of sharing ideas across agencies.

Barnes acknowledged that evaluation functions do not have a similar type of overarching structure, in part because evaluation has developed more slowly, and because the nature of the structures in individual agencies has been so varied. She mentioned a report from the U.S. Government Accountability Office (2013) that showed that agencies with centralized evaluation offices had broader evaluation coverage and greater use of evaluation data. The report also noted, however, that only half of the agencies reviewed had stable sources of evaluation funding.

Barnes noted that OMB has established an evidence team within the Economic Policy Division, which focuses on multiple aspects of evidence-based policy. OMB seeks to eventually become a home for federal evaluation policy, she said. It has also informally created an Interagency Council on Evaluation Policy (cochaired by workshop participant Naomi Goldstein, of the Administration for Children and Families), which exchanges information, collaborates on areas of common interest, and provides coordinated routine feedback to OMB on issues that affect evaluation functions. She said this council could be the basis for a more formalized structure.

___________________

1Fundamental Responsibilities of Federal Statistical Agencies and Recognized Statistical Units; available: https://www.federalregister.gov/documents/2014/12/02/2014-28326/statistical-policy-directive-no-1-fundamental-responsibilities-of-federal-statistical-agencies-and [May 2017].

Barnes said a core part of OMB’s work is to help agencies with developing authorizing legislation and with funding sources and levels. In 2016, OMB updated its Circular A-11 guidance document2 to improve the definition of evaluation, to emphasize the need for a portfolio of evidence, and to introduce the concept of credible use of evidence, including intended use of evidence. The document also includes instructions for agencies to use evaluation results and establish learning agendas in their strategic planning processes, and to continue to use those tools throughout their performance management processes, which Barnes noted is a separate process from the development of credible evidence. She said, however, that none of those documents directly references the principles and practices being discussed at the workshop in a comprehensive framework. Outside of OMB, Barnes said that the “Holdren memo” (Holdren, 2010) is another document that provides guidance on principles and procedures integral to protecting scientific integrity and strengthening the credibility of government research. She asserted that the central theme of the Holdren memo is that the public must be able to trust the scientific process, which it reinforces by providing recommendations for facilitating the professional development of government scientists through such activities as publishing in peer-reviewed and other scholarly journals and participating in professional meetings and societies.

Sabol asked the workshop participants what external entities could do to help institutionalize the principles and what evaluation agencies themselves could do. Russ Whitehurst (chair, steering committee) commented that Congress plays a critical role and that congressional action is most easily obtained by providing OMB with the authority to oversee the process. Clinton Brass (Congressional Research Service) said that evaluation activities seem to be Balkanized both within and among agencies—evaluation versus performance management, applied research versus methods, etc.—which may be a challenge to institutionalizing principles for evaluation and should be taken into consideration.

Sabol asked Demetra Nightingale (Urban Institute) how she managed this issue in the Department of Labor (DOL). Nightingale said that the agency maintains connections and open lines of communication among staff in statistical analysis and products, policy analysis, performance management, and data analytics, noting that evaluation touches all of these areas. She reiterated Whitehurst’s point about OMB’s role, noting that it has encouraged conformity among offices by requiring evidence-based justifications for budget increases and clarifying that the term “statistical purposes” includes evaluation. DOL also includes a chapter on evidence in its strategic plan. Barnes said that OMB is structured similarly to DOL and added that OMB’s Circular No. A-11 gives agencies guidance on this type of collaboration, although she noted that the extent to which it is executed varies widely across the government.

___________________

2Preparation, Submission, and Execution of the Budget; available: https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/a11_current_year/a11_2016.pdf [May 2017].

Sabol next asked the workshop participants how agencies identify existing guidance on applying evaluation principles, both to take advantage of the opportunities that currently exist and to provide insight for future development. Howard Rolston (member, steering committee) noted that although it can be difficult because of the Balkanization of agencies and inconsistent support for evaluation, continued vigilance by OMB and broad congressional support can help those efforts. Thomas Feucht (National Institute of Justice) observed that, given the goal of institutionalizing evaluation principles and the parallels drawn with the statistical framework and the value of Principles and Practices for a Federal Statistical Agency, acquiring something similar for federal evaluation would likely require its own authorizing statute.

Lauren Supplee (Child Trends) asked Sabol and Barnes if they could see any downside to implementing a more structured system. Sabol said that while there is potential for those in leadership to exercise more or less latitude, he believes that CIPSEA in general provides a quite positive example of such implementation, and Barnes agreed. Daryl Kade (Substance Abuse and Mental Health Services Administration) asked if the upcoming transition to a new administration presented an opportunity to pursue institutionalizing evaluation principles more formally. Sabol referenced the Committee on National Statistics’ 4-year revision cycle for Principles and Practices for a Federal Statistical Agency, seeing a potential benefit in doing the same for evaluation to provide consistency in times of staff mobility. Christopher Walsh (Department of Housing and Urban Development) suggested that the 24 agencies subject to the Chief Financial Officers Act could use the requirement to include program evaluation in their 4-year strategic plans as an opportunity to institutionalize their principles.

Sandy Davis (Bipartisan Policy Center) said he believes that evaluation will not gain traction as an ongoing part of the policy-making process unless there is congressional support for it. There appears to be considerable congressional interest in improving evaluation on both sides of the aisle, he said, noting the support of Speaker Paul Ryan and Senator Patty Murray for the Commission on Evidence-Based Policymaking. He added that, just as with the congressional budget process and its legislation, developing a structure for evaluation may take time and will undoubtedly require changes to authorizing legislation.

Regarding ethics, Sabol asked whether the workshop participants saw any constraints on staff who do evaluation and scientific work, as well as about potential conflict-of-interest issues that may arise because of partnerships between government researchers and external entities. Feucht said that because of the nature of grants, there is a wide range of relationships between programs and external entities, which can occasionally introduce confusion. Mark Shroder (Department of Housing and Urban Development) noted that some agencies do not permit professional staff to publish their research findings without approval. He said he opposes this practice and believes professional staff should be free to publish so long as they clarify that their opinions may not reflect those of their agencies.
