Russ Whitehurst (chair, steering committee) drew the participants’ attention back to the scope of the workshop: evaluation of federal programs intended to affect human behavior. He added that U.S. taxpayers decide to fund those programs with the goal of improving opportunities and reducing identified problems, and that failure to use their money in a way that contributes to those goals is a disservice to them. He referenced Jon Baron’s (Arnold Foundation) earlier observation that evaluations have revealed low success rates for certain programs, but he said that incremental progress is still progress.
Whitehurst said that while the evaluation principles that are currently in place are very sound, they need legislation to help give them permanence and stability. He sees the legislation taking one of two forms:
- drafting legislation on an agency-by-agency basis that supports the creation of independent research and evaluation agencies and affords them protections and statutory guidelines (such as that for the Institute of Education Sciences), or
- aligning with the Paperwork Reduction Act, creating separate legislation and giving the U.S. Office of Management and Budget (OMB) some general authority over this function, similar to that which is in place for the statistical agencies.
Whitehurst acknowledged that funding had been addressed several times throughout the workshop, telling the group that, across the board, budgets for evaluation are smaller than the money allotted to other functions in the same programs. He thinks that budgeting for evaluation would need to be included in any legislation.
Whitehurst once again mentioned OMB as a key player in the practice of implementing evaluation principles. He added that the leading agencies have a role to play as well, but he suggested that the inclusion of Congress in the specifics of implementation could ultimately do more harm than good. He reiterated the importance of peer review as a system that can hold the producers of the work responsible for its quality, and he reminded participants of OMB’s prior practice of putting a process in place to rate the quality of evaluation efforts as another accountability measure. Baron added that the proper use of peer review and techniques like specifying confirmatory versus exploratory hypotheses could also be leveraged to influence similar processes in scholarly journals, whose peer review protocol is sometimes not as rigorous.
Judith Gueron (member, steering committee) reminded Whitehurst and the participants about the earlier discussion on the tension that arises between focusing on rigor and making evaluations useful. Whitehurst acknowledged that while the issue is a real one, it is subjective, and a high program failure rate can adversely affect stakeholders’ opinion of success. He reinforced Baron’s point about the need to evaluate program components as opposed to entire programs—especially for larger, more established operations—in an effort to mitigate that tension.
Miron Straf (Virginia Polytechnic Institute and State University) countered Whitehurst’s and Baron’s point, saying that the message he would like to see projected to the evaluation community is an encouragement to move away from the myopic approach of focusing on the effect size of a single intervention and instead to look at social programs as parts of a complex system. He suggested looking at the major drivers and also considering the use of passive or “big” data as reinforcement. He also compared social program analysis to advances in medical research and agreed with Whitehurst that it might be wise to consider how to integrate an evaluation style that focuses on continuous improvement.
Naomi Goldstein (Administration for Children and Families) brought together the discussants’ summary points by saying that while peer review can be valuable, she believes it is a practice, rather than a principle, and falls under the larger umbrella of quality control—a very important principle to be considered. Whitehurst thanked participants for sharing their thoughts and for supporting the committee’s view that the field of federal program evaluation is an important enterprise that will need support to continue to grow, improve, and be recognized for the important contribution it makes to the U.S. population and the government.