
11

Using Evaluation Findings and Communicating Key Messages

Important Points Made by the Speakers

  • Succinct reports, transparency of processes and findings, and wide dissemination can increase the use of evaluation results.
  • Developing a communications plan when designing an evaluation can increase the use of findings.
  • Putting findings and recommendations into context for local governments and other local stakeholders can strengthen programs on the ground.
  • Evaluation results can be especially useful if synthesized and disseminated through structured forums at the country level.

In this session, the workshop addressed the complexity of the diverse uses of and audiences for large-scale evaluations and the importance of matching the message, the messenger, and the audience. Panelists also spoke about the challenges associated with tracking the use of large-scale evaluation findings.

“Our assumption,” said moderator Sir George Alleyne, chancellor of the University of the West Indies, in introducing the panel, “is that evaluation findings are useful, but being useful is not the same thing as being used.” It is incumbent on evaluators, then, to promote and propose mechanisms to ensure that evaluation findings are used. “I think it is unjust and unfair not to make maximum use of evaluation findings,” he stated.

THE U.S. PRESIDENT’S MALARIA INITIATIVE

Bernard Nahlen, deputy coordinator of the President’s Malaria Initiative (PMI), began the panel presentation by recounting the genesis of the initiative, which was launched in mid-2005 with an initial congressional authorization of $1.2 billion over 5 years. A response to criticism of the U.S. Malaria Program administered by USAID, the PMI was intended to target 15 high-burden countries in Africa. The main criticism of that program was that, while it conducted good research and generated many documents, there was no evidence that it was doing anything to turn the tide against the rising burden of malaria in those countries. Another criticism was that the U.S. part of the program relied too heavily on a social marketing model and lacked any effort to develop countries’ capacity to do indoor residual spraying or to distribute mosquito nets.

As authorized by Congress, PMI’s funds would come through USAID, with CDC serving as the U.S. government’s main implementing agency. The PMI, in contrast to PEPFAR, was not “afflicted with many earmarks and targets,” Nahlen said. The PMI was designed to have a small personnel footprint, with two resident advisors, one hired by USAID and the other by CDC, operating out of each in-country USAID health office. Each office also employed one or two local staff to help manage the program. By mid-2006, the PMI was operating in three countries; six more came online in 2007; and by 2008 the effort was operational in all 15 countries in Africa.

The program started with a clear business model, strong leadership in the person of retired Admiral Tim Ziemer, and a country-level Malaria Operational Planning Process. The program was established with a provision for yearly reviews, in which program teams visited each country to work with the national programs to understand the status of each program at that time. The 15 countries were chosen in part because they had additional globally funded malaria grants from the Global Fund, the World Bank, and other funders working in the malaria space. To counter the criticism that USAID country offices had too much autonomy in administering program funds, the PMI had a clear mandate with targets and defined interventions that would be delivered. These mandates, said Nahlen, created some initial pushback from USAID staff and from CDC.

The PMI functions as a learning environment, Nahlen noted, in which program staff appear to be willing to try different approaches and learn from experience. In that context, the PMI decided to launch an evaluation, even though it was not mandated by Congress, and ask for actionable recommendations. In the end, the evaluators made 10 recommendations, 5 on the technical side and 5 on the policy and programmatic side. To disseminate the report and its recommendations, Nahlen used an interagency advisory group comprising representatives of CDC, the National Institutes of Health (NIH), and the Office of Global Health in the Department of Health and Human Services; the Department of Defense; the Department of State; the Office of Management and Budget; and the Peace Corps. His office also posted the report on its website, sent an e-card to 27,500 USAID users, and distributed the report at the country level through its resident in-country advisors.

The report generated many responses, said Nahlen. Congress responded to the report’s recommendations with an increase in funding that has enabled the PMI to expand into four additional African countries and initiate a program in the Mekong Delta region. The report also recommended that the PMI increase its personnel footprint, particularly as it moved into countries such as Nigeria and the Democratic Republic of the Congo. By adding highly qualified local staff, the PMI has not only built capacity in these countries but also created stability in the local programs.

Another recommendation called for the PMI to increase the flow of money through local government programs, but there was concern about whether that money was getting commodities to the people in need. Nahlen’s group has assessed the situation and found that in 14 of the 19 African countries, more money is now flowing through local ministries of health and the national control programs. In three countries (Madagascar, Mali, and Zimbabwe), U.S. government restrictions prohibit direct funding. As a final note, Nahlen said that in response to another recommendation, the PMI has hired an operational research coordinator to establish a research framework and set priorities. Talks are ongoing with WHO and other funders to ensure that the PMI’s operational research agenda complements what others are doing.

ROADS TO A HEALTHY FUTURE

The focus of the Roads to a Healthy Future program, which is funded by PEPFAR through USAID, is to address HIV and health issues along transport corridors and to examine the structural drivers of the spread of HIV, explained Dorothy Muroki, project director of the program. Many aspects of the PEPFAR evaluation report, she said, were relevant to strengthening the Roads to a Healthy Future program; in particular, she discussed two key examples of recommendations resulting in action.

In the summary findings for HIV prevention, the evaluation recognized that interventions targeted at prevention of sexual transmission, including biomedical, behavioral, and structural interventions, are all critical components of a balanced and comprehensive prevention portfolio. The report also recognized that, within PEPFAR, less program monitoring data and rigorous research evidence were available on these interventions, particularly behavioral and structural interventions, compared with biomedical interventions such as prevention of mother-to-child transmission. “As an implementer, this for me was a stark reminder that gaps in the evidence base around behavioral and structural interventions persisted within my own program, but also across the public health fraternity,” Muroki explained. Moreover, she and her team realized that this gap in the evidence base was undermining support for these two critical elements of balanced programming because there was not enough evidence proving that the program’s behavioral and structural interventions were working.

In response, one of the steps she and her colleagues took was to examine the association between improved economic status of households and use of health services. “What we are going to do with the data is strengthen our own programming, and we’ll package it for use by host governments and other stakeholders,” Muroki said. She added that these data have generated interest in Rwanda, Tanzania, and Zambia, the three countries where her team is finalizing the study.

The second example of the usefulness of the PEPFAR evaluation that she discussed involved reexamining gender-based programming in the Roads to a Healthy Future project. Since 2005 the program had been promoting gender equity in decision making at more than 65 sites along the transport corridors, but the evaluation led her team to examine whether its local partners had the skills and tools to sustain this work or whether gender expertise was instead embedded solely within its own team members. “We realized that it was important for us to look at strengthening our local implementing partners so that they would be able to address gender issues, which was very critical for program success over the long term,” Muroki said. As a result, she has tapped into her organization’s technical expertise on gender issues to support local community partners’ efforts to build expertise in this area, which she said has had a positive impact on gender programming.

Addressing the issue of communication, Muroki said, “my perspective is that the evaluation findings could be even more useful if synthesized and disseminated through structured forums at the country level. For me the synthesis issue is critical because the [evaluation] reports are voluminous, and truth be told, is an implementer going to read the volumes? Most likely not. So having ways and means of synthesizing the information and using structured forums to be able to disseminate the information is important.” She also suggested that findings and recommendations should be put into context for local governments so that evaluation findings can be used to strengthen programs on the ground. In closing, she wondered whether opportunities exist for more collaboration among evaluations run by different funders working on similar programs within individual countries, particularly with regard to disseminating findings. “This could create synergies that would probably push the use of the findings,” said Muroki. “With information overload being an ever-present challenge in our world today, I think the public health evaluation community must engage audiences in ways that cut through their own clutter, whether we’re thinking of individuals or of organizations and institutions. Identifying and developing strategies to make this information easier to access, digest, and act upon, for me, is a critical step for consideration going forward.”

THE SOUTH AFRICAN PERSPECTIVE ON INFLUENCING POLICY AND PERFORMANCE

In November 2011, South Africa approved a National Evaluation Policy that created the Department of Performance Monitoring and Evaluation in the Office of the President. The goal of this policy was to determine whether services being delivered in South Africa were benefitting people on the ground. To date, said Ian Goldman, head of evaluation and research in the department, 38 evaluations have looked at several billion dollars’ worth of government programs. Each evaluation is focused on utilization (whether the services offered by a program are being used on the ground) and therefore emphasizes learning. One challenge that Goldman and his colleagues face is getting individual departments to take ownership of these evaluations, and his team is addressing this challenge by having departments determine what they want to evaluate.

Once an evaluation is completed, Goldman’s team creates an improvement plan, which is monitored for 2 years to determine whether the evaluation triggers action by the department. Complementing this effort is a training program on evidence-based policy making for permanent secretaries, senior management teams, and members of the South African parliament. To incentivize departments to suggest evaluation topics, beyond the desire to improve performance, Goldman’s office pays for 50 percent of an evaluation. “We focus on the process as much as the product because a good product with poor process will not be used,” he said. Even so, he acknowledged, evaluation capacity in South Africa is limited, so his office is strategic about the evaluations it conducts and is limiting its initial efforts to 15 priority evaluations annually.

To promote independence, the evaluations are undertaken by service providers, with input from a steering committee representing key stakeholders. To boost the quality of the evaluations, Goldman’s office has developed evaluation standards, competency training, and a peer review committee. His team also conducts a quality assessment, based on these standards, within a month after an evaluation is completed. In addition, his department is conducting an “evaluation of evaluations” to see whether the work it has done is having an impact.

Communication is emphasized, Goldman said, and his office has developed a guideline that all reports should have a 1-page policy summary, a 5-page executive summary, and a report body that does not exceed 25 pages. Transparency is also key, so every report is posted to the department website along with a quality assessment, improvement plan, and progress reports, enabling the public to track what is happening. One of South Africa’s goals is for all countries to use the same standardized terminology when conducting evaluations so that countries can more easily learn from one another’s experiences.

COMMUNICATING RESULTS FROM THE PEPFAR EVALUATION

When the Institute of Medicine (IOM) does a consensus study, explained Kimberly Scott, who was one of the study co-directors for the IOM’s PEPFAR evaluation, it requires that a communications and dissemination plan be developed before the study is approved. In the case of the IOM evaluation of PEPFAR, dissemination and communication activities were included in the evaluation contract. Scott noted that the audience for the PEPFAR evaluation was quite diverse, including Congress, the Department of State, the Office of the Global AIDS Coordinator, the PEPFAR implementation agencies, others involved in national and global responses to HIV, and the general public.

Dissemination focused on three types of activities. One was issuing a variety of reports and report products such as the ones discussed at the workshop. In addition to the main report, which is available in hard copy and free in electronic format, the IOM prepared a 20-page executive summary that outlined the 13 recommendations from the evaluation, as well as a 4-page policy brief. The IOM staff and committee members also engaged in a large number of in-person briefings, including a pre-release briefing for the Office of the Global AIDS Coordinator and its director, Ambassador Eric Goosby. Because the study was mandated by Congress, the IOM staff held briefings for congressional staff and for the committees of jurisdiction, as well as for several other committees that were interested and requested separate briefings.

Scott said that one of the most important types of briefing was to the PEPFAR implementing agencies and the technical staff of those agencies. Notably, the staff of those agencies established the agenda for the briefings, which gave them the opportunity to pick the technical areas that were most relevant to them and to ask about details with some specificity. This briefing also gave staff at the implementing agencies opportunities to talk with the IOM staff and committee members about the technical details that went into the recommendations so that they might actually develop implementation plans for those recommendations. “Part of what we learned after we issued the report was that Ambassador Goosby required a written response from all of the implementation agencies to the report’s findings and recommendations,” said Scott. “And while that wasn’t going to be for public distribution, that was one tool that he was using to be able to break down and digest what was relevant in the report, to talk about it with the implementation agencies and their technical staff, and to discuss with them how they were going to use the report.”

The IOM also held two public briefings, one when it released the report and another planned strategically for several months after the report’s release. “We knew that people would need time to be able to digest the information in the report and to be able to have some meaningful dialog,” Scott explained. Both public briefings were webcast globally, and an effort was made to encourage country-level participation, including in-person participation by representatives from Ghana, Kenya, Mali, and Tanzania. The IOM committee members and staff have also been invited to participate in other events, including one focused on funders in the HIV/AIDS realm that allowed the IOM staff to educate donors about some of the significant issues in the report and another for the UNAIDS Monitoring and Evaluation Reference Group meeting. The evaluation was also presented and discussed at scientific meetings, including the Consortium of Universities for Global Health Annual Conference, where committee members and staff gave presentations on the methods and findings, and the Annual Qualitative Research for Health Conference.

In addition, the committee chair wrote an editorial for the Lancet, and there were blogs about the evaluation from, for example, the Center for Strategic and International Studies and the Center for Global Development. For the project webpage, the IOM created a brief quiz focused on disease education and a global epidemiological perspective, as well as a more in-depth interactive experience. This interactive experience has two functions, Scott explained. First, it describes what PEPFAR does in terms of the types of services it provides; second, it illustrates what the experience of trying to access prevention, treatment, or care services in a PEPFAR country might be like.

The IOM has a limited ability to track the use of its findings and recommendations, but some metrics are available, such as new legislation or changes in funding. The IOM also scans relevant websites and news outlets to track use of the report, and study staff have periodic opportunities to follow up with implementing agencies to find out how reports are being used and to get feedback. For example, said Scott, “We have gotten some very specific feedback that some technical areas will now be changing their indicators to do outcome measurement as opposed to outputs, and to start to do some age disaggregation of data.” Finally, the IOM has enough flexibility to link continued activities with work it has done in the past, as in the case of this workshop.

OTHER TOPICS RAISED IN DISCUSSION

The conversation during the discussion session focused on several broad issues in the communication and use of evaluation findings, including the diversity of audiences, the budget for dissemination, the independence of evaluators, the responsibility for dissemination, and the transparency of evidence.

Muroki noted that different participants in an evaluation have different needs, incentives, and interests, and as a result they can have different audiences in mind when preparing an evaluation. Aspects of communication and dissemination need to be treated from the beginning with the same seriousness as the technical aspects of an evaluation, she said.

With regard to funding, Nahlen said that the evaluation budget was $500,000, which was “very good value for money” for a program that spends hundreds of millions of dollars annually. He also said that his office at the PMI uses a tracking sheet for the evaluation’s 10 primary recommendations, which it updates regularly. Goldman said that his department’s evaluations cost between $150,000 and $400,000 each. His organization requires that departments produce a management response when an evaluation is completed and publicly released; this response is used to create an improvement plan that is then tracked. However, dissemination is still “embryonic,” he said, with more work needed on targeted marketing.

With regard to the budget specifically for dissemination, Scott noted that, in general, dissemination represents a relatively small part of overall IOM project budgets. Aspects of dissemination are part of the core activities of IOM studies, she said, but many extra activities depend on volunteer time from staff and committee members.

Nahlen emphasized the importance of producing actionable recommendations if an evaluation is to have impact. Recommendations that are self-evident or merely “nice to know” are difficult for an evaluation’s audiences to act on. The question of the independence of program evaluators was also raised during the discussion. As Nahlen pointed out, evaluators can sometimes be so independent that their recommendations are not actionable because the evaluators are too separated from the initiative. At the same time, a balance is necessary to avoid conflicts of interest.

The audiences for an evaluation often share responsibility not only for implementing but also for tracking and disseminating the recommendations of an evaluation, several panel members noted. “Dissemination is not disseminating to them,” said Goldman. “They’re part of the process.” As Scott added, “It is often up to the sponsor to determine what will happen in terms of dissemination.”

Making evaluation findings public, along with the data on which they are based, can have the effect of increasing implementation, Goldman added. Such transparency can be difficult, but it can help make a program a partner in an evaluation of that program. Goldman also raised the point that many evaluations essentially amount to technical experts from northern countries telling program managers in southern countries what to do differently. The real issue is how evaluations can contribute to a change process through partnerships, he said.
