

4 Developing the Evaluation Design and Selecting Methods
Pages 29-42

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 29...
... 4 Developing the Evaluation Design and Selecting Methods

Important Points Made by the Speakers

• Evaluations of complex initiatives require trade-offs in developing the evaluation design and choosing the methods.
• Evaluations of complex initiatives are well served by the use of a logic model, theory of change, results chain, impact pathway, or other framework to describe how the program is intended to create change and to identify potential unintended results.
From page 30...
... Thus, the task for the IOM was to design and then conduct an impact evaluation of a complex dynamic initiative with a wide range of supported activities and a global reach. The resulting evaluation was conducted over 4 years with an extensive planning phase followed by an intensive implementation phase.
From page 31...
... She explained that the basic outcomes for the evaluation were in the areas of strengthening health systems; specific service delivery areas, with a focus on the integration of services and on the coverage, quality, cost, efficiency, and accessibility of services; and any resulting changes in the knowledge, attitudes, behaviors, and norms of beneficiaries as well as providers. Seeking to respond to the congressional mandate, the IOM committee focused on key impact areas that would allow it to comment on individual and population health impact, including psychosocial well-being, HIV incidence, HIV prevalence, morbidity, and mortality.
From page 32...
... In addition, she advised that dissemination activities for these hypothetical evaluations focus on specific interpretations and applications of messages that could be tailored to specific countries, providing an opportunity for dialog without compromising the evaluation design in a way that would sacrifice candor and credibility.

GLOBAL FUND EVALUATION

Why is evaluating impact important to the Global Fund?
From page 33...
... To enable the evaluation of impact, the Global Fund has established a $10 million fund for investing in data infrastructures in target countries to enable rigorous analysis; disaggregation of data by time, person, and place; and inclusion of comparison groups where feasible. Low-Beer said that the Global Fund's technical committee is seeking to establish an independent yet practical approach to evaluation that can be integrated into the way the Global Fund makes grants and develops policies.
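As a purely illustrative sketch of what such disaggregation might look like in practice, the snippet below groups a small invented set of case records by time, person, and place while keeping a comparison-group flag separate; the column names and figures are hypothetical and are not drawn from Global Fund data systems.

```python
# Hypothetical illustration only: column names and counts are invented.
import pandas as pd

records = pd.DataFrame({
    "year":             [2010, 2010, 2011, 2011, 2011],
    "district":         ["A", "B", "A", "B", "A"],
    "sex":              ["F", "M", "F", "M", "M"],
    "comparison_group": [False, True, False, True, False],
    "cases":            [120, 95, 80, 90, 60],
})

# Disaggregate by time (year), place (district), and person (sex), keeping
# funded and comparison areas separate so their trends can be contrasted.
summary = (records
           .groupby(["year", "district", "sex", "comparison_group"])["cases"]
           .sum())
print(summary)
```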
From page 34...
... In Cambodia, the Global Fund put its HIV, tuberculosis, and malaria reviews together to identify common components of value. These reviews found that malaria deaths dropped by 80 percent and that this drop was linked to specific investments in the Global Fund's portfolio, such as the $5 million to $10 million spent on community workers and the scale-up of treatments and bed net use; in addition, there was a 45 percent decline in tuberculosis prevalence related to Global Fund grants.
From page 35...
... As a result, many customers use less effective antimalarials or use artemisinin as a single agent, which could exacerbate the development of artemisinin resistance. To address these problems, the Global Fund created AMFm with the twin goals of contributing to malaria mortality reduction and delaying the development of artemisinin resistance by increasing the availability, affordability, market share, and use of quality-assured ACTs.
From page 36...
... "Whoever had antimalarials, we visited them." To measure ACT availability, price, and market share, the survey used a cluster sampling approach stratified by urban and rural areas; it also used a sample size calculation based on detecting a 20 percentage point change in quality-assured ACT availability. For data on use of ACTs, the evaluators required household survey information, but as Goodman explains, "It was decided not to fund specific household surveys for this study … largely due to cost considerations." Use of ACTs was certainly considered an important outcome, but several ongoing household surveys collect data on fever treatment, and the evaluation team hoped to use the data from these surveys.
From page 37...
... actual data collection was done by different agencies, not by the evaluation team, the evaluation team was involved in quality assurance throughout the data collection process. While there were no controls for this evaluation, the team was able to assess plausibility using the carefully documented information on process and context, and it was able to conduct its evaluation independently.
From page 38...
... Other shortcomings that he noted include poor stakeholder engagement, which is often associated with weak construct validity; the continued application of Humean designs that look for the single cause of a single effect even after recognizing that there are multiple causes with multiple effects; weaknesses in the bases for causal claims; and poor integration of multiple methods. To illustrate the challenges of finding good practices for evaluating complex programs, Stern discussed evaluations of the Consultative Group
From page 39...
... Doing so requires identifying the critical links in program planning, implementation, and delivery and identifying critical conditions, assumptions, and supporting factors. It is also necessary to identify rivals to the "official" program theory rather than simply developing an evaluation that has what Stern called a "confirmatory bias," the result of a design that evaluates a program from the perspective of how it is supposed to work rather than how it is actually working.
From page 40...
... The presence of level-specific effects in a multilevel program highlights the importance of nested designs that may require different theories and methods at each level, which creates the challenge of achieving vertical integration between levels. Uncertain and extended change trajectories, in which markets change rapidly but landscapes change over decades, require the use of iterative rather than fixed designs accompanied by extended longitudinal modeling.
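One hedged illustration of how a nested, longitudinal analysis might be set up (not a method attributed to Stern or to the programs discussed here) is a mixed-effects model in which repeated outcome measurements are nested within districts; the synthetic data, variable names, and effect sizes below are invented for the example.

```python
# Hypothetical sketch: repeated yearly outcomes nested within districts,
# with a random intercept per district capturing level-specific variation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for d in [f"d{i}" for i in range(20)]:
    district_effect = rng.normal(0, 0.5)      # level-2 (district) effect
    for year in range(5):                     # level-1 (within-district) trend
        outcome = 10 + 0.8 * year + district_effect + rng.normal(0, 1)
        rows.append({"district": d, "year": year, "outcome": outcome})
df = pd.DataFrame(rows)

# Fixed slope on year models the longitudinal trajectory; the grouping
# structure lets each district carry its own baseline level.
model = smf.mixedlm("outcome ~ year", data=df, groups=df["district"])
print(model.fit().summary())
```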
From page 41...
... We tried to look at all of those things mainly through qualitative and some quantitative data toward the endline." The Global Fund, said Low-Beer, takes an open approach to causation that considers alternative hypotheses involving context as opposed to looking just at whether an individual intervention produces an observed effect. In addition, he and his colleagues often start with impacts and outcomes and then work back along the causal chain to try to identify other hypotheses of change that could be dependent on context.
From page 42...
... Bridget Kelly, one of the two IOM study co-directors for the evaluation, added that an operational planning phase included two pilot country visits, which, in addition to being data collection trips, were also an opportunity to elicit that kind of stakeholder understanding of how things operate in practice, to learn what kinds of data requests would be feasible and timely, and to pilot tools for primary data collection. Goodman said that the AMFm team held a meeting of the pilot countries after producing a first draft of its report to debate the results, and during this meeting at least some of the countries were, as she put it, "not feeling happy with the way the evaluation was framed for their country." She added that the international malaria community was brought in at the design stage as part of the advisory team that supported the development of the evaluation design.

