Evaluation Design for Complex Global Initiatives: Workshop Summary (2014)

Suggested Citation:"7 Applying Quantitative Methods to Evaluation on a Large Scale." Institute of Medicine. 2014. Evaluation Design for Complex Global Initiatives: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/18739.

7

Applying Quantitative Methods to Evaluation on a Large Scale

Important Points Made by the Speakers

  • Final health outcomes often are not measured in large evaluations, yet the intermediate progress that is measured does not always translate into health improvements.
  • New methods of studying population health are becoming available, such as gathering data from disease registries, demographic surveillance sites, or household surveys.
  • Though modeling can be complex, the effort can pay dividends throughout the design, implementation, and evaluation of a large-scale intervention.
  • Extended cost-effectiveness analysis can examine equity issues, such as the distributional consequences of an intervention and financial risk protection for households.

Quantitative methods are one foundation of evaluations of large-scale, complex, multi-national initiatives. Yet many difficult choices arise in selecting and implementing these methods, as the presenters at the concurrent session on quantitative methods pointed out.


OUTCOMES MATTER

What are the possible outcomes of a large-scale evaluation? Eran Bendavid, assistant professor of medicine at Stanford University, said that they fall into three categories: operational outcomes, output outcomes, and health proxy outcomes. The health arena, he said, is fortunate to have well-circumscribed health outcomes such as all-cause and disease-specific mortality, disease prevalence or incidence, and quality-of-life measures. Often, however, these final outcomes are not measured in large evaluations, raising the question of whether such outcomes are important to the evaluation of global health initiatives. Bendavid believes the answer to that question is yes because intermediate progress may not map to health improvements. There is a great deal to be said “for surprises and unexpected results,” he said. “Final outcomes are critical.”

As an example of this type of surprise, he cited some unexpected findings on the effects of lowering blood glucose levels in patients with type 2 diabetes. Medical dogma held that intensive therapy to lower blood glucose was unquestionably good, yet a study conducted to confirm that belief by the Action to Control Cardiovascular Risk in Diabetes (ACCORD) Study Group (2008) found that the use of intensive therapy to target glycated hemoglobin levels increased mortality compared with standard therapy and did not reduce cardiovascular events.

Bendavid said that measuring final outcomes should be a critical piece of an evaluation because it is necessary for comparative effectiveness and value determinations. As to why it is rare to see final outcomes reported in an evaluation, Bendavid said that, based on what he had heard from participants at the workshop, there are a number of reasons. One is that context matters and that it is hard to attribute final outcomes to these heterogeneous and complex programs. Others are that there is not enough time or money, and that existing findings and methods are considered adequate, so there is no perceived need to evaluate health outcomes.

Data for health outcomes can come from a primary data collection effort, such as those conducted by the Abdul Latif Jameel Poverty Action Lab and Innovations for Poverty Action, or from national programs, such as the Mexican Seguro Popular evaluation. Aggregated data from sources such as the World Bank or the World Malaria Reports can provide health outcomes data, as can existing microdata, such as demographic surveillance sites and household surveys. When available for a study country, the Demographic and Health Surveys (DHS), said Bendavid, are a great source of long-term, high-quality data, though the use of DHS data can be challenging because of timing (the surveys are administered on average every 4–5 years) and because the measurements cover mostly total child and maternal mortality rather than specific diseases of interest.


But Bendavid queried whether the DHS could be used to provide health outcomes data. He described a scenario where that might be possible: a hypothetical PEPFAR intervention that uses treatment as a means of preventing HIV transmission. Testing such an intervention in clinical trials that measured incidence and mortality reduction would be an enormously expensive operation, he explained, but it might be possible to use the DHS instead to follow the results of the intervention. “If implementation is staggered over 1–2 years and if, during this period, you can field three to four DHS waves prior to and during the staggered implementation, and if during this time you measure all-cause adult mortality, HIV-related adult mortality, regional incidence rates, and viral suppression rates, you would have a very strong design that would piggyback on the data collection effort,” said Bendavid. “Considering the cost of many of the trials that are ongoing just for looking at the potential effectiveness of treatment as prevention for HIV, this kind of an effort could be quite an attractive alternative option.”
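The staggered-rollout design Bendavid describes is, in essence, a difference-in-differences comparison across survey waves: the mortality trend in not-yet-implemented regions serves as the counterfactual for regions where the program has started. A minimal sketch follows; every mortality figure is invented for illustration, and a real analysis would work from household-level DHS records with survey weights and standard errors.

```python
# Difference-in-differences sketch for a staggered rollout evaluated with
# repeated survey waves. All figures below are illustrative only.

def did_estimate(pre_treated, post_treated, pre_control, post_control):
    """Classic 2x2 difference-in-differences estimator: the change in the
    treated group minus the change in the control group."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical all-cause adult mortality (deaths per 1,000) from survey
# waves in early-implementation regions vs. not-yet-implemented regions.
pre_treated, post_treated = 12.0, 9.5    # early-implementation regions
pre_control, post_control = 11.8, 11.3   # not-yet-implemented regions

effect = did_estimate(pre_treated, post_treated, pre_control, post_control)
print(f"Estimated program effect: {effect:+.1f} deaths per 1,000")
# The secular trend (-0.5 in the control regions) is netted out of the
# -2.5 change in treated regions, leaving -2.0 attributable to the
# program under the parallel-trends assumption.
```

The causal reading rests on the parallel-trends assumption, i.e., that treated and untreated regions would have followed the same mortality trend in the absence of the program.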

There are also new ways of studying population health appearing in the literature. One example is the registry-based randomized trial, in which a large-scale randomized trial builds on an existing registry of observational data to identify and enroll patients without duplicating the collection of existing data (Lauer and D’Agostino, 2013). While this particular proposal is aimed at resource-rich countries, Bendavid suggested that it would be possible to expand the DHS at some sites to allow testing the effects of implementing large and complex programs. Doing so would require some up-front investment, he said, but there could be substantial downstream rewards, given the much lower cost of collecting data in a health registry compared with the cost of recruiting participants for a clinical trial.

MATHEMATICAL MODELING AS A TOOL FOR PROGRAM EVALUATION

Charlotte Watts, head of the Social and Mathematical Epidemiology Group and founding director of the Gender, Violence, and Health Centre in the Department for Global Health and Development at the London School of Hygiene and Tropical Medicine, began her presentation with a brief discussion of how mathematical modeling can be used for an evaluation. First, she differentiated infectious disease modeling, which uses systems of equations to describe how an infectious disease might spread through a particular population, from statistical modeling, which is intended to draw inferences from data. These systems of mathematical equations can be used to describe the likelihood over time that different individuals might become infected with a disease. For example, HIV modeling can be used to explain how the disease develops over time and how that affects the levels of antiretroviral therapy that will be needed or the mortality rates in a population. She explained that mathematical modeling is especially useful with infectious diseases such as HIV, where it may not be possible to measure disease impact directly or where the available data measure trends in HIV prevalence or sexual behavior rather than the actual change in HIV incidence. Disease transmission modeling is also useful when the goal is to estimate the broader, dynamic benefits of an intervention on subsequent chains of transmission among people not directly reached by the intervention (for example, behavior change that might have averted infections) and for obtaining estimates of the cumulative, long-term benefits of infections averted for the purpose of cost-benefit and cost-effectiveness analyses.

Developing a useful disease transmission model for a specific setting is a multistep process. The first step in this compartmental deterministic modeling involves mapping out the different components that interact with one another within the context of the intervention. The components are then formulated mathematically so they can be coded to create the model. Watts noted that mathematical modelers are becoming more sophisticated about incorporating the uncertainty associated with key model inputs so that it is reflected in subsequent impact projections. Using sampling methods that draw different combinations of potential model inputs, modelers test the model against setting-specific epidemiological data (e.g., HIV prevalence) to identify which combinations actually fit the observed prevalence data. This allows them to compare projections of transmission with and without the intervention, yielding both the projected intervention impact and its associated uncertainty.
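The sample-fit-project loop Watts describes can be sketched in a few lines. Everything below is invented for illustration: the one-equation prevalence model is a deliberately minimal stand-in for a full compartmental HIV model, the parameter ranges and survey prevalences are made up, and a real analysis would use formal calibration methods.

```python
import random

def project(p0, beta, recovery, years):
    """Discrete-time, logistic-style prevalence projection: each year,
    new infections (beta * p * (1 - p)) are added and removals
    (recovery * p) subtracted. Returns the full trajectory."""
    p = p0
    traj = [p]
    for _ in range(years):
        p += beta * p * (1 - p) - recovery * p
        traj.append(p)
    return traj

# Hypothetical setting-specific prevalence from four survey rounds.
observed = [0.050, 0.074, 0.107, 0.154]

# Sample many candidate combinations of model inputs (transmission and
# removal rates) rather than committing to single "best" values.
random.seed(1)
candidates = [(random.uniform(0.1, 1.0), random.uniform(0.0, 0.3))
              for _ in range(20000)]

# Keep only the combinations whose projections fit the observed data.
accepted = [(b, r) for b, r in candidates
            if all(abs(m - o) < 0.015
                   for m, o in zip(project(observed[0], b, r, 3), observed))]

# Project 10 further years with and without a hypothetical intervention
# that halves transmission, retaining the spread across accepted fits as
# the uncertainty in the impact estimate.
impacts = sorted(
    project(observed[-1], b, r, 10)[-1]
    - project(observed[-1], b / 2, r, 10)[-1]
    for b, r in accepted)
print(f"{len(accepted)} parameter sets fit the surveillance data")
print(f"projected prevalence reduction after 10 years: "
      f"{impacts[0]:.3f} to {impacts[-1]:.3f}")
```

The range printed at the end is the point of the exercise: because many parameter combinations are consistent with the data, the projected impact is reported as an interval rather than a single number.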

Though the mathematics behind a model can be complex to set up, there is increased interest in applying this type of modeling as it can be useful throughout the design, implementation, and evaluation of a large-scale intervention. In the formative and early-stage planning phase, mathematical modeling can be used to predict what the impact might be when an intervention or technology is added to an existing health system to determine if the intervention is worth pursuing, how long it might take to show an effect, and whether the intervention should focus on specific populations. At this stage, explained Watts, mathematical modeling can be used to give project developers a sense of whether the size of the intervention they are planning matches the goals of the intervention in terms of the size of the desired effect. At the small-scale implementation phase of a program, mathematical modeling can take real data about an intervention’s initial effectiveness to provide an idea of other settings in which this intervention could work and to explore how possible refinements to the intervention might increase its impact when the intervention moves into the large-scale delivery phase.

As an example of how mathematical modeling was used to influence early-stage thinking about program delivery and planning, Watts discussed a project that was going to introduce new microbicides as a means of reducing HIV incidence. For this project, mathematical models were used to project the effect of different introduction and uptake rates of the microbicides on the reduction of HIV incidence. This modeling effort predicted that delays in the delivery of the intervention could result in lower coverage rates and significantly reduce the intervention’s potential impact on incidence. This is important information, Watts noted, that can help focus policy discussions on how to use mathematical modeling to develop targets and think through the scale of implementation needed to achieve the desired impact of an intervention.

In another example, Watts showed how mathematical modeling can be used to explore how the effects of an intervention vary when it is implemented in different epidemic settings. In this case, she and her colleagues modeled the impact of microbicide use on HIV transmission in Cotonou, Benin, where HIV prevalence is low and the epidemic is concentrated among vulnerable groups, compared with Hillbrow, South Africa, where HIV prevalence is much higher and the epidemic is more generalized in the population. The mathematical model projected that the same level of microbicide use would cause a much greater proportional reduction in HIV incidence in Cotonou than in Hillbrow. However, the cumulative number of infections averted would be much greater in Hillbrow than in Cotonou, due in part to the higher initial incidence in Hillbrow (Vickerman et al., 2006). Thus, mathematical modeling can provide interesting and useful information for understanding the potential impact of an intervention in different epidemiologic settings. Lastly, Watts noted that this type of modeling can also be used to create counterfactual scenarios that predict the course of an epidemic in the absence of a particular intervention. The counterfactual projections can then be compared with the projected outcome when the intervention is implemented.

In summary, Watts said that mathematical modeling is an extremely powerful tool for exploring what-if questions and for both advocacy and rigorous evaluation. It is admittedly a complex technique with many underlying assumptions, and she noted that the modeling field is only just starting to develop guidelines for detailing those assumptions in publications so it becomes less of a “black box” activity. Mathematical modeling “is dependent on good data and strong collaborations with programs,” she said, adding, “We could be using modeling far more than we are currently, both to inform the design and planning of programs, as well as for evaluation.” She noted, too, that the most effective approach to bringing mathematical modeling into program activities is to involve mathematicians at the outset in a multidisciplinary evaluation team, in part to identify the data that will be needed to inform the model and be collected as part of the evaluation strategy.


EXTENDED COST-EFFECTIVENESS ANALYSIS

In the final presentation of the concurrent session, Rachel Nugent, director of the Disease Control Priorities Network at the University of Washington’s Department of Global Health, discussed her team’s approach to using economic modeling, together with other analytical tools such as the epidemiological and mathematical models that Watts discussed, to answer some of the what-if questions relating to economic outcomes of health interventions in the third edition of the Disease Control Priorities in Developing Countries program. The objectives of the program, which is part of the larger Disease Control Priorities Network, are to inform allocation of resources across interventions and health delivery platforms, provide a comprehensive review of the efficacy and effectiveness of priority health interventions, and advance knowledge and the practice of analytical methods for economic evaluation of health interventions. The work that Nugent discussed emerged from the third objective.

“The starting premise for this work,” said Nugent, “is that health decision makers are making choices in a complex environment with limited information.” Economists have something to offer health decision makers, she added, but economists need to move beyond standard cost-effectiveness analysis. “There are many ‘well-known and well-justified’ criticisms of cost-effectiveness analysis,” she said, “but one of them is simply that it doesn’t provide sufficient information of the type that health ministries and other decision makers need. It’s often too narrow about a given intervention.” To address this shortcoming, she and her colleagues are trying to find a middle ground between cost-effectiveness analysis and cost-benefit analysis to develop what she called a dashboard of economic outcome measures that can be compared across a broad range of intervention choices. These economic outcome measures revolve around adding to the evidence base for equity and financial risk protection for the users of services.

Nugent noted that previous incarnations of the Disease Control Priorities in Developing Countries program were instrumental in advancing understanding of the economic aspects of health and of what economic information is useful for health decisions. The World Health Report 2000 (WHO, 2000) was also a seminal document, in part because it not only exposed how poorly the U.S. health system was performing on a comparative basis, but also because it asserted that health systems are supposed to provide more than just health. “Yes, health systems should provide improved health outcomes, but they should also provide economic outcomes,” said Nugent. Such economic outcomes include the prevention of medical impoverishment and fairness in financial contribution toward health. Along those lines, she and her colleagues hope that the measures they are developing can help inform the discussion about universal health care and how to design a basic insurance package that takes into account the needs of individual countries.

One aspect of Nugent’s work has been to move from cost-effectiveness analysis to extended cost-effectiveness analysis, which looks at equity issues such as the distributional consequences across wealth strata of populations and the financial risk protection benefits for households (Verguet et al., 2014). As an example of the use of extended cost-effectiveness analysis, Nugent discussed its application to an analysis of a human papillomavirus (HPV) vaccination policy in China. She observed that “You have to look at the vaccination and then cervical cancer screening and treatment all together, because if you just look at one of them you’re going to miss a lot of the important impacts. They have to go together to really be able to talk about what you get out of a policy of HPV vaccination.” The starting point for this analysis is the introduction of the technology and the policy of a government subsidy for HPV vaccination, together with a set of expected impacts: health impacts, measured by the number of cancer deaths averted; household expenditures, measured by the cancer treatment expenditures that are averted; and financial risk protection benefits, measured by the relative importance of treatment expenditures to the household budget. These effects were measured by income quintile, but Nugent explained that they could also have been measured by urban versus rural status or by sex, to see whether the policy favors one sex over the other. “There are different ways we could disaggregate the population if we have the data to measure different distributional aspects,” she said. The analysis showed that such a policy in China would favor poorer families, because the averted treatment expenditures represent a much higher percentage of income for lower-income households.
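The financial risk protection logic can be shown with a toy calculation. Every number below is invented for illustration (the published analyses use detailed country data): the point is simply that the same averted treatment cost represents a far larger share of annual income for poorer quintiles, which is what makes the distributional view informative.

```python
# Extended cost-effectiveness sketch: deaths averted, averted household
# spending, and financial risk protection, disaggregated by income
# quintile. All figures are hypothetical.

TREATMENT_COST = 1500          # assumed out-of-pocket cost per cancer case
CASES_AVERTED_PER_100K = {     # assumed vaccine impact by quintile
    "Q1 (poorest)": 40, "Q2": 38, "Q3": 35, "Q4": 30, "Q5 (richest)": 25}
ANNUAL_INCOME = {"Q1 (poorest)": 800, "Q2": 1500, "Q3": 2500,
                 "Q4": 4000, "Q5 (richest)": 9000}

for quintile, cases in CASES_AVERTED_PER_100K.items():
    averted_spending = cases * TREATMENT_COST      # per 100,000 people
    # Financial risk protection: the averted expenditure for an affected
    # household, expressed as a share of that household's annual income.
    frp = TREATMENT_COST / ANNUAL_INCOME[quintile]
    print(f"{quintile}: averted spending {averted_spending:,}; "
          f"averted cost = {frp:.0%} of annual income")
```

The same disaggregation could run over urban versus rural status or sex instead of income quintiles, given the data.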

OTHER TOPICS RAISED IN DISCUSSION

During the discussion period, Bendavid was asked about the use of demographic surveillance sites (DSSs) for interventions research. He explained that these are relatively small or constrained communities that can provide very high-quality data. But it can be hard to draw general conclusions from such specific settings, even though in some cases data from these sites have been used to great effect. Increasing the number of sites in a country could increase the value of this information, he added.

Watts expanded on the use of counterfactual parameters, which are essential for modeling the impact of an intervention. The process of choosing what data go into a model is becoming much more sophisticated and transparent, she said, but it is also complicated by the multiplicity of programs and program elements. Models inevitably must trade complexity for simplicity. By clearly articulating these trade-offs in published documents, modelers open them to questioning and review.

Nugent emphasized the importance of thinking broadly about value questions in health resource allocations. Value for money can mean different things to different people. Overall efficiency is one measure, but so are the effect on households and distributional effects.


