Highlights and Main Points Made by Individual Speakers and Participantsᵃ
- Financing and surveillance systems can work together in an iterative process to better understand pandemic risk. (Wolfe)
- The reproductive rate of a virus is a function of the number of people in a network and the likelihood of infection passing between any two of them. The principle of social proof, whereby people behave similarly to others around them, influences the spread of human disease; models need to account for the effects of social proof on disease spread. (Woo)
- In the early stages, there is some degree of opinion in identifying the most menacing infectious threats. People have to make decisions on imperfect data, and it is difficult to convey the uncertainty to government agencies. (Meltzer)
- The rarity of pandemics, concerns with data quality, and the instinct to hide potential outbreaks all increase the uncertainty in pandemic modeling. (Madhav)
- The concern with Ebola and avian influenza might be misguided, as the next epidemic will not be a repeat of one we know well. (Troedsson)
- A simple trigger may be desirable given the data limitations and the need to make decisions quickly in a crisis. (Madhav, Meltzer)
- However, using a simple trigger also has downsides. Surveillance for infectious diseases may yield false positives, resulting in an unwarranted alert that could have serious consequences. (Troedsson)
ᵃ This list is the rapporteurs’ summary of the main points made by individual speakers and participants and does not reflect any consensus among workshop participants.
Prashant Yadav of the University of Michigan convened the last session of the day, a panel on identifying triggers and modeling risk. He put the first questions to Nathan Wolfe, whose company Metabiota worked with the African Risk Capacity (ARC) on incorporating pandemic risk into ARC financing models. Wolfe emphasized that pandemic risks are diverse and should not be grouped together. Throughout history, animal viruses have infected humans, but the relative isolation of human populations helped contain these events. He called the audience’s attention to HIV, an animal virus that crossed into humans. Returning to the comparisons between epidemics and natural disasters, he likened HIV to a hurricane still blowing after 40 years; but with epidemics, it is possible to change the course of the storm as it happens.
Wolfe suggested that, like hurricane risk, epidemic risk can be transferred through insurance and that the market for such insurance would only grow over time. He pointed out that, if surveillance activities are limited, response needs to increase, and that the finance mechanism might do well to work in an iterative process with the preparedness and response processes. In the ARC system, the process of insuring the risk requires such iteration, as the country partners work with ARC to develop their contingency plans and better understand the risk to be insured. Wolfe commended the emerging interest in developing stronger surveillance and early warning systems. He pointed to the Cameroonian interagency pandemic prevention program as an example of coordinated surveillance systems across government agencies.
Gordon Woo of Risk Management Solutions built on these points, agreeing that pandemics were different from natural disasters in many ways. Unlike the risk of storms, it is not possible to predict when a new virus is about to emerge, as the propagation of the virus is mediated by complex human behavioral variables. The principle of social proof, whereby people try to behave in similar ways to others around them, influences the spread of human diseases. The reproductive rate of a virus is determined by the number of people in a given social network and the likelihood an infection might be transferred between two people in that network. Customs such as
the exchange of a kiss or handshake as a greeting and other social factors are crucial in calculating the spread of the outbreak. The Ebola crisis transformed social customs, like greetings and funerals, as people in the affected countries changed social norms in an attempt to control the outbreak. Their behavior helped mitigate the crisis, but modeling the spread of disease at the early and later stages of the epidemic would need to account for vastly different sociological variables.
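Woo's description of the reproductive rate as a product of network contacts and per-contact transmission risk can be sketched numerically. This is a minimal illustration, not a model presented at the workshop; all parameter values are invented:

```python
def r0(contacts_per_day, p_transmit, infectious_days):
    """Basic reproduction number under a simple homogeneous-mixing
    assumption: average daily contacts x per-contact transmission
    probability x days a case remains infectious."""
    return contacts_per_day * p_transmit * infectious_days

# Hypothetical baseline: 10 daily contacts, 5% transmission risk, 5 days.
baseline = r0(10, 0.05, 5)   # 2.5 -> the epidemic can grow
# Social-proof-driven behavior change (e.g., abandoning handshake
# greetings) halves the contact rate and pushes R0 toward the
# die-out threshold of 1.
distanced = r0(5, 0.05, 5)   # 1.25
```

The same arithmetic shows why models of the early and late stages of an epidemic need different sociological inputs: the contact term is not a constant.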
Mathematical modeling of outbreaks in sparsely populated areas, he continued, is complicated. The epidemiologic modeling used for insurance purposes does not generally have to account for this problem as it deals mostly with large catastrophes, but in sparse social networks these equations can break down. Sometimes alarmingly high case-fatality rates are hidden in rural or remote environments.
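Woo's point that the standard equations can break down in sparse networks can be illustrated with a stochastic branching process: even when R0 exceeds 1, a chain of transmission seeded by a single case often goes extinct by chance, which deterministic large-population equations miss entirely. A minimal sketch, assuming Poisson-distributed secondary cases (the distribution and all numbers are illustrative, not from the workshop):

```python
import math
import random

def poisson(lam):
    """Sample from a Poisson distribution (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def chain_goes_extinct(r0, max_cases=500):
    """Simulate generations of transmission from one seed case; True if
    the chain dies out before max_cases accumulate (i.e., never takes off)."""
    active, total = 1, 1
    while active and total < max_cases:
        active = sum(poisson(r0) for _ in range(active))
        total += active
    return active == 0

random.seed(42)
trials = 2000
extinct = sum(chain_goes_extinct(1.5) for _ in range(trials)) / trials
# With R0 = 1.5, a substantial fraction (roughly 40% in theory) of
# single-case introductions fizzle out on their own, even though a
# deterministic model predicts unchecked growth.
```

This chance extinction is one reason severe outbreaks with high case-fatality rates can stay hidden in rural or remote environments.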
For Martin Meltzer of the U.S. Centers for Disease Control and Prevention (CDC), the initial concerns in an outbreak generally come down to three points: the potential unmitigated impact of the epidemic (the number of cases, hospitalizations, and deaths), the potential impact of interventions, and the duration of the outbreak. During the 2009 influenza outbreak, the initial data on case-fatality rates were wrong, and the first responses were based on the assumption that the disease was much more lethal than it actually was. There was, however, a spring wave of the virus that afforded some data, which allowed them to better estimate the potential effects of the outbreak and the planned interventions.
He directed the audience to the CDC’s Influenza Risk Assessment Tool, which offers a framework for assessing the potential impact of an influenza strain based on its clinical severity and transmissibility relative to previous pandemics and seasonal flus. He pointed out that, often in the early stages, there is some opinion and nuance to making this assessment. There are also challenges to understanding the trigger point for an epidemic, Meltzer continued. First of all, it takes time to collect data and understand the emerging pathogen. There are usually also questions of data accuracy, and the lack of data in some countries means that analysts commonly extrapolate information on the spread and impact of disease in the United States or Europe to other parts of the world.
Meltzer favored simple triggers over those dependent on complicated modeling and data of questionable accuracy. He has found information about virulence and case fatality to be the most compelling data for decision makers in public health. While people have to make decisions on imperfect data, he observed that it can be challenging to talk about probabilities and uncertainties in the data with public health agencies.
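A trigger of the kind Meltzer favored could be as simple as a pair of fixed, pre-agreed thresholds on the measures he named as most compelling. The thresholds below are invented placeholders for illustration, not CDC values:

```python
def should_trigger(weekly_cases, case_fatality_ratio,
                   case_threshold=100, cfr_threshold=0.02):
    """Fire the response trigger only when both the observed weekly case
    count and the case-fatality ratio clear fixed thresholds."""
    return weekly_cases >= case_threshold and case_fatality_ratio >= cfr_threshold

assert should_trigger(250, 0.05)        # widespread and severe: trigger
assert not should_trigger(250, 0.001)   # widespread but mild: hold
assert not should_trigger(12, 0.30)     # deadly but still contained: hold
```

The appeal of such a rule is that its behavior can be explained to decision makers in one sentence, with no discussion of model uncertainty.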
Nita Madhav from AIR Worldwide then described the different kinds of models available to quantify the uncertainty Meltzer described. These models can help explain how a disease spreads in a population and how mitigation efforts can alter the trajectory of an epidemic. Models can shed light on which mitigation efforts would be suitable for Ebola, for example, and how those differ from the ones suitable for influenza.
Madhav cited four major pandemics in the past century, acknowledging that there might be differences of opinion about which ones to count. In any case, pandemics are infrequent events, increasing the uncertainty in pandemic modeling, to say nothing of the reliability of the data or the instinct in some places to hide cases early on. She and her team model where an outbreak might start and the response capacity in those places and, with different combinations of variables, attempt to measure how the disease would spread.
Improving data quality would reduce the uncertainty in the models and ease decision making, Madhav continued. Even in forecasting the length of an outbreak, the difference between 12- and 18-month emergencies is meaningful, and all stakeholders would be grateful for better precision in such estimates.
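The scenario-combination approach Madhav described can be caricatured as a small grid of uncertain inputs run through a crude projection. Every name and number here is invented for illustration; real catastrophe models are far richer:

```python
from itertools import product

R0_SCENARIOS = [1.3, 1.8, 2.4]   # transmissibility uncertainty
DETECTION_DAYS = [7, 14, 30]     # response-capacity uncertainty
SERIAL_INTERVAL = 5              # assumed days between case generations
SEED_CASES = 10

def projected_cases(r0, days_undetected):
    """Crude exponential projection of cases before a response begins."""
    generations = days_undetected / SERIAL_INTERVAL
    return round(SEED_CASES * r0 ** generations)

grid = {(r, d): projected_cases(r, d)
        for r, d in product(R0_SCENARIOS, DETECTION_DAYS)}
# The spread between best and worst scenarios is itself the key output:
best, worst = min(grid.values()), max(grid.values())
```

Better data narrows the scenario ranges fed into such a grid, which is why Madhav argued that improved data quality directly eases decision making.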
In the open discussion period, the audience raised questions about model ambiguity. Woo acknowledged that ambiguity is a curse of any hazard, but particularly epidemics. He suggested that the best way to deal with the ambiguity would be to convene a group of experts to review a range of models and ask for their judgment on identifying the risk. In catastrophe risk, it is now fairly common practice to use expert judgment as a formal process to quantify risk after modeling and, he continued, modern markets are fairly savvy with handling ambiguity in models.
Meltzer clarified that he saw a difference between the preepidemic modeling, which can be complex and warrant expert attention to the nuance in the data, and the models produced at the beginning of an epidemic to determine the trigger point. The latter, he felt, should be kept simple because the audience during an emergency is diverse and not necessarily well versed in mathematical modeling. Madhav agreed, stressing the value of flexibility in emergency response plans. She found that simplicity in a trigger point was desirable from a response point of view and also from the investors’ position, as they may feel comfortable with a model that can be more easily replicated. She assured the audience that, while models are only as good as the data feeding into them, there is still a need to make do with imperfect information.
Tendai Biti, formerly of the Ministry of Finance of Zimbabwe, brought up the particular challenges of fragile states in preparedness. He described his region as being particularly prone to disasters and epidemics because of the lack of infrastructure and capacity. He mentioned a recent cholera epidemic in Zimbabwe that killed 4,000 people in a short time because the agencies were not ready. He questioned how a country like his could even produce reliable data to use in modeling. Yadav suggested that the developing global network of CDCs might be able to help by supplying a data architecture that could help modelers in less developed countries. Meltzer agreed that this network was growing and improving, but pointed out that there are still only small amounts of relevant data readily available.
Hans Troedsson of the World Health Organization (WHO) then reminded the audience that the emphasis on Ebola and avian influenza might be misguided. The next global epidemic will not be a repeat of the ones history has prepared us for. He recalled his experience managing an influenza outbreak in Vietnam, where the local laboratory found an entire set of specimens to be H5N1 negative, while retesting in a more developed country indicated that hundreds of the specimens were positive. Had they pulled the trigger then, it would have been disastrous: it eventually became clear that the laboratory in the developed country had used the wrong primer and produced false positives. If WHO had used a simple, conservative trigger in that situation, it would have run down its financial reserves quickly. But he recognized that pulling the trigger too late, as in the Ebola crisis, can also have negative consequences.
Woo pointed out that, apart from the question of modeling, this experience pointed to the need for better surveillance in developing countries. Much as the insurance industry paid for the first fire brigade in Britain, he reasoned, the life insurance industry could pay for the surveillance systems badly needed in poor countries. As long as the quality and amount of data available from these countries remain poor, the identification of the trigger will be fraught.
Madhav and Godal saw room for monitoring of population movements, both daily commutes and large-scale migrations, in understanding the spread of epidemics. Tappero agreed, saying that modeling of any trigger point is only as reliable as the data underlying the model. The innovative disease surveillance program in sub-Saharan Africa aims to improve data quality by providing technical and financial support for data collection and monitoring in that part of the world. An integrated electronic data management system could allow for more efficient use of this information.