RANKING VACCINES: A Prioritization Framework

3 Data Evaluation and Software Development

The computational and the value submodels were developed in parallel and then integrated over a software platform that allows users to interact with and understand the relationships between the model inputs and outputs. The model development and interface development occurred concurrently, and the committee adjusted its software development strategy based on feedback received from consultant concept evaluators. In the following sections we describe the selection of vaccine candidates and of the related data to be fed into the model, and then the actual model development and evaluation process.

Selection of vaccine candidates

The committee considered several hypothetical vaccine candidates from the perspectives of the United States and of a developing country. The committee agreed on South Africa as the particular developing country for this process since its income profile, its population, and its health, economic, and social priorities are vastly different from those of the United States. A second reason for selecting South Africa was the availability of input data for disease burden and vaccine estimates, which were necessary to populate and test the model.

The five hypothetical candidate vaccines chosen were a universal influenza vaccine plus vaccines against tuberculosis, group B streptococcus, malaria, and rotavirus. However, as the work of assembling the data for the first vaccines began, it became clear that the present scope of work
made it feasible to complete testing for only three of the candidate vaccines. The committee chose the universal influenza vaccine, the tuberculosis vaccine, and the group B streptococcal vaccine for this phase for a collection of reasons related to how the candidate vaccines helped capture various health, economic, and vaccine attributes.

For example, the universal influenza vaccine addresses a disease that is important in both high- and low-income countries, and the convenience of a single vaccine for all influenza strains would make it readily useful for all parts of the world. Furthermore, influenza affects all age groups and causes widespread morbidity worldwide. In contrast, tuberculosis does not pose a significant threat in high-income nations, so a vaccine for tuberculosis would likely be of most use in low- and middle-income countries. And a group B streptococcus vaccine would be pertinent for both low- and high-income countries but is designed for administration to pregnant women (a special population) and would confer benefits to their infants. Additional information on the impact of influenza, tuberculosis, and group B streptococcus can be found in Boxes B-1, B-2, and B-3 in Appendix B.

Data sourcing and analysis

In its data-gathering process, the committee did not attempt to develop the best or most detailed estimates about each disease. The objective was instead to obtain reasonable data that could help the committee evaluate the model rather than to generate precise projections about specific vaccines. The committee chose to develop reasonable estimates based on literature reviews and expert opinion, and it sometimes also relied upon committee-generated assumptions because much of the information required for the model, especially information concerning South Africa, was not available.
It is thus reasonable to view the data inputs as characterizing hypothetical vaccines against influenza-like, tuberculosis-like, and group B streptococcus–like syndromes.

The estimates and assumptions used in this model were based upon literature reviews, publicly available data provided by international agencies such as the World Health Organization (WHO), and publications of various other organizations, such as the Agency for Healthcare Research and Quality (AHRQ) and the Healthcare Cost and Utilization Project (HCUP) in the United States. For each candidate vaccine, the model used several categories of inputs (see Table 3-1 for specifics):
TABLE 3-1 Data Entries, Sources, and Methods of Analysis

Demographic Variables

Life Tables
  Sources: Country life tables are available from WHO, Global Health Observatory Data Repository (http://bit.ly/H5iYNC). Standard life expectancy depicts the life expectancy for the Japanese population; also available through WHO, Global Health Observatory Data Repository (http://bit.ly/Ho2VI3).
  Method of Analysis: Total population, N, data was used from a separate document provided by WHO officials.

HUI-2
  Sources: Fryback et al. (2007). U.S. norms for six generic health-related quality-of-life indexes from the National Health Measurement Study. Medical Care 45(12):1162–1170.
  Method of Analysis: HUI-2 scores are derived from the literature. Due to the lack of HUI-2 data for South Africa, values for the United States are used.

Wage Rate
  Sources: Hourly wage rate is gathered from the Bureau of Labor Statistics. Parents' wage rate is used for children under the age of 15 years.
  Method of Analysis: The BLS Current Population Survey data were used for wage rates.
  Notes: The wage rate for South Africa was crudely estimated by converting the United States wage rate to a South African wage based on the prevailing exchange rate.
TABLE 3-1 Continued

Disease Burden

Target Population
  Sources: Obtained from the total population data used in the "demographics" section.
  Notes: The specific group from the entire population suited for the vaccine.

Incidence Rate (annual)
  Sources: CDC; WHO; published literature.
  Notes: See the disease and vaccine tables in Appendix C for details.

Case Fatality Rate
  Sources: Published literature; expert opinion.

Vaccine Characteristics

Vaccine Coverage; Vaccine Effectiveness
  Sources: Published literature; expert opinion.

Herd Immunity Threshold
  Notes: Assumed to be 100 percent due to the infectious nature of each disease.

Length of Immunity; Doses Required per Person
  Sources: Published literature; expert opinion.

Cost per Dose; Research Costs; Licensure Costs; Start-Up Costs; Time to Adoption
  Sources: CDC; expert opinion.
Disease Morbidities

Percent of Cases; Disutility (Toll); Disability Weight; Duration
  Sources: Published literature; expert opinion.
  Notes: In cases where information on exact conditions was not available in the HUI-2 index and DALY weights, proxies were used to estimate values for tolls and disability weights. See the disease and vaccine tables in Appendix C for details.

Vaccine-Related Complications

Morbidity Probability per Dose; Disutility (Toll); Disability Weight; Duration
  Sources: Published literature; expert opinion.
  Notes: See the disease and vaccine tables in Appendix C for details.

Health Care Services

Services Used for the Treatment of Disease and Potential Complications Caused by the Vaccine
  Sources: Healthcare Cost and Utilization Project (HCUP); WHO-CHOICE (Choosing Interventions that are Cost Effective) tables of costs and prices (WHO, 2003).
  Notes: See the disease and vaccine tables in Appendix C for details.
• Population characteristics, including the number of persons in the population and age and sex distributions. The underlying population characteristics for both the United States and South Africa were imported from country life tables provided by WHO through its Global Health Observatory Data Repository.
• Disease characteristics, including annual incidence rate, case-fatality proportion, and complications. For the United States, disease-burden data were obtained primarily from the literature and reports by the Centers for Disease Control and Prevention (CDC), such as Morbidity and Mortality Weekly Reports (MMWR) and National Vital Statistical Reports (NVSR). Comparable information for South Africa was not as readily available. Statistics South Africa and SA Health Info were helpful in providing approximate data, which were adapted to best fit the model parameters.
• Health characteristics, including disability-adjusted life years (DALYs) and quality-adjusted life years (QALYs), were obtained from the available literature. DALYs were calculated by assigning DALY weights from the Global Burden of Disease study (Mathers et al., 2006). Similarly, HUI-2 was used as a measure to calculate QALYs. When the exact condition of concern was not categorized in the DALY and HUI-2 weights, proxies were used. Appendix C provides a listing of the data used in the model.
• Vaccine characteristics, including the number of years to full adoption, population coverage rate, effectiveness, length of immunity, doses required per person, costs of administration, and research and development costs. Vaccine traits were a combination of factual data and expert panel judgments.
Vaccine efficacy, vaccine-associated complications, coverage, and the number of doses required for immunity were estimated from the literature, whereas the time to adopt a vaccine within an immunization scheme, development risk, and innovation for new delivery methods were guided by expert opinion. Data on health care costs for disease and vaccine candidates were obtained from both a literature review and governmental Web sites, such as those for HCUP and CDC for the United States and WHO's Choosing Interventions that are Cost-Effective (CHOICE) project for estimates of health care services costs in South Africa.

For each of the selected vaccines, assembling the data needed for the model presented a different set of challenges.

Tuberculosis poses a significant health challenge in South Africa, and published literature concerning the magnitude of the disease is available.
But accurate epidemiologic and health care cost estimates are difficult to obtain. Some assumptions about disease burden were made to generalize available information to South African populations when age-specific data were not available. By comparison, tuberculosis incidence and health care cost records are available for the United States; thus data for the disease in the United States can be considered fairly accurate.

Group B streptococcal infection is a serious disease in infants. Despite the disease burden posed in this vulnerable population, comprehensive surveillance is lacking throughout the world. Additionally, locating data for economic analyses is a daunting task in light of the limited resources available for this estimation. Thus, it was very difficult to populate all the model parameters for group B streptococcus, and many fields of data entry are informed assumptions.

Information for influenza, by contrast, was fairly accessible through U.S. and international flu surveillance modules, and the literature on flu vaccines is abundant, given the global prevalence of the illness.

SMART Vaccines submodels

SMART Vaccines includes two submodels: the computational submodel and the value submodel. As previously shown (Figure 2-1), the computational submodel calculates multiple health and economic measures associated with new vaccine candidates. Many of these measures build upon the work presented in the 2000 IOM report. The computational submodel evolved with improvements in the health and economic attribute listing for the model. The desire for interpretable health and economic attributes drove much of the computational submodel design.

Early prototypes strongly resembled the model presented in the 2000 report. Those prototypes were tested using the same input information and were determined to reliably replicate the results of the 2000 report.
However, this initial prototyping highlighted several limitations in the analytical structure of the 2000 report, specifically in the context of accommodating the following features:

• Computations for all desired health and economic attributes.
• Variations in timing between vaccine administration and onset of disease or death.
• Differences between vaccines that protect for different lengths of time (e.g., a 5-year universal influenza vaccine versus a 1-year seasonal influenza vaccine).
• Potential future improvements accounting for disease or population dynamics.

These limitations in flexibility directed the modeling efforts toward a population process model whose technical aspects are presented in Appendix A.

The computational submodel comprises seven computed attributes derived from health, vaccine, and economic inputs. The remaining 22 attributes, called "qualitative attributes," were defined in an iterative process by the committee. After formal definitions were developed, levels of assessment were specified (Table 2-1).

The health and economic attribute measures were stratified by category (e.g., Level 2 = $/QALY between $0 and $10,000) so as not to overspecify computational model results, given the inherent uncertainty in the input information. Determining the appropriate categories for health and economic measures that are to be generalized across populations of varying size, disease incidence, and mortality rates is a complex process. The categorization of the health and economic attributes needs to be conducted through a thorough evaluation of the model, supported by epidemiologic and economic evidence. This categorization has yet to be completed, but a preliminary assessment produced an initial set of categories to use as examples.

The qualitative attributes not generated by the computational model are assessed directly by users. Definitions of categories for direct assessment were developed in an iterative process and then finalized. After finalizing the attribute definitions and assessment categories, the committee incorporated the multi-attribute weighting approach. The committee chose the rank order centroid method described in Chapter 2 for its ease of use and reliability.

Development of the computational submodel

The computational submodel contains expressions for health and economic values that are based on a population process model.
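The rank order centroid method mentioned above has a simple closed form: the attribute ranked k out of n receives the average of 1/j for j = k through n. The sketch below is an illustration of that general formula, not the committee's actual implementation.

```python
def roc_weights(n: int) -> list[float]:
    """Rank order centroid weights for n attributes, most important first.

    The attribute ranked k receives w_k = (1/n) * sum(1/j for j in k..n);
    the weights are positive, decreasing, and sum to 1.
    """
    return [sum(1.0 / j for j in range(k, n + 1)) / n for k in range(1, n + 1)]

# With four ranked attributes the weights are approximately
# [0.521, 0.271, 0.146, 0.063].
weights = roc_weights(4)
```

Because the weights depend only on the ranking, users need supply only an ordering of attributes rather than numeric importance judgments, which is the ease-of-use property the committee cites.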
The process model is initialized at year i = 0 for a stationary population under three conditions: with no vaccine (i.e., the baseline population), with the vaccine in steady state delivery, and with the vaccine first being introduced. Annualized health and economic values are calculated by comparing a population with a vaccine in steady state to a baseline population after aging 1 year. Values capturing the efficiency of the investment (i.e., cost-effectiveness) are calculated by comparing a population where the vaccine is first introduced to a baseline population after aging 100 years. The following are further relevant details about the three types of populations:
1. The baseline population may have received no vaccine for the disease target. However, the baseline population may include the current vaccination state as a reference against which to compare a newly developed vaccine with different (i.e., more desirable) characteristics targeting the same disease.

2. When the vaccine is administered to the steady state population, individuals of all ages are assumed to have had the opportunity (i.e., accounting for coverage) to receive the vaccine at model initialization. For example, even for a vaccine that is solely targeted for infants, individuals of all ages are assumed to have had the opportunity for vaccination. Achieving steady state for such a vaccine would require many years, as compared with a vaccine designed for delivery to all ages.

3. The vaccine first being introduced to a population assumes that the vaccine is delivered solely to the target population (i.e., accounting for coverage) at model initialization.

The age-specific population process model simulates measures of population size for the total population, the target population, the vaccinated immune members of the population, the vaccinated susceptible members, the not-vaccinated immune members (i.e., those who have indirect protection through herd immunity), and the not-vaccinated susceptible members. Simulated health measures include incident cases, deaths by disease, vaccine complications, all-cause deaths, and cause-deleted deaths. Mathematical expressions for these process measures may be found in Tables A-1 and A-2 in Appendix A.

Health and economic attributes are calculated from the population process model with mostly linear expressions (as shown in Tables A-1 and A-2) to serve as a starting point for the committee's modeling effort. Annualized measures are differenced over the first year i = 1 between a population with no vaccine and a population with the vaccine in steady state.
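In stylized form, this first-year comparison between the no-vaccine and steady-state populations can be sketched for a single age group. The parameter values below are purely illustrative, and herd immunity, age structure, and within-year timing are all ignored; the actual model tracks the vaccinated and unvaccinated immune and susceptible groups described in Appendix A.

```python
def annualized_impact(pop, incidence, cfr, coverage, efficacy):
    """Difference one year of disease outcomes between a baseline
    (no-vaccine) population and the same population with the vaccine
    in steady state. Single-age-group sketch only."""
    baseline_cases = pop * incidence
    # In steady state, a fraction coverage * efficacy of the group is immune.
    steady_state_cases = baseline_cases * (1.0 - coverage * efficacy)
    cases_prevented = baseline_cases - steady_state_cases
    deaths_averted = cases_prevented * cfr
    return cases_prevented, deaths_averted

cases, deaths = annualized_impact(
    pop=1_000_000, incidence=0.02, cfr=0.001, coverage=0.85, efficacy=0.90)
# 15,300 cases prevented and about 15.3 deaths averted per year.
```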
These annualized measures include deaths averted, cases prevented, QALYs gained, DALYs averted, net direct costs, workforce productivity (i.e., indirect costs), and one-time costs. The length of time associated with the annualized health and economic attributes related to death and permanent impairment is assumed to be 6 months, as this is the average time of death between year i = 0 and year i = 1. Within these tables, the vaccine populations for annualized measures refer to the vaccine-in-steady-state populations.

Alternatively, calculations of cost-effectiveness measures (i.e., $/QALY or $/DALY) are performed over 100 years. Time durations incorporated within QALYs and DALYs (i.e., included in cost-effectiveness only) that are associated with death and permanent impairment are assumed to be future life expectancy. Life expectancy is adjusted for baseline health utility indices (i.e., HUI-2) for QALYs only. Life expectancy is discounted for both QALYs and DALYs when a discount factor is introduced. Expressions for cost-effectiveness measures may be found in Tables A-1 and A-2. Within these tables, vaccine population references are assumed to be the populations where the vaccine is first introduced.

Evaluation of the computational submodel

The computational submodel has been evaluated using four base cases for preventive vaccine candidates. These cases, given in Table 3-2, are for seasonal influenza, group B streptococcus, and tuberculosis within the United States (2009) and for tuberculosis within South Africa (2009). Table 3-2 presents input assumptions for the target population, the duration of immunity, the cost to administer, the herd immunity threshold, and coverage. It also displays annualized health and economic attribute measures applicable to a vaccine in a steady state population as well as efficiency measures for a population in which a vaccine is first introduced. These efficiency measures are summed over 100 years and discounted at 3 percent. These evaluations allow for a constructive comparison of characteristics across base cases.

The model identifies the vaccine for seasonal influenza (i.e., with a 1-year duration of immunity) as having the largest health impact in terms of averting deaths, preventing cases, and increasing health-adjusted life years within the United States. Direct costs are notably high because annual administration (i.e., delivery costs) to an assumed undifferentiated target population of all ages is much more expensive than delivering the vaccine solely to infants.
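The 100-year discounted efficiency measure can be approximated in form (not in value) by a net-present-value ratio. The sketch below makes strong simplifying assumptions, constant annual flows from year 1 and a single up-front cost, whereas the actual model follows a population from the year the vaccine is first introduced; all input numbers are illustrative.

```python
def cost_per_qaly(one_time_cost, annual_net_cost, annual_qalys,
                  years=100, rate=0.03):
    """Discounted cost per QALY gained over a fixed horizon.

    Assumes constant annual flows starting in year 1, discounted at
    `rate`, plus an undiscounted one-time cost at year 0.
    """
    discount_sum = sum((1.0 + rate) ** -t for t in range(1, years + 1))
    npv_costs = one_time_cost + annual_net_cost * discount_sum
    npv_qalys = annual_qalys * discount_sum
    return npv_costs / npv_qalys

# Illustrative values only: a $500 million one-time cost, $20 million in
# annual net direct costs, and 25,000 QALYs gained per year.
ratio = cost_per_qaly(500e6, 20e6, 25_000)
```

Note that with no one-time cost the discount factors cancel and the ratio collapses to the annual cost per annual QALY, which is one reason the steady-state and first-introduction perspectives in the text can diverge.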
However, given the improvements in health-adjusted quality of life, the cost-effectiveness is more favorable for the seasonal influenza vaccine than for the other candidates in the United States.

The evaluation of the base cases demonstrates major differences between targeting tuberculosis in the United States and in South Africa. The health and efficiency attribute measures are improved within the South African population, where disease incidence is much higher. In South Africa, administering the vaccine in steady state is cost-saving (i.e., net direct costs <0). It is important to note that the corresponding efficiency measures do not demonstrate cost savings (i.e., cost per QALY or DALY >0). This highlights a difference between examining vaccine candidates in steady state and the standard computations of cost-effectiveness
TABLE 3-2 Computational Submodel Evaluations for Baseline Cases

                               Influenza,      Group B          Tuberculosis,   Tuberculosis,
                               United States   Streptococcus,   United States   South Africa
                               2009            United States    2009            2009
                                               2009

Demographic Attributes
Target Population              All ages        Infants          Infants         Infants
Duration of Immunity           1 year          Life             Life            Life
Cost per Dose                  $13             $100             $50             $25
Herd Immunity Threshold        None            None             None            None
Coverage (Average)             38%             85%              85%             50%

Health Attributes, Vaccine Steady State (per Year)
Premature Deaths Averted       12,095          1,248            671             28,973
Incident Cases Prevented       6,123,612       14,841           7,451           140,239
QALYs Gained                   21,011          3,571            1,373           40,680
DALYs Averted                  8,665           1,170            622             21,421

Economic Attributes, Vaccine Steady State (per Year)
Net Direct Costs               $1,929,730,356  $274,313,238     $253,174,240    −$95,357,702
  (Delivery minus Health Care)
Vaccine Delivery Costs         $2,691,438,051  $570,970,118     $285,485,059    $15,278,835
Health Care Costs Averted      $761,707,695    $296,656,880     $32,310,819     $110,636,537
Workforce Productivity Gained  $4,619,173,825  $102,210,335     $28,345,945     $285,934,338
One-Time Costs                 $150,100,000    $810,000,000     $610,000,000    $610,000,000
  (Research + Licensure)

Cost-Effectiveness, Vaccine First Introduced (100 Years)
$/QALY                         $7,389          $40,539          $801,122        $204
$/DALY                         $14,130         $54,992          $1,195,821      $270
FIGURE 3-13 The SMART Vaccines Beta vaccines (complications) screen further asks the user to enter estimated costs associated with vaccine-related complications.

group B streptococcus) might have multiple vaccine candidates. Users can build up the data for a single vaccine, save it (e.g., as "TB Vaccine A"), modify the input data to reflect another candidate vaccine's characteristics, and save it as another vaccine (e.g., "TB Vaccine B").

As with the disease burden data, these data currently have only four age groups but will be expandable in future versions. Here, the user specifies age-specific vaccine coverage (the percent of the population receiving the vaccine) and effectiveness (among those being vaccinated). SMART Vaccines Beta automatically fills in the population numbers for each age group. These data show, for example, that the user expects 40 percent of
adults to be vaccinated with a 75 percent effectiveness, so that 30 percent of the adult population becomes immune.

FIGURE 3-14 The SMART Vaccines Beta Value Assessment page allows the user to enter information in eight categories, from health to policy considerations. Each category on this page expands and collapses like an accordion menu.

Product Profile

In this step the user specifies the potential attributes of a specific vaccine (Figure 3-11). Of course, these are not known with certainty before actual development, so users must use expert opinion to conjecture about the candidate vaccines. These attributes are central to the issues of vaccine prioritization because they include basic aspects of the vaccine (e.g., how many doses and the costs per dose to purchase and administer), research and development costs, licensing costs, and expected time to adoption. The
user can subsequently change these product profile attributes and see (on a concurrent view of Step 6) how the computed attributes and the priority score have changed. This gives an "on the fly" capability to see how these attributes affect the rankings and their computed components, and it allows users to consider trade-offs between attributes as they focus product development efforts, for example, accepting larger research and development costs in exchange for reduced costs to administer by removing cold-chain requirements or product shelf-space demands.

FIGURE 3-15 The SMART Vaccines Beta Value Assessment page showing economic entries from the user, with pop-up help menus containing definitions of terms.

Complications

Step 4 also includes an entry screen for potential complications that a new vaccine may cause (Figure 3-12). These data are similar in concept to those
in Step 3 (Disease Burden), but in this case they refer to complications of a candidate vaccine rather than to the consequences of unprevented disease. Users need to specify each possible complication and all associated data. Since these complications are unknown until a vaccine is fully field tested (or used widely enough to detect rare complications), users will necessarily draw on expert opinion and work by analogy from vaccines with similar characteristics (e.g., live or inactivated virus or types of adjuvants). Figure 3-13 shows the bottom of the Complications page, reached using the scroll bar at the screen's right side.

FIGURE 3-16 The SMART Vaccines Beta Value Score (dashboard screen) presents the final values for each vaccine attribute, given the information entered by the user in the earlier steps.
FIGURE 3-17 The SMART Vaccines Beta Value Score screen shows a side-by-side comparison of all vaccine candidates. The top priority areas selected by the user are presented in the Drag Vaccine Values to Rank box for reference, to enable re-ranking if necessary.

Step 5: Value Assessment

Step 5 asks users to enter qualitative information about each vaccine. The entries come in eight categories, as previously shown in Table 2-1. Each of these categories opens up like an accordion menu to show all of the qualitative attributes associated with any vaccine, whereupon the user checks the appropriate category for each attribute. Each category has a pop-up bubble associated with it to describe to the user the committee's intent or definition regarding a particular categorical choice for each attribute (each indicated by a symbol). The user need not fill out these data queries if the attributes in question have not been selected in the value choices (Step 1). Figure 3-14 shows this step with the Health Considerations bar opened up,
and Figure 3-15 shows the same step with the Economic Considerations bar opened up.

Step 6: Value Assessment and Score

The screen at Step 6 shows values for all of the calculated attributes for each vaccine under consideration (Figure 3-16). This provides a single "dashboard" view that shows what all of the previous data entries lead to in calculated attributes. For example, Premature Deaths Averted per Year uses data on population size by age, disease incidence by age, vaccination rate by age, vaccine efficacy rate by age, and the case mortality rate to compute the number of premature deaths averted per year. A similar computation creates the Incident Cases Prevented per Year. Calculations of QALYs gained and DALYs averted also include information (entered at Step 2) regarding disease burden. As noted before, users may select either DALY or QALY measures, but not both. If a user selects the DALY measure, he or she has the option (at the upper left of the Step 6 screen) to use or omit the associated age weights. The calculated illustrative value scores are shown in Figure 3-17.

Consideration of uncertainty

In this phase, the committee was unable to explicitly model issues relating to uncertainty in SMART Vaccines Beta. In Phase II the committee will consider various elements of uncertainty to be included in SMART Vaccines 1.0. Sources of uncertainty and how they affect SMART Vaccines are briefly discussed below, along with some possible methods to address these issues in Phase II.

Uncertainty About the Likelihood of Successful Licensure

SMART Vaccines Beta includes one uncertainty component, but instead of listing it as a probability the committee characterized it as a value attribute: "Likelihood of Successful Licensure in 10 Years" under "Scientific and Business Considerations" (Table 2-1). The uncertainty related to the time at which a vaccine may become available for public use affects judgments about priority.
One possible way to address this uncertainty would be to program the uncertainty component into the computational submodel as a delay between "now" (the time when the priorities are being set) and the time when the health benefits due to vaccination might
be expected to accrue and the time when net costs begin to include the vaccination costs.

Earlier, in the 2000 report, each vaccine candidate under consideration was assigned to one of three development intervals: 3 years, 7 years, or 15 years. An additional 5-year post-licensure delay was assumed before the vaccine was actually made available for public use. The vaccine candidates in that study were assigned to their respective development intervals based on the 2000 "committee's assessment of the current state of the vaccine's development" (IOM, 2000). Once the interval was assigned, no further consideration of uncertainty was made. Costs and benefits were discounted in accordance with the chosen time intervals.

SMART Vaccines Beta addresses this uncertainty in a different way, one consistent with the programming resources available in this phase of the study. The computational submodel computes the health benefits and economic consequences on an annual basis as if the vaccine were presently available. The committee added the attribute "Likelihood of Successful Licensure in 10 Years" to reflect the greater value of a vaccine that may be developed in the near future versus sometime in the distant future. This attribute requires a subjective assessment by users, in the same manner as the 2000 report's subjective assignment of the development interval.

In SMART Vaccines Beta, users are asked to assess the state of the science and the market to support the development and licensure of the new vaccine candidate according to a five-point Likert scale (1 reflecting "almost certainly will be licensed within 10 years"; 5 reflecting "almost certainly will not be licensed within 10 years"). This attribute increases the overall priority score of the vaccine as a function of a higher likelihood of licensure. The committee determined that 10 years was a reasonable limit for the purpose of modeling.
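Had the delay instead been embedded in the computational submodel, benefits would in effect be scaled by a present-value multiplier. A small sketch, using a 3 percent discount rate and the 2000 report's three development intervals, shows how quickly that multiplier falls off:

```python
def delay_multiplier(years_to_availability, rate=0.03):
    """Present-value factor applied to a benefit stream whose start is
    pushed years_to_availability years into the future."""
    return (1.0 + rate) ** -years_to_availability

# Delays of 3, 7, and 15 years shrink present value to about
# 0.915, 0.813, and 0.642 of the undelayed value, respectively.
multipliers = [delay_multiplier(t) for t in (3, 7, 15)]
```

This exponential falloff is the effect that, as discussed below, users may underweight when the delay is expressed only as a qualitative attribute.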
Another possible way to implement this concept as an attribute would be a direct assessment of the expected time to vaccine licensure and availability, but this would not include a sense of the uncertainty around the assessment. The effect of using such an attribute in the value submodel is functionally equivalent to including a direct estimate in the computational submodel: vaccine candidates that are expected to be licensed sooner will receive higher scores, and those not expected to be licensed soon will receive lower scores, all else being equal.

There are advantages to embedding this uncertainty component in the value submodel. Typically, users think about vaccine benefits and costs as if the vaccine were available, not as if they were discounted to the future. If the time to availability were embedded in the computational submodel, the definitions of certain attributes relating to the benefits and costs would have to be changed. The user entries would then need to be averaged out as a function of the subjective distribution of the estimated licensure time supplied by the users. Although economists are used to thinking in terms of discounted quantities, the average user may not be.

There are also possible disadvantages to this approach. Because users may not appreciate the exponential effect of discounting benefits delayed to the future, they may underweight the value attribute relating to the likelihood of successful licensure in 10 years. The committee discussed making selection of this particular attribute mandatory among the 29 attributes, in part to reflect this concern about underweighting. In Phase II, the committee will revisit how to better represent this uncertainty component in SMART Vaccines 1.0.

Other Uncertainties

Manning and colleagues (1996) identify three sources of uncertainty in cost-effectiveness models (which likewise affect any computational model such as SMART Vaccines): (1) parameter uncertainty, (2) model structure uncertainty, and (3) model process uncertainty.

Parameter Uncertainty

The computational submodel in SMART Vaccines Beta, although simplistic in its current form, is a function of many parameters: population modeling, estimates of health burden and benefits, and estimates of health care costs. Each of these parameters has components of uncertainty surrounding it. The current model does not incorporate uncertainty about these parameters in its computations. The most straightforward method of doing so would be to specify a distribution surrounding each parameter and then use Monte Carlo simulation to sample from the distributions and compute results for each sample. A distribution for each of the computational outputs could then be built, and these, in turn, could be used to determine an overall distribution for the priority score. The committee elected not to do this for SMART Vaccines Beta due to two concerns.
The first concern relates to the source of the distributions for the input parameters. Some parameters, such as population life tables, affect all vaccine candidates, while others are specific to an attribute or a vaccine candidate. Life tables are built from population sample data and thus carry uncertainty in every age-specific mortality rate and life expectancy. Whether these uncertainties should be incorporated in the computational submodel is an open question; many such models take population and life-table values as "given" without incorporating any uncertainty around them. In any case, with additional effort, these uncertainties could be represented in the computational submodel.
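One way the life-table uncertainty just described could be represented is to treat each age-specific mortality rate as a beta-distributed quantity rather than a fixed value. The rates, equivalent sample sizes, and three-band toy life table below are invented for illustration and are not data used by SMART Vaccines Beta.

```python
# Illustrative sketch: sample plausible life tables by drawing each
# age-band mortality rate from a beta distribution centered on its point
# estimate, with an assumed "equivalent sample size" controlling spread.
import random

random.seed(0)

# (point estimate of mortality rate, equivalent sample size) per toy age band
mortality = [(0.005, 10_000), (0.020, 10_000), (0.080, 5_000)]

def sample_expected_years() -> float:
    """One draw of expected years lived across the toy age bands,
    crediting one year for each band survived."""
    alive, years = 1.0, 0.0
    for rate, n in mortality:
        q = random.betavariate(rate * n, (1 - rate) * n)  # sampled rate
        alive *= (1 - q)
        years += alive
    return years

draws = [sample_expected_years() for _ in range(2_000)]
mean_years = sum(draws) / len(draws)
print(round(mean_years, 2))  # close to the deterministic value (about 2.87)
```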
More concerning are the uncertainties about health-related quality-of-life tolls and disability weights for various disease states. These are based partly on data and partly on expert opinion. The disability weights used in DALY models are also, in part, based on expert opinion, while the disutility weights for QALY models can also draw on results elicited from studies of relevant populations. In the case of low-income countries, the committee anticipates that only sparse data, at best, will be available to assist users in specifying disutilities or (even more challenging) the distributions around them. Additional uncertainty relates to the economic estimates in SMART Vaccines Beta; these, too, will come from combinations of sparse data and expert opinion.

Incorporating uncertainty about these parameters requires a separate module within SMART Vaccines that is able to elicit subjective distributions for each parameter, a task that the committee will consider in Phase II. The committee can, however, envision what this module might incorporate. Parameters are unlikely to be estimated from data, because most users will not have access to the primary data needed for statistical estimation of parameters and their distributions. Instead, the committee may use a subjective estimation approach, similar to Bayesian estimation, to elicit distributions. In Phase II the committee expects to identify a suitable distribution for each parameter. For example, if the parameter is a probability, then a statistical beta distribution may be employed to describe the uncertainty about it. Costs may be better described by a distribution bounded below by zero and having a tail to the right. Health utility tolls are bounded and might well be described by beta distributions.
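The sampling scheme just outlined can be sketched in a few lines: beta distributions for probability-like parameters, a right-skewed distribution bounded below by zero for costs, and repeated sampling to build a distribution over a model output. The "priority score" formula and every parameter value below are placeholders, not the actual SMART Vaccines computational submodel.

```python
# Minimal Monte Carlo sketch of parameter uncertainty propagation.
# Distribution choices follow the text; all numbers are invented.
import random
import statistics

random.seed(1)

def sample_score() -> float:
    effectiveness = random.betavariate(8, 2)     # probability-like, in (0, 1)
    utility_toll = random.betavariate(2, 8)      # bounded health toll
    cost_per_dose = random.gammavariate(4, 2.5)  # >= 0, tail to the right
    # Placeholder output: benefit per unit cost, not the real submodel.
    return effectiveness * utility_toll * 1_000 / (cost_per_dose + 1)

scores = [sample_score() for _ in range(5_000)]
cuts = statistics.quantiles(scores, n=20)        # 5th, 10th, ..., 95th pct
lo, hi = cuts[0], cuts[-1]
print(f"median={statistics.median(scores):.1f}, 90% interval=({lo:.1f}, {hi:.1f})")
```

A distribution over the priority score, rather than a single number, is exactly what this kind of module would hand back to the user.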
Credible-interval estimation (used in conjunction with direct estimation of means in some cases) and the specification of equivalent data samples (used in specifying beta distributions) are two ways to describe uncertain quantities in the computational submodel. Other parameters in the model, such as vaccine effectiveness and the duration of immunity, may have their uncertainty best addressed through sensitivity analysis.

Outputs that are functions of uncertain inputs can be computed by Monte Carlo simulation or, if simple independent sampling of parameters is not realistic within the computational submodel, by Markov chain Monte Carlo simulation to build a pseudo-distribution for the outputs. The committee intends to consider these challenges in the Phase II effort.

Another challenge is to determine the rank-order distributions for vaccine candidates. This might require a secondary Monte Carlo sampling module within SMART Vaccines, where the priority score distribution for each of the n vaccine candidates is input to the module and the output is n distributions over position in the rank order for each candidate. Because these distributions may involve codependency of some candidates on uncertainties about certain diseases and on assumptions about health utility tolls and costs, the output may not be a simple independent sampling of the priority score distributions. This is a complicated task that the committee will consider in Phase II.

Model Uncertainty

Manning and colleagues (1996) also identify model uncertainty: uncertainty about whether the computational model itself is an adequate representation of the process being investigated. For SMART Vaccines Beta, this uncertainty concerns whether the structure of the computational submodel is adequate. There are only two approaches to incorporating this uncertainty: one is sensitivity analysis in which the model structure is varied, and the other is to construct a set of alternative models and then make some weighted combination of them. Either of these is beyond the scope of the Phase I or Phase II work of the committee.

Model Process Uncertainty

This final source of uncertainty stems from the fact that SMART Vaccines Beta was constructed by a particular committee tackling a prioritization exercise. If a different set of individuals were to do the same task under the same constraints, the resulting model would differ and could well produce somewhat different results.

Manning and colleagues (1996) have called for model process uncertainty to be a priority for further research. The National Cancer Institute has used a multiple-modeling-team approach to study simulation models of various cancers (e.g., Berry et al., 2005), finding that different modeling approaches led to results that were quantitatively distinct but qualitatively similar. Similar multiple-model approaches are used in climate forecasting (Knutti et al., 2010).
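The "weighted combination of alternative models" approach mentioned above can be sketched as follows. The three toy model structures, their subjective weights, and the inputs are all invented for illustration; nothing here comes from SMART Vaccines or the cited studies.

```python
# Hedged sketch of combining structurally different models with subjective
# weights. Each toy model scores the same candidate from burden and cost.
import math

def model_linear(burden: float, cost: float) -> float:
    return burden - 0.5 * cost

def model_ratio(burden: float, cost: float) -> float:
    return burden / (cost + 1.0)

def model_log(burden: float, cost: float) -> float:
    return math.log1p(burden) * 10 - cost

# (model, subjective weight); weights are assumed and must sum to 1
MODELS = [(model_linear, 0.5), (model_ratio, 0.3), (model_log, 0.2)]

def combined_score(burden: float, cost: float) -> float:
    """Weighted average over the alternative model structures."""
    return sum(w * m(burden, cost) for m, w in MODELS)

print(round(combined_score(burden=50.0, cost=10.0), 2))
```

The spread of the individual model scores around the combined score is one crude indicator of how much the model structure itself matters.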
The multiple-group, multiple-model approach is, however, expensive and time-consuming. The committee judged the consideration of both model uncertainty and model process uncertainty to be far beyond the scope of either Phase I or Phase II development of SMART Vaccines.

Current capability for sensitivity analysis

SMART Vaccines Beta permits users to vary attributes and observe the consequences for the final utility score. In the current version this sensitivity analysis is conducted manually, and, indeed, different versions of a single vaccine candidate (with different attributes) can be saved and then compared directly against one another as well as against competing vaccines.

For example, suppose a new vaccine against tuberculosis with some predefined set of attributes is entered by a user as TB Vaccine 1. The multi-attribute utility model will create a value score for this vaccine, and the user can save it as one among many. Now let the user alter one or more of the attributes for the same tuberculosis vaccine and save the results as TB Vaccine 2, which can be compared against TB Vaccine 1 and other versions. The process thereby allows the user a choice among alternative intensities and distributions, provided the necessary data have been supplied.

Phase II enhancements could incorporate, for example, "tornado diagrams" showing how much each candidate vaccine's score changes in response to, say, a doubling or halving of each attribute's value. Such diagrams give an immediate visual representation of the extent to which the outcomes depend on the input values. The committee will also consider possibilities for expanding and automating the sensitivity analyses in Phase II.

Beta concept evaluation

Following the development of SMART Vaccines Beta, a concept evaluation session was organized to obtain feedback from potential users. Each of the 11 consultant evaluators participated in a webinar led by a committee member and staff; four similar webinars were held, with two to four evaluators participating in each session. The evaluators were asked to provide feedback regarding the basic concept, software design, technical features, potential applications, and audiences. In general, the overall concept of SMART Vaccines Beta was received positively, even enthusiastically, with the exception of one evaluator who shared concerns regarding the basis and extension of the work.
Many of the features of SMART Vaccines Beta have already been updated in response to the comments from the concept evaluators, and many more have the potential to be upgraded in Phase II of this study. The committee's observations and views on the next steps in the enhancement of SMART Vaccines Beta are presented in Chapter 4.