SMART Vaccines is intended to help set relative priorities among candidate vaccines based on user preferences, within the context of health, economic, demographic, scientific and business, programmatic, and policy considerations as well as public concerns. SMART Vaccines integrates computed attributes with qualitative attributes to provide a value score that compares one vaccine opportunity against another. Because SMART Vaccines is built from a complex model, the committee chose to develop user-friendly software to better assist decision makers.
The charge for this study did not call for producing a list of ranked vaccine candidates; instead it asked for the development of a conceptual prioritization model for new preventive vaccines and for that model to be tested against two to three vaccine candidates, at least one of which had an international focus. Thus, the committee wished not only to make sure that the model performed as specified, but also to show that the data were meaningful and, to the extent verifiable, accurate. This section describes the steps the committee took to assure the accuracy of both the model and the data used to exercise the model.
SMART Vaccines requires four types of data for computing and valuing the vaccine attributes.
1. The first type of data used in the model relates to demographics and is verifiable from established sources. Even for such apparently clear-cut characteristics as the age distribution of the U.S. population in 2009, however, sources differ in their final numbers. In collecting U.S. population data, for example, the committee consulted at least three potential sources: the United Nations Population Division, the World Health Organization (WHO), and the U.S. Census Bureau, all of which provide age- and gender-specific estimates of the U.S. population (and of the populations of many other nations). The sources nonetheless differ in minor ways. The United States conducts a complete census only once a decade, and many other nations do so even less frequently; the U.S. Census Bureau often adjusts final estimates to allow for under-reporting by various groups; and data may be extrapolated or interpolated differently across years. Thus, even such apparently “hard” data as population demographics vary across sources. As part of its testing, the committee used population data for the United States and South Africa drawn from the WHO Global Health Observatory Data Repository (see Appendix B), even though these data differ in some detail from U.S. Census Bureau data.
2. The second type of data relates to disease burden and costs. These data have a relatively “hard” basis in some nations, drawn from survey programs, surveillance systems, and one-time research efforts; the committee used such sources to estimate disease burden and treatment costs for the United States and South Africa (see related data tables and sources in Appendix B). In many other settings, especially developing countries, such data will not be immediately available and will likely be supplied through a process that relies primarily on expert opinion. Given the uncertainties surrounding these key assumptions, sensitivity analyses will be important for testing the robustness of the model’s results. Committee members often relied on their own areas of expertise and judgment to identify potential errors in the data, and suspect values were reevaluated against the original sources. Because the focus of this study is the development and testing of the model, the committee did not use other possible methods of checking data accuracy; it acknowledges, however, the value of further data verification to optimize the use and accuracy of the model.
3. The third type of data contains assumptions about the characteristics of each vaccine, including efficacy under ideal circumstances, effectiveness in real-life settings, duration of immunity, and risk of adverse events. Some of these characteristics are approximations.
Vaccine-induced immunity, for example, wanes over time and is highly variable across individuals in a population. The current version of SMART Vaccines does not attempt to incorporate data about the pattern or variability in the waning of immunity; this could be incorporated in future refinements.
4. The fourth type of data is not subject to verification since the data describe mostly qualitative attributes of vaccines that do not yet exist. They will be determined by users, presumably often guided by expert opinion. Hence these data cannot be described as either accurate or inaccurate because they reflect the users’ own judgments about each candidate vaccine. However, these attributes allow diverse users to consider broader perspectives and dimensions of assessment that will permit a more customized and relevant tool for decision makers worldwide.
SMART Vaccines combines data from all four types to create a series of calculated variables, all of which are reported to the user in the “dashboard” output interface (see Figure 3-16). To ensure rigorous testing, the committee validated the computations both by hand and in spreadsheets. Appendix B presents the data the committee used.
To further improve SMART Vaccines, the committee will undertake three related sets of activities to advance model and software development. For Phase II of this study, the committee will demonstrate the current version of SMART Vaccines to a wide range of stakeholders and potential users and obtain their feedback about the usefulness of the software. Afterwards, the committee will enhance the model, its functionalities, and the user interface of SMART Vaccines as part of moving the software from the beta stage to version 1.0. Three additional vaccine candidates will be tested in the next phase in order to exercise the model and to expand the data library contained within the software. The next phase of this study is expected to begin immediately.
For further refinement of SMART Vaccines attributes, it will be necessary to obtain feedback from potential users in at least three areas during Phase II of this study.
First, the rank order centroid method used to acquire and compute weights for the attributes is an approximation, chosen to reduce the user’s workload. Many multi-attribute utility analysts who work one-on-one with decision makers use extensive questionnaires to elicit weights that represent the decision maker’s values more precisely. Additional feedback will be required for the committee to provide users with the flexibility to revise their weights according to their values.
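The rank order centroid approximation mentioned above follows a standard formula: for n attributes ranked from most to least important, the weight of the attribute at rank i is the average of 1/k over k = i, …, n. The sketch below illustrates the formula in general terms; it is not the committee’s actual implementation, and the four-attribute example is hypothetical.

```python
def roc_weights(n):
    """Rank order centroid weights for n attributes, most important first.

    Standard ROC formula: w_i = (1/n) * sum_{k=i}^{n} 1/k.
    Weights sum to 1 and decrease with rank.
    """
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# Example: a user ranks four attributes; ROC converts the ranking to weights.
weights = roc_weights(4)
```

Because the user supplies only a ranking, not precise trade-off judgments, ROC weights are a workload-saving surrogate for the fuller elicitation questionnaires described above.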
Second, the representation of the attributes themselves can improve with experience. Currently they are presented as a list as shown in Table 2-1. One potential area for refining the attribute representation would be to consider reorganizing the way that they are classified.
Third, the categories that are used to represent quantitative attributes need to be reappraised to ensure that they are sensible and meaningful to users and consistent with their values.
The committee’s model evaluation process included the following steps:
• verification of the software code by modeling consultants;
• exercising the model by the committee and staff to determine if the output changed in meaningful ways;
• replication of results from the 2000 IOM report on vaccine prioritization using its data and specifying a multi-attribute value function that used only $/QALY as the decision rule; and
• construction of a worksheet “simulacrum” of the value model, as discussed in Chapter 3.
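The replication test above reduced the multi-attribute value function to a single decision rule, cost per QALY. A minimal sketch of that rule, using hypothetical candidate numbers rather than data from the 2000 IOM report, might look like this:

```python
def cost_per_qaly(net_cost, qalys_gained):
    """Single-criterion decision rule: net program cost per
    quality-adjusted life year gained (lower is better)."""
    if qalys_gained <= 0:
        raise ValueError("QALYs gained must be positive")
    return net_cost / qalys_gained

# Hypothetical candidates: (net program cost in $, QALYs gained)
candidates = {"Vaccine A": (5_000_000, 1_200), "Vaccine B": (2_000_000, 800)}

# Rank candidates from best to worst by $/QALY
ranking = sorted(candidates, key=lambda name: cost_per_qaly(*candidates[name]))
```

Specifying only $/QALY in this way collapses the multi-attribute model to the single economic criterion used in the earlier report, which is what makes it a useful replication check.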
As is common with software development, the most effective way to check the software’s reliability is to place it in the hands of a user community and provide a process for reporting errors and creating fixes for known defects.
The SMART Vaccines framework is based on trade-offs. The trade-offs are determined by the users’ ordering of attributes: Disadvantages on one criterion (e.g., higher costs to vaccinate the target population) may be outweighed by advantages on a different criterion (e.g., long-term health benefits or the demonstration of a new vaccine delivery or production platform).
In this context SMART Vaccines has the potential not only to guide
discussions regarding intra- and inter-institutional vaccine goals, but also to provide a common language for determining priority areas of national and global interest. Appreciating the trade-offs inherent in priority-setting exercises may well serve to motivate and focus new vaccine development.
The value of SMART Vaccines will depend, in part, on data that need to be generated as candidate vaccines evolve and as disease epidemiology becomes better characterized in different parts of the world. In the future (beyond Phase II), an active community of users and an open-source environment could enhance the software’s capabilities through the creation and sharing of databases for populations from different countries, generation of data collection templates, refinement of the attributes and the attribute selection process, enhancement of validation tools and the user interface, and other ways to address the risk and uncertainty surrounding the characterization of vaccines that have not yet been developed. This study is the first step in moving toward these goals.