5
Implementation Issues

The Institute of Medicine study committee believes that the priority-setting process presented in Chapter 4 would be valuable to and is feasible for use by all organizations engaged in health technology assessment—not just the Office of Health Technology Assessment (OHTA) of the Agency for Health Care Policy and Research (AHCPR). Both the compilation of the data that are needed for the priority-setting process and a list of priorities created through a national, broadly representative process would be useful to many technology assessment programs.

This chapter describes how the process proposed in Chapter 4 serves several objectives. First is the need for broad-based input in setting criterion weights and in developing subjective criterion scores for priority setting so that the weights and scores reflect societal preferences. Second is the need for professional expertise to integrate diverse scientific data, to adjudicate when data conflict, and to provide a base of experience from which to estimate missing data. Third is the need for an efficient process that can be carried out at a reasonable cost. This chapter describes how to implement the priority-setting process, suggests a cycle for priority setting, and estimates the resources that would be needed to set priorities for health technology assessment and reassessment.

THE PRIORITY-SETTING CYCLE

The priority-setting cycle comprises the steps listed below performed according to the time frames indicated.








Repeat every 5 years, or less often:

- Set criterion weights (this step requires a panel, as discussed below).

Repeat at least every 3 years:

- Solicit nominations of candidate conditions and technologies.
- Reduce the list of nominations through the "winnowing" procedure laid out in Chapter 4 and Appendix 4.1.
- Obtain the data required for the objective criteria.
- Review the objective data and decide what will be used to calculate the priority score (this step requires a panel).
- Establish the subjective criterion scores (this step requires a panel).
- Calculate the priority score.

The next section presents key points about setting criterion weights. Later, the chapter discusses critical concerns regarding the remaining activities in the context of the resources needed to implement the process and, more specifically, the responsibilities of the priority-setting panel.

SETTING CRITERION WEIGHTS

The criterion weights mentioned above in the priority-setting cycle and examined in Chapter 4 are intended to represent the preferences of society. The committee envisions a broadly constituted panel that would set criterion weights no more often than every 5 years. Once OHTA has established the weight-setting system, it should test and establish its reliability; thereafter it could repeat the procedure only infrequently. Although the committee sees this weighting task as a group process, it might be accomplished by some other means (e.g., voting by mail) if those means were shown to be reliable. Although AHCPR's National Advisory Council might function as this weight-setting panel, the committee suggests that a separate group be constituted for this and subsequent panel tasks, in part because the tasks require a particular array of expertise, but also because the workload could be considerable.
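The weight-setting step and the priority-score calculation in the cycle above can be made concrete with a small sketch. The criterion names, panel ratings, and the simple weighted-sum scoring form below are illustrative assumptions only; the committee's actual criteria and quantitative model are those defined in Chapter 4.

```python
# Illustrative sketch: derive criterion weights from panel ratings, then
# compute weighted-sum priority scores. Criterion names, ratings, and the
# additive scoring form are assumptions, not the committee's actual model.

# Each panelist rates the importance of each criterion on a 1-5 scale.
panel_ratings = {
    "prevalence":         [5, 4, 5, 4],
    "burden_of_illness":  [5, 5, 4, 5],
    "cost":               [4, 4, 3, 4],
    "practice_variation": [3, 2, 3, 3],
}

# Average the ratings and normalize so the weights sum to 1.
means = {c: sum(r) / len(r) for c, r in panel_ratings.items()}
total = sum(means.values())
weights = {c: m / total for c, m in means.items()}

# Criterion scores (1-5) assigned by the panel for each candidate condition.
scores = {
    "condition_A": {"prevalence": 5, "burden_of_illness": 3,
                    "cost": 4, "practice_variation": 2},
    "condition_B": {"prevalence": 2, "burden_of_illness": 5,
                    "cost": 3, "practice_variation": 4},
}

def priority_score(condition_scores):
    """Weighted sum of criterion scores; higher means higher priority."""
    return sum(weights[c] * s for c, s in condition_scores.items())

ranked = sorted(scores, key=lambda k: priority_score(scores[k]), reverse=True)
```

Grouping the scored conditions into bands, rather than publishing raw scores, would fit the committee's later caution against conveying a false sense of precision.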
Apart from setting the criterion weights, the committee sees the priority-setting cycle as occurring every 2 to 3 years, and no less frequently than every 3 years, given the current pace of technological change. The time that elapses before repeating the process would depend not on a fixed interval but on how many assessments have been completed. The number of assessments—as opposed to the number of conditions and technologies that the quantitative model can rank—will depend principally
on staff resources for data collection and secondarily on the experience of the panels in generating criterion scores. As a rule of thumb, the committee suggests that the quantitative model should rank three to four times the number of conditions and technologies that are likely to be assessed in a given cycle. This would allow other organizations to use the list to select topics for technology assessment.

RESOURCES NEEDED TO IMPLEMENT THE PROCESS

The resources needed to implement the process are the technology assessment (TA) program staff and the weight-setting and priority-setting panels. Both are discussed below.

Technology Assessment Program Staff Requirements

The committee carefully considered the resource and staffing requirements entailed by the process described in Chapter 4 from two perspectives: the current constraints on OHTA and AHCPR, and the (idealized) goals of a credible, sound, defensible model process. Based on the committee's experience with the pilot test, this priority-setting process will require substantial resources. However, the resources required to implement the ideal version of the committee's process may not be available, given the current budget and staffing levels of OHTA. The committee viewed its report, in part, as setting reasonable goals for the agency and for OHTA. The following detailed discussion of program resources therefore describes an optimum program rather than a minimum program.

The process will require enough staff to accomplish its mission of allocating the country's technology assessment resources wisely. The committee views the priority-setting process as a public good that will be one of OHTA's most valued products, and it recommends that the agency provide sufficient staff to generate priority rankings that will be useful not only for its own purposes but for other organizations that also perform technology assessment.
During the process of compiling data for the quantitative model, OHTA will create a valuable data base (containing, for example, such information as cost per case for the top 50-ranked disease conditions), which will itself be a resource for other organizations. A further benefit of the data base is that once information on candidate conditions and technologies accumulates, later iterations of priority setting are likely to be less expensive.

The committee believes that implementing a process such as the one suggested in this report requires staffing at least comparable to that for a grant review study section: a mid- or senior-level, analytically trained scientist who is well grounded in health services research; one or two junior
to mid-level staff; and clerical support. Staff responsibilities would include the following:

- Conduct regular literature searches to maintain information about conditions and technologies that have been assessed previously.
- Convene and manage the panels.
- Solicit nominations of candidate conditions and technologies for assessment and reassessment.
- Compile data on the frequency of conditions, the costs associated with their care, and variations in practice patterns.
- Draft summary documents for the panels to use in assigning scores for each criterion of the quantitative model.

During the committee's pilot test of the quantitative model, one full-time-equivalent staff person took a day to assemble the data for one condition; at that rate, one staff person could probably assemble data for about 200 conditions over the course of a year. After the panel has generated priority rankings, staff would also conduct informal surveys of other professional and assessment organizations to determine whether any of the conditions and technologies being considered for assessment by OHTA are already being evaluated.

Because the quantitative model requires information on seven aspects of each candidate condition or technology, the number of program staff will determine the number of conditions that can be ranked. The process of reducing the list of nominations to a number within the staff's capacity—the "winnowing" of the list—is relatively crude. A number of options for such winnowing are discussed in Chapter 4 and its Appendix 4.1. However, a general rule of thumb should be kept in mind: the cruder this preliminary winnowing is, the more likely it is that important technologies will be mistakenly omitted from the final list.
Priority-Setting Panel

To understand the tasks of the priority-setting panel, it is helpful to refer to Figure 4.1, which shows the steps and participants in the proposed model. The priority-setting panel has four primary tasks, listed below. As noted earlier, task 1 occurs approximately once every 5 years, and tasks 2-4 occur about once every 3 years.

Task 1: Select criteria and set criterion weights.
Task 2: Reduce the long list of candidate conditions and technologies to a more manageable size (i.e., "winnowing").
Task 3: Generate subjective criterion scores.
Task 4: Generate objective criterion scores.

The committee recommends that OHTA convene a single "standing" panel to perform all of these tasks. The panel could be organized like a research study section: panel members would serve rotating, staggered 3-year terms. Staggered membership is important to sustain an institutional memory of the conceptual details of the priority-setting system.

The committee sees the panel as a broadly representative standing committee whose members represent a balance of perspectives and have "front-line" knowledge of health care as providers, patients, and third-party payers. It will thus require individuals with expertise in the following areas: medicine and surgery, nursing, social work, health economics, epidemiology, health care statistics and health demography, law, bioethics, health administration, health technology manufacturing, employee health benefits, and health insurance. The committee also advises that the panel include one or more patient and consumer representatives. Most, if not all, members of the main panel should have sufficient knowledge of clinical conditions and technologies to be able to generate scores for the subjective criteria (as in task 3). Some, but not all, members will need the quantitative and medical knowledge to make informed quantitative estimates for the objective criteria (as in task 4).

For the first task—defining criteria and assigning criterion weights (see above)—the panel would be brought together and function as a "plenary" group. For other tasks, as explained below, it might be divided into more specialized subpanels. Depending on the eventual workload or the needed perspectives and expertise, or both, additional persons might be appointed to one or another of the subpanels. The discussion that follows, however, assumes that all subpanels are constituted with individuals from the main standing group.
It is also assumed that members of the main panel might well serve on more than one subpanel. For task 2—winnowing the larger list of topics to produce the final set of candidates toward which the remaining priority-setting activities are directed—the committee believes that more than one subpanel might be created from the original panel. Generally, for this task, the subpanel(s) should be as broadly representative as possible, within the constraints that arise from dividing up the main panel.

Tasks 3 and 4—developing the criterion scores for the subjective and objective criteria, respectively—might also be performed by subpanels created from the main panel. Here, the assignments to subpanels might run more along "expert" lines, with groups for the subjective criteria being more broadly representative and those for the objective criteria being more "quantitatively expert." The latter subpanels, for instance, require individuals with quantitative reasoning skills and epidemiologic expertise to adjudicate among conflicting data and to estimate prevalence, costs, illness burdens, and practice variations in cases where data are missing.

The workload of the subpanels, and therefore the number of subpanels, will depend on the number of conditions and technologies under consideration. If, for tasks 3 and 4, the workload requires more than one of each type of subpanel (as proposed above), the subpanels can divide their assignments. In this case, the committee recommends that each subpanel work with every topic and assign a subset of the criterion scores, rather than take a limited number of topics and assign a score for each criterion. This approach would ensure that a subpanel is consistent across all topics when it assigns the scores for a given criterion.

IMPLEMENTATION CONSIDERATIONS FOR OHTA AND OTHER ORGANIZATIONS

The foregoing discussion addressed the tasks and resources necessary to implement the committee's proposed priority-setting process. During implementation itself, OHTA must resolve several additional issues, including the following:

- Establishing the validity and reliability of the priority-setting process and its various elements. The committee believes that OHTA (as well as other professional organizations that may employ this suggested process) has an obligation to examine its validity and reliability.
- Altering the definitions and weights of criteria, and the points to keep in mind during such an effort.
- Developing a strategy for cases in which the data necessary to develop criterion scores are missing (a separate problem from a lack of data to conduct an assessment, which is addressed separately below).
- Determining the kind of product or products that the priority-setting process should yield.

Validity and Reliability

How can OHTA validate a process that is based in part on subjective judgment and prediction? The concept of "validity," in the sense of a "correct," "true," or "gold standard" result, does not seem entirely suitable to priority setting.
It would be appropriate, however, to determine the usefulness, appropriateness, or cost-effectiveness of the results of the process, holding that adoption of the process is evidence of acceptability, feasibility, and generalizability. One can ask whether the process seems reasonable to people who are familiar with either priority setting, technology assessment, or the technologies themselves. This mechanism would gauge what is sometimes called face validity. Another aspect of face validity is whether the process is, in
fact, used and considered useful by the group for which it is intended (i.e., OHTA) and by other groups.

A priority-setting process is reliable if the same group produces the same (or similar) rankings at different times, or if a different group (or subgroup), whether constituted similarly or quite differently, produces similar results when given exactly the same information and instructions. Reliability could be tested readily for parts of the process (e.g., criterion weighting, estimation of data for an objective criterion, ratings for a subjective criterion) and for the entire process. The use of systematic sampling frames and sufficiently large groups would allow standard statistical tests of reliability.

Criteria

Choosing—and Changing—Criteria

After extended discussion, the committee selected seven criteria by which to implement its principles of priority setting; these were described in Chapter 4. The criteria encompass the current social impact of a condition for which a technology is used, variations in use rates, and the likely changes that an assessment would engender. Further, because a simple listing of criteria would be insufficient to implement the process consistently, the criteria have been carefully defined so that they can be used dependably in a quantitative model. Their reliability under different conditions of use, however, must be established through field testing.

Other organizations may wish to augment or change the criteria or their definitions. In doing so, it is important to understand several features of the seven criteria proposed in this report. First, in terms of social impact, the criteria are symmetrical with respect to current health and economic burden and expected change in health outcome and cost as a result of the assessment. Burden of illness and cost are considered separately as valid social and economic aspects of illness.
Because they are considered separately, they can, and might be expected to be, given different weights by different organizations.

Second, impact of illness is commonly viewed as the product of burden of illness and prevalence. This formulation treats as equivalent a large burden of illness borne by a few individuals and a small burden of illness borne by many persons. Different weights given to each criterion, however, can express social attitudes about such mathematical equivalence. Further, a low prevalence score can, to a degree, be counteracted by a high score on the criterion concerning ethical, legal, and social issues. This balancing might occur when the priority-setting panel has special concerns about the assessment of a technology used for a small, defined
patient population whose illness might not otherwise have sufficient leverage to attain a high priority score.

Third, the criteria do not assume that a given direction of change (e.g., higher or lower cost, improved or worsened health outcome) raises (or lowers) the assessment priority of a condition or technology. Although the direction of change may be of considerable concern in conducting assessments, it is the magnitude, not the direction, of change that matters in setting priorities. Those choosing technologies for assessment are presumed to be equally interested in whether, for instance, a technology is likely to cause a large rise or an equally large fall in expenditures.

Criterion Weights

During its pilot test, the committee designated weights (see Appendix A, Figure A.1) for the priority-setting criteria in its process; in this effort, it attempted to adopt the perspective of a public agency. The committee considers these weights merely illustrative and recognizes that a given organization would probably wish to derive its own weights for priority setting.

Availability of Data to Generate Criterion Scores

The priority-setting process recommended by the committee requires the use of data in explicit ways. However, the committee is well aware of the limitations of published data for generating criterion scores. Prevalence and mortality data are not necessarily available at the level of specificity needed; they may also cover only subpopulations, such as the elderly, and may be confounded by severity and case mix. Moreover, cost estimates inevitably will not include all costs, and aggregate data on functioning and well-being are scant. Nevertheless, the committee argues that the priority-setting process should proceed with whatever data are available. It should also use the best estimates it can generate to resolve conflicting or missing data.
Further, it should encourage the development of better epidemiological data bases. In this sense, the distinction drawn between subjective and objective criteria is a matter of degree. For instance, the criterion "burden of illness" must at present be considered largely subjective. Yet if the high weight given this criterion in the pilot test is replicated by other groups, that would argue strongly for greatly improved data on health outcomes for untreated and "conventionally" treated illnesses (see Ellwood, 1988).

Publicly Available Products

The committee envisions two products from the OHTA priority-setting
process that would be publicly available: a listing of the priority-ranked technologies and the data base used to construct it. Both products would form the basis of a priority-setting document to be published by OHTA.

The list might include all technologies, even those that were winnowed out, or it might include only those that remained after winnowing and to which the quantitative model was applied. The list might present specific priority rankings, as an indication of the distance between a given technology and the next highest (or lowest) ranked technology, or it might simply group the technologies. As noted earlier, grouping the technologies in the final product would help to avoid a false sense of precision.

The committee is strongly in favor of an open priority-setting process. To that end, it believes that the priority-setting document should include rankings and selected summary data that contributed to the criterion scores of each technology. Each highly ranked technology should also be accompanied by a discussion of the features of the technology that were considered in its ranking, a description of the data sources that were used, and a discussion of the level of confidence that the panels assigned to these data (the strength of the scientific evidence). Calling for such documentation is consistent with recommendations from another IOM committee concerned with the appropriate development and implementation of clinical practice guidelines (IOM, 1990c, forthcoming).

The data base available to the public would include the weights assigned to each criterion and the objective and subjective criterion scores for each condition and technology to which the quantitative model was applied. Such a data base would be useful not only to OHTA but also to other organizations that wished to set priorities.
It can be challenged, corrected, and amplified by researchers, specialists, and disease-oriented interest groups, and it might well act as a stimulus to better data acquisition. Both functions are consistent with the goals of AHCPR in promoting the public good through improved information about health care.

In a formalized process such as the one proposed in this report, an important consideration is how to acknowledge and include strongly held minority views and, if needed, stimulate further data development. The committee recommends that each time a substantial or strongly held minority view is voiced by members of the panel, the document include those views, either in a "discussion section" or in a section immediately following the discussion of the majority view. The inclusion of such opinions would be especially important later, during reevaluation or reassessment, because they would alert TA program staff to specific events or evidence that might prompt reassessment.

WHEN THE SCIENTIFIC EVIDENCE IS INSUFFICIENT FOR ASSESSMENT

Often, a topic has high priority for assessment but insufficient evidence
to support the assessment activity. In such circumstances, the committee believes that OHTA's appropriate response is to recommend a first-time assessment. Taken together, these circumstances (high priority, lack of data) would be a logical basis for developing an AHCPR technology assessment research agenda. This concept—linking priority setting, assessment of the evidence, and a research agenda—is an important component of the future of technology assessment and of the further enhancement of evidence-based medical practice.

Other responses to insufficient evidence—for example, an interim conditional statement or a decision-analytic model—are also possible. Given that data will always be inadequate in some sense, the presence or absence of information does not affect whether but how a technology assessment should be done. In some cases, literature synthesis will be possible; in others, AHCPR may decide to fund secondary data analysis or primary data collection. It should be recognized, however, that the cost of generating such data may be significant.

Interim Statements

When a topic is of high priority but insufficient data are available for an assessment, OHTA might consider an analysis that begins with the question, What level of effectiveness is necessary for this technology to be cost-effective? The congressional Office of Technology Assessment (OTA) took this approach to assess pneumococcal vaccine before the clinical trials to measure its efficacy had been completed. Because OTA did not know whether vaccine immunity would last for 8 years or a lifetime, its estimates had wide ranges of uncertainty. Nevertheless, the agency's assessment was sufficiently convincing that it led to a recommendation that Medicare cover pneumococcal vaccine.
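The OTA-style question (what level of effectiveness would make a technology cost-effective?) amounts to inverting the cost-effectiveness ratio. The sketch below shows that inversion with entirely hypothetical numbers; the cost, life-year, and willingness-to-pay figures are invented for illustration and are not OTA's vaccine estimates.

```python
# Hypothetical threshold analysis: solve for the minimum effectiveness at
# which a technology meets a willingness-to-pay ceiling. All inputs are
# invented for illustration; none are OTA's actual vaccine estimates.

net_cost_per_person = 200.0         # added cost of using the technology ($, assumed)
life_years_at_full_efficacy = 0.02  # life-years gained per person at 100% efficacy (assumed)
ceiling = 50_000.0                  # acceptable cost per life-year saved ($, assumed)

# Cost-effectiveness at efficacy e is  net_cost / (e * life_years_at_full_efficacy).
# Setting that ratio equal to the ceiling and solving for e gives the threshold.
required_efficacy = net_cost_per_person / (ceiling * life_years_at_full_efficacy)

print(f"efficacy must be at least {required_efficacy:.0%}")
```

Even with wide uncertainty about the inputs, such a calculation can show whether all plausible efficacy values clear the threshold, which is what made the OTA analysis persuasive despite incomplete trial data.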
Modeling

Assessments that use decision-analytic modeling techniques to estimate the expected costs and effectiveness of alternative management strategies can be useful when empirical data are missing. In place of such data, the model uses expert subjective estimates of probabilities and outcomes. Analysts then employ sensitivity analysis to determine which clinical factors could cause the currently preferred alternative to be superseded by another management strategy. Later research that includes primary data collection can measure the true values of these "sensitive" variables and provide an empirical basis for further policy recommendations. Using decision modeling to focus the attention of clinical investigators on the most important variables for decision making is a powerful concept.
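A minimal version of such a model, with invented probabilities, costs, and outcomes standing in for the expert subjective estimates, might look like the sketch below; the dollar ceiling and all clinical numbers are assumptions for illustration only.

```python
# Sketch of a two-strategy decision-analytic model with a one-way sensitivity
# analysis. Probabilities, costs, and life expectancies are invented
# placeholders for expert subjective estimates, not data from any assessment.

def strategies(p_survive_with_device):
    """Expected (cost, life-years) for 'device' and 'no device' strategies."""
    p_survive_without = 0.50        # assumed baseline survival probability
    life_years_if_survive = 8.0     # assumed remaining life expectancy
    device = (30_000.0, p_survive_with_device * life_years_if_survive)
    no_device = (5_000.0, p_survive_without * life_years_if_survive)
    return device, no_device

def icer(p_survive_with_device):
    """Incremental cost per life-year gained for the device strategy."""
    (c1, e1), (c0, e0) = strategies(p_survive_with_device)
    return (c1 - c0) / (e1 - e0)

base_case = icer(0.70)              # assumed base-case survival with device

# One-way sensitivity analysis: sweep the uncertain survival probability to
# find where the device first meets a $50,000-per-life-year ceiling.
ceiling = 50_000.0
threshold = next(p / 100 for p in range(51, 100) if icer(p / 100) <= ceiling)
```

A sweep like this identifies the single variable (here, survival with the device) whose true value most affects the choice of strategy, and so where primary data collection would pay off most.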

For example, one assessment organization did such an assessment of the automatic implantable defibrillator when there were not enough data to conduct a cost-effectiveness analysis (Kuppermann et al., 1990). Using efficacy data and clinical studies from the literature, a panel of electrophysiologists simulated the clinical outcomes and costs for a hypothetical cohort of patients who received the defibrillator and compared them with those for a cohort that did not receive the device. The estimated cost per life-year saved, using the device as it was configured in 1986, was about $17,000. The analysts also estimated the costs and effectiveness of updated versions of the defibrillator as it was expected to perform in 1991. In another effort, the federal government funded a separate project to collect primary data over a 3-year period.

Both of these approaches are legitimate technology assessments, and both are useful responses to a lack of data. The decision-modeling effort provided timely analysis using uncertain data; the empirical study will produce much more valid and reliable data, but far less quickly and at much higher cost.

In sum, empirical data from reliable published sources are currently required to conduct an assessment because OHTA conducts only literature-based assessments. This requirement presupposes that the technology has been available long enough to have been evaluated empirically. However, the armamentarium of technology assessment includes other approaches such as those described here—decision modeling, other forms of estimation, analyses of administrative data sets such as the Medicare files, and interim statements—that OHTA (or other programs in AHCPR) should consider using.

SUMMARY

The committee envisions priority setting as occurring in a cycle. The panel sets criterion weights approximately every 5 years.
The priority-setting cycle itself repeats at least once every 3 years and leads to a rank-ordered list of conditions and technologies. The cycle begins and ends with the involvement of persons and institutions outside the federal government. At the beginning, OHTA asks a broad range of persons and institutions to nominate conditions and technologies that they wish to see assessed. OHTA staff then collect the data required to set objective criterion scores and convene panels to assign criterion scores to each condition or technology. Staffing for this OHTA priority-setting activity is likely to require a level comparable to that of AHCPR study sections: a mid-career or senior-level professional, several junior to mid-level research staff, and clerical staff.

A broadly representative panel would be established to help set criterion weights, to reduce the list of nominations of conditions or technologies, and
to assign criterion scores to each of these topics. Separately constituted subpanels might also be required to divide the workload and to assign subjective or objective criterion scores. The subpanel(s) assigning subjective criterion scores would be composed of individuals with the range of perspectives of the full panel; the subpanel(s) assigning objective criterion scores would require experts in epidemiology and health statistics to review the data collected by OHTA staff and produce estimates when necessary.

The committee envisions two products that would be publicly available: a list of the priority-ranked technologies and the data base used to construct the list. Both would be part of a priority-setting document published by OHTA. Each highly ranked technology should be accompanied by a discussion of the features that contributed to its ranking, the data sources that were used, the level of confidence the panels assigned to the data, and any strongly held minority views.

OHTA should adopt methods that will enable it to conduct preliminary assessments even when there is not yet adequate evidence on which to base a strong clinical policy recommendation. The committee advocates using decision analysis to identify which missing evidence is most important for decision making, and using the results as input to the development of an agenda for empirical research sponsored by AHCPR.