4
Recommendations for a Priority-Setting Process

Chapter 3 outlined the general principles of a priority-setting process for conducting technology assessments. Such a process should (1) be consistent with the mission of the organization, (2) provide a product compatible with its needs, (3) be efficient, and (4) be sensitive to the political and social context in which it is used. The process proposed in this chapter incorporates elements that the committee believes are in accord with these principles.

First, the committee's approach uses a broadly representative panel, and the priority-setting criteria reflect several dimensions of social need. It is an explicit process that includes a quantitative model as described in this chapter. The process is intended to be open, understandable, and modifiable as experience with it grows. These characteristics are consistent with the mission of the Office of Health Technology Assessment (OHTA) as a public agency.

Second, the process will produce a list of conditions and technologies ranked in order of their importance for assessment.

Third, the process provides for broad public participation in assembling a list of candidate conditions but then winnows the list to identify important topics, using data when they are available and consensus judgments when data are unavailable. The committee believes that this approach will result in a process that is efficient but that still serves the other principles.

Fourth, the committee's priority-setting process is intended to be sensitive to its political context: it is open to scrutiny, resistant to control by special interests, and includes review by a publicly constituted and accountable advisory body.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Setting Priorities for Health Technology Assessment: A Model Process

Figure 4.1 Overview of the IOM priority-setting process.

The proposed process includes a quantitative model for calculating a priority score for each candidate topic. In this chapter, the term process is used for the entire priority-setting mechanism; the term model is used for the quantitative portion of that process, which combines criterion scores to produce a priority score. The model incorporates seven criteria with which to judge a topic's importance. It combines scores and weights for each criterion to produce a priority ranking for each candidate topic. Nevertheless, using the model requires judgments by a panel, data gathering by OHTA program staff, and review by the National Advisory Council of the Agency for Health Care Policy and Research (AHCPR).

During the summer of 1991, the IOM committee pilot-tested its methodology by gathering data on a number of conditions and technologies and using an early version of its model to rank 10 topics. The committee compared two methods of obtaining inputs for the model—a panel meeting and a mail ballot—and modified the model based on this experience. The methods and results of the pilot test are described in Appendix A. Figure 4.1 is an overview of the proposed steps and participants in the priority-setting process.

PREVIEW OF THE QUANTITATIVE MODEL

The committee's proposed process is a hybrid. It combines features of "objective," model-driven priority-setting methods (such as that of Phelps and Parente [1990]) and a consensus-based Delphi approach, such as that used by the IOM's Council on Health Care Technology (IOM/CHCT) in its pilot study (described in Chapters 1 and 2).1

The model combines three components: (1) seven criteria; (2) a corresponding set of seven criterion weights (W1 ... W7) that reflect the importance of each criterion; and (3) a set of seven criterion scores (S1 ... S7) for each candidate condition or technology. The final "index" of importance of a topic is its priority score: the sum, over the seven criteria, of the natural logarithm of each criterion score (Si) multiplied by its criterion weight (Wi).2 This priority score is calculated as shown in Equation (1):

Priority Score = W1 ln S1 + W2 ln S2 + ... + W7 ln S7    (1)

1 As noted in Chapter 1, the Council on Health Care Technology no longer exists at the Institute of Medicine. The pilot study described here is referred to as the IOM/CHCT pilot study to distinguish it from the pilot test conducted for the present project.

2 In the Phelps-Parente model, characteristics such as equality of access and gender- or race-related differences in disease incidence have no effect on the overall ranking. Indeed, the output of the Phelps-Parente model was, by design, only one of several inputs into the consensus process that the IOM/CHCT pilot study committee used. In addition, the Phelps-Parente model uses only objective data to measure such things as spending and degree of medical disagreement (as measured by the coefficient of variation). This is an important limitation, since such models cannot be applied when formal, objective data are unavailable. The drawback may be especially severe in settings where data are limited (e.g., nursing home, home care, and ambulatory care) and for new technologies that have had little use and have not (yet) been captured in databases.
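The model just described (a priority score equal to the sum, over the seven criteria, of the criterion weight times the natural logarithm of the criterion score) can be sketched in Python. The weights and criterion scores below are hypothetical illustrations, not values from the report:

```python
import math

def priority_score(weights, scores):
    """Equation (1): the sum, over the seven criteria, of the
    criterion weight times the natural log of the criterion score."""
    assert len(weights) == len(scores) == 7
    return sum(w * math.log(s) for w, s in zip(weights, scores))

# Hypothetical panel weights W1..W7 (each between 1 and 5).
weights = [1.0, 2.0, 1.5, 3.0, 4.0, 2.5, 1.0]

# Hypothetical criterion scores S1..S7 for two candidate topics:
# prevalence per 1,000 persons, cost per person, coefficient of
# variation in rates of use, then four subjective 1-5 ratings.
topics = {
    "condition A": [20.0, 3000.0, 0.4, 3.0, 4.0, 3.0, 2.0],
    "technology B": [5.0, 12000.0, 0.2, 4.0, 2.0, 4.0, 1.0],
}

# Rank candidates from highest to lowest priority score.
for name in sorted(topics, key=lambda t: priority_score(weights, topics[t]),
                   reverse=True):
    print(name, round(priority_score(weights, topics[name]), 2))
```

Because the criterion scores enter through their logarithms, doubling any one score adds a fixed increment (the criterion's weight times ln 2) to the priority score, regardless of the score's starting level.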

where W is the criterion weight, S is the criterion score, and ln is the natural logarithm of the criterion score. The derivation of this formula, an explanation of why the natural logarithm is used, and a description of its component terms are discussed fully in a later section of this chapter.

The model incorporates several forms of knowledge about technologies. The first is "empirical data," such as the prevalence of a condition. The second is "estimated data," which are used when objective data are missing, incomplete, or conflicting (e.g., the number of patients who will use erythropoietin 5 years from now). Third are intrinsically subjective ratings, such as the likelihood that a technology assessment will affect health outcomes.

ELEMENTS OF THE PROPOSED PRIORITY-SETTING PROCESS

The IOM committee recommends a priority-setting process with seven primary components, or "steps." These steps are numbered in Figure 4.1 and are described briefly below; they are discussed in greater detail in the remainder of this chapter. In this discussion, the committee uses the term technology assessment (TA) program staff to mean people in a government agency or private-sector organization who are responsible for implementing a technology assessment and reassessment program of sufficient size to warrant a priority-setting process. Similarly, although the term staff is used to refer to the staff of OHTA at AHCPR, the term could apply equally to the staff of any agency or technology assessment organization.

Step 1. Selecting and Weighting Criteria Used to Establish Priorities

The first step that OHTA should take is to convene a broadly representative panel to select and define criteria for priority setting. Criteria can be both objective and subjective. The panel should also assign to each criterion a weight that reflects its relative importance.

The IOM committee proposes and later defines seven criteria: three objective criteria—prevalence, cost, and variation in rates of use—and four subjective criteria—burden of illness, potential of the results of the assessment to change clinical outcomes, potential of the results of the assessment to change costs, and potential of the results of the assessment to inform ethical, legal, and social (ELS) issues. Table 4.1 defines these criteria. The justification for each and "instructions for use" appear later in this chapter under Step 5.

Different organizations might, through their own procedures, choose different criteria and assign different weights to each. The IOM committee believes there are good reasons why the seven criteria that it chose are the

Table 4.1 Criteria Recommended for the IOM Priority-Setting Process

1. Prevalence (O): The number of persons with the condition per 1,000 persons in the general U.S. population.
2. Burden of illness (S): The difference between the quality-adjusted life expectancy (QALE) of a patient who has the condition and receives conventional treatment and the QALE of a person of the same age who does not have the condition.
3. Cost (O): The total direct and induced cost of conventional management per person with the clinical condition.
4. Variation in rates of use (O): The coefficient of variation (standard deviation divided by the mean) in rates of use.
5. Potential of the results of an assessment to change health outcomes (S): The expected effect of the results of the assessment on the outcome of illness for patients with the illness.
6. Potential of the results of an assessment to change costs (S): The expected effect of the results of the assessment on the cost of illness for patients with the illness.
7. Potential of the results of an assessment to inform ethical, legal, or social issues (S): The probability that an assessment comparing two or more technologies will help to inform important ethical, legal, or social issues.

(O = objective criterion; S = subjective criterion)

best for OHTA to select. These criteria and the weights assigned to them are used in a quantitative model for calculating priority scores for each candidate for assessment.

Step 2. Identifying Candidate Conditions and Technologies

To generate the broadest possible list of candidate technologies for assessment, TA program staff should seek nominations from a wide range of groups concerned with the health of the public. These groups include patients, payers, providers, ethicists, health care administrators, insurers, manufacturers, legislators, and the organizations that represent or advocate for them.

TA program staff should also track candidate technologies and gather information on relevant political, economic, or legal events; these might include the emergence of a new technology or new information regarding practice patterns for an established technology, a precedent-setting legal case, an assessment of a technology performed by another organization, completion of a pertinent randomized clinical trial, or the appearance of other new scientific information.

Step 3. Winnowing the List of Candidate Conditions and Technologies

Once TA program staff have identified what is likely to be a very large set of candidate conditions, they should set in motion a method to "winnow" this initial list to a more manageable one. The reason for reducing the list of candidate topics is to reduce the workload of both the TA program staff (who must obtain a data set for each topic that will be ranked) and the panels. Ideally, this winnowing process will be much less costly than the full ranking system and will be, like the overall priority-setting process, free of bias, resistant to control by special interests, open to scrutiny, and clearly understandable to all participants. The committee discusses several possible methods later in this chapter and proposes one for OHTA and other groups.

Step 4. Data Gathering

When the starting point for the priority-setting process is a clinical condition, TA program staff should define all alternative technologies for managing that condition. In this context, "managing" includes primary screening and prevention, diagnosis, treatment, rehabilitation, palliation, and other similar elements of care. For each condition under consideration, OHTA staff must gather the data required for each priority-setting criterion. Analogously, when the starting point is a technology, TA program staff need to specify the most important clinical conditions for which it is relevant, as well as any other relevant technologies, and amass the data required for each priority-setting criterion. The data include numbers (e.g., prevalence, cost) and facts with which to inform a subjective judgment (e.g., a list of current ethical, legal, and social issues).

Step 5. Creating Criterion Scores

At this point, the IOM process calls for panels to develop criterion scores (the S1-S7 elements in Equation [1]). One or more expert panels, which might be subpanels of the broadly representative panel that sets criterion weights, would determine criterion scores for the objective criteria, using the data that have been assembled by TA program staff for each condition. Assigning scores for objective criteria will require expertise in epidemiology, clinical medicine, health economics, and statistics when data are missing, incomplete, or conflicting. One or more representative panels, which might comprise the same individuals as those setting criterion weights, would use consensus methods to assign scores for the subjective criteria.
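The consensus scoring of the subjective criteria can be sketched as follows. The committee does not prescribe a particular aggregation rule, so taking the median of the panelists' 1-5 ratings is an assumption for illustration, as are the ratings themselves:

```python
import statistics

# Hypothetical 1-5 ratings from five panelists for one candidate
# condition, on the four subjective criteria.
ratings = {
    "burden of illness": [4, 5, 4, 3, 4],
    "potential to change health outcomes": [3, 3, 4, 2, 3],
    "potential to change costs": [2, 2, 3, 2, 1],
    "potential to inform ELS issues": [1, 2, 1, 1, 2],
}

# One consensus rule among many: take the median rating per criterion.
scores = {criterion: statistics.median(r) for criterion, r in ratings.items()}
print(scores)
```

In practice the panel would discuss divergent ratings and revote before any such aggregation, so the arithmetic is only the last step of the consensus process.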

Step 6. Computing Priority Scores

From all these inputs, TA program staff would use the quantitative model embodied in Equation (1) to calculate a priority score for each condition. This calculation is performed as follows: (a) find the natural logarithm of each criterion score; (b) multiply that logarithm by the criterion weight to obtain a weighted criterion score; and (c) sum these weighted scores to obtain a priority score for each condition or technology. The quantitative model thus combines empirical rates (e.g., number of people affected per 1,000 in the U.S. population) and subjective ratings (e.g., burden of illness) for each criterion (each given a certain "importance" by virtue of its particular weight) to produce a priority score. Table 4.2 illustrates the process.

In the second part of this step, TA program staff list the candidate technologies and conditions in the order of their priority scores. According to the model, higher scores will be associated with conditions and technologies of higher priority. TA program staff should also at this time determine whether another organization is already assessing a topic and delete such

Table 4.2 Nomenclature for Priority Setting

Example: Priority Score = W1 ln S1 + W2 ln S2 + ... + W7 ln S7, where W1 = subjectively derived weight for criterion 1, S1 = criterion score for criterion 1, and ln = the natural logarithm of the criterion score.

1. Prevalence (O): weight W1; score = number per 1,000 persons
2. Cost (O): weight W2; score = cost per person
3. Variation in rates (O): weight W3; score = coefficient of variation for rate of use
4. Burden of illness (S): weight W4; score = 1-5 rating
5. Potential for the results of an assessment to change health outcomes (S): weight W5; score = 1-5 rating
6. Potential for the results of an assessment to change costs (S): weight W6; score = 1-5 rating
7. Potential for the results of an assessment to inform ELS issues (S): weight W7; score = 1-5 rating

(O = objective criterion; S = subjective criterion; ELS = ethical, legal, and social issues)

topics from the priority-ranked list for assessment. In addition, staff must decide whether the published literature is sufficient to support an assessment; if it is not, staff have a number of options, as described in Chapter 5.

Step 7. Review by AHCPR National Advisory Council

The seventh and final step involves an authoritative review of the priority list as it exists at the end of Step 6; in the case of OHTA, the AHCPR National Advisory Council would conduct this review. Other agencies or organizations would use other definitive review entities. For simplicity, this discussion focuses on OHTA and AHCPR.

To complete the priority-setting process, TA program staff would provide the advisory council with definitions of the criteria, a list of the criterion weights, the criterion scores for each candidate topic, and the priority list itself. After review and discussion of this material, the council might take one of several actions: recommend adopting the priority list as a whole; recommend adopting it in part and adjusting the priority rankings in various ways; or reject it outright and request a complete revision for re-review. Depending on its conclusions at this stage, the council would then advise the AHCPR administrator about implementing assessments of the highest-ranking topics.

DETAILS OF THE PROPOSED PRIORITY-SETTING PROCESS

Step 1. Selecting and Weighting the Criteria Used to Establish Priority Scores

Selecting Criteria

As will be clear from the technical discussions that follow, the criteria established for this priority-setting process have great importance because so much rests on their clear, unambiguous definition and on the weights that are assigned to them. To ensure that this crucial part of the process is given due attention, the IOM committee recommends that a special panel be convened to participate in a consensus process.

This panel would choose the criteria that will determine the priority scores and assign a weight to each criterion. It should broadly reflect the entire health care constituency in the United States because its purpose is to characterize the preferences of society. (The assumption is that, for OHTA, the agency itself would convene this panel. Other organizations might empanel such bodies independently, use the product of an AHCPR panel, or turn to some neutral institution, such as the IOM, to carry out this critical first step.) The panel would perform this function only once. (Although the IOM committee envisions a face-to-face group process, the criteria might be selected and weighted by means of a mail balloting procedure that uses a formal group judgment method such as a Delphi process. A mailed ballot would require that the staff prepare especially thorough background and training materials.)

The IOM committee considered many possible criteria and recommends the seven that appear in Table 4.1 and that are described fully in Step 5. Chapter 3 argued that the public interest would be well served by a process that assigned priority based on the potential of an assessment to (a) reduce pain, suffering, and preventable deaths; (b) lead to more appropriate health care expenditures; (c) decrease social inequity; and (d) inform other pressing social concerns. The criteria proposed by the committee address these interests.

Weighting Criteria

Various approaches can be used to assign criterion weights. After some discussion of alternatives, the committee chose the following procedure, which is relatively straightforward and can be easily explained, defended, and applied. The discussion below addresses how to assign weights and what scale to use; it includes a description of a workable group method.

The panel, by a formal vote, would choose one criterion to be weighted lowest and would give that criterion a weight of 1. (Any criterion given this weight is neutral in its effect on the eventual priority score.) Panel members would then assign weights to the remaining criteria relative to this least important criterion. For example, assume that criterion A is considered the least significant and is accorded a weight of 1. If criterion C were considered three times as important as criterion A, it would be given a weight of 3.

The scale of the weights is arbitrary. The committee chose to bound the upper end of the scale at 5; therefore, individual weights need not reach, but should not exceed, 5. Weights need not be integers; for example, 2.5 is an acceptable weight. In addition, the same weight can be used more than once. If a panel member believes that no criterion is more important than any other, he or she would assign each a weight of 1.

After each panel member assigns weights, the panel would discuss the weights and, depending on the degree of initial consensus, take one or more revotes. The mean of the weights of individual panel members3 following the second (or last) revote is the criterion weight to be used in Equation (1) for the remainder of the priority-setting process.

3 Because the criterion weighting scale is a ratio scale in which, for instance, a weight of 2 indicates twice the importance of a weight of 1, one might wish to use the geometric rather than the arithmetic mean. There is, however, no logical necessity for using the geometric mean, and the process of determining social preferences (relative importance) can be carried out in any way the panel finds comfortable. The goal is to have the panel replicate something akin to a "social utility function" showing the importance of the various component parts of the priority-scoring model. How those weights are determined does not depend on the mathematical way in which they are eventually used—which, in the committee's model, is in a multiplicative fashion, as expressed in Equation (1).

Step 2. Identifying Candidate Conditions and Technologies

The second step in the IOM committee's process is to identify a list of candidate conditions. An ongoing function of a technology assessment program is to assemble lists of candidate conditions and technologies. This process includes soliciting nominations directly for a large pool of candidate conditions and technologies, accepting suggestions from the usual sources and "customers" of technology assessment, and tracking external events that may affect either the pool or the eventual priority-ranked list.

As a first stage, TA program staff would routinely solicit from a very broad group a list of topics (technologies and clinical conditions) that might be considered for assessment. The IOM/CHCT pilot study assembled a long list of candidate topics using such a process; that list might serve as a source of topics and a taxonomy of technologies for AHCPR and other organizations that conduct assessments. Simultaneously, the TA program would compile and catalog requests that arrive in the usual manner from the Health Care Financing Administration (HCFA), from the Medicaid and CHAMPUS programs, from practitioners and providers and their professional associations, and from other sources.

Finally, TA program staff would be alert to events that affect the characteristics of a technology, clinical condition, or current practice, including the potential to modify patient outcomes. Events that would put a technology or condition on a list of candidates for assessment include a recent, rapid, and unexplained change in the utilization of a technology; an issue of compelling public interest; an issue that is likely to affect health policy decisions; a topic that has created considerable controversy; new scientific information about a new application of an existing technology or the development of a new technology for a particular condition or practice; and a "forcing event," such as a major legal challenge, or any other event that might raise any of a topic's criterion scores.

Step 3. Winnowing the List of Candidate Conditions and Technologies

Any process of obtaining nominations that allows for the input of a broad range of groups should lead to a large number of candidate conditions and technologies. When the IOM/CHCT pilot study used this sort of approach, it received 496 different nominations. Because each technology or condition that

receives a final ranking will require data gathering by OHTA staff and work by the priority-setting panels, it is desirable to find an efficient, low-cost method to reduce the initial list of nominees to a more manageable number. Thus, winnowing the list is the third element of the IOM priority-setting process.

The winnowing step should have several features. First, it should be less costly than the full ranking system; otherwise, it contributes little to the priority-setting process. Second, it should be free of bias and resistant to control by special interests. (For example, no one organization or person should be able to "blackball" a nomination, nor should anyone be able to force a nomination onto the list.) Third, the process should be clearly understandable to all participants.

Possible approaches fall into three groups: intensity ranking, criterion-based preliminary ranking, and panel-based preliminary ranking.

Intensity ranking. The original nominator (a person or organization) would be asked to express some degree of intensity of preference for having individual technologies evaluated. TA program staff would aggregate those rankings and eliminate topics at the lower end of the list before proceeding to a complete ranking of the remaining list.

Criterion-based preliminary ranking. TA program staff would rank all nominated technologies and conditions according to a subset of criteria. They would eliminate some topics on that basis and then proceed to rank the remaining set fully.

Panel-based preliminary ranking. TA program staff would use panels to provide subjective rankings on all or a subset of candidate technologies. Only the highest-ranking topics would remain for the full ranking process.

After discussing all three approaches and variants of each, the IOM committee recommends using the last method—panel-based preliminary ranking.
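A minimal sketch of panel-based preliminary ranking, under the assumption that each panelist rates every nominated topic on a 1-5 scale by mail ballot and that ratings are aggregated by their mean (the report does not fix these details):

```python
# Hypothetical mail-ballot ratings: each panelist rates each nominated
# topic on a 1-5 scale; topics are winnowed by mean rating.
ballots = {
    "panelist 1": {"topic A": 5, "topic B": 2, "topic C": 4, "topic D": 1},
    "panelist 2": {"topic A": 4, "topic B": 3, "topic C": 5, "topic D": 2},
    "panelist 3": {"topic A": 5, "topic B": 1, "topic C": 3, "topic D": 2},
}

def winnow(ballots, keep):
    """Keep the `keep` topics with the highest mean panel rating,
    in descending order of that mean."""
    topics = next(iter(ballots.values()))
    mean = {t: sum(b[t] for b in ballots.values()) / len(ballots)
            for t in topics}
    return sorted(mean, key=mean.get, reverse=True)[:keep]

print(winnow(ballots, keep=2))
```

Only the surviving topics would then go through the full data gathering and scoring of Steps 4 through 6, which is what makes the winnowing step cheaper than ranking every nomination.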
A full description of each approach and the rationale for favoring the panel-based method for OHTA are given in Appendix 4.1 at the end of this chapter. The panel-based method uses one or several panels to provide preliminary (subjective) rankings of the nominated technologies. To minimize costs, these activities could be conducted using mail ballots or (a more modern variant) electronic mail. Two versions of this process can be described: a double-Delphi system and a single-panel, in-and-out system. The committee views neither as preferable to the other.

Double-Delphi system. This method would use two panels that might be constituted with quite different memberships. Each would select (for example) its top 150 unranked technologies. The list for priority setting would include only those technologies that appeared on both lists. In an alternative method, each panel would "keep" (for example) 50 technologies,

the methods used in the study; types and sources of data (e.g., claims data from Medicare, randomized controlled trial data from the literature); the dates of collection of the study's data and of the study report; indications of how much an assessment has been used (e.g., changes in financing policy, citations in medical journals); and, when feasible, evidence of effects on clinical practice. The catalog is the starting point for tracking technologies that have been previously assessed. Ideally, a public agency would also track assessments by other organizations, and OHTA is a logical repository for these data. Such a task might also be undertaken by the National Library of Medicine.

Monitoring the Published Literature on Previously Assessed Topics

The agency should establish a system to monitor the published literature on previously assessed topics, given that up-to-date knowledge of a topic is the foundation for reassessment. Using the search strategies of the original assessment, the staff should monitor the literature to identify high-quality studies that could have a bearing on the decision to reassess and the occurrence, if any, of one of the triggering events listed in Box 4.1. OHTA might also consider creating a network of expert consultants or seeking the support of medical specialty groups that would take responsibility for monitoring the literature on a topic and calling attention to developments that might warrant OHTA reassessment. The IOM has made recommendations for augmenting information resources on health technology assessment in two recent reports (IOM, 1989b, 1991b).

Evaluation of the Quality of Studies

Once literature regarding a topic has accumulated, OHTA staff should evaluate the quality of the studies. Additionally, experts in the content and methodology of the clinical evaluative sciences could review designated studies to advise OHTA on the quality of the evidence being presented. The agency could then decide whether a reassessment is desirable—that is, whether events have occurred since the first assessment that have rendered the original conclusions obsolete. An OHTA panel, presumably a subpanel of OHTA's priority-setting panel, should periodically review the data on previous assessments and decide whether the circumstances warrant reassessment.

Ranking Candidates for Reassessment

The committee recommends that candidates for reassessment be considered on the same basis as candidates for first-time assessment, using the

same process. Thus, OHTA panels would consider topics for first-time assessment at the same time that they consider topics for reassessment, and OHTA would forward a single list of candidates for assessment or reassessment to the AHCPR Advisory Council. That list would contain both candidates for first-time assessment and candidates for reassessment. Figure 4.5 shows the interrelationship of the process for first-time assessments and the process for reassessment.

Figure 4.5 Relationship between the process for first-time assessment and the process for reassessment.
Final Steps after Establishing Priority for Reassessment

After calculating priority scores for reassessment candidates, OHTA should address two additional pertinent topics: the results of a sensitivity analysis and the cost of reassessment.

Sensitivity Analysis

If a previously assessed topic has achieved a high priority score, OHTA staff should use the data that have been assembled for setting criterion scores to perform a sensitivity analysis. The purpose of the analysis is to test whether the new information would change the conclusions of a previous assessment. For example, suppose a diagnostic device (technology A) was assessed previously and a new device (technology B) that is potentially more accurate but that will cost $300 more per patient has become available; a simple sensitivity analysis might indicate whether the recommendations about the use of technology A would change. If those recommendations would not change, even if technology B had perfect sensitivity and specificity, there would be no reason to conduct an assessment of these technologies—not, at least, until the cost of technology B falls relative to technology A.

Cost Analysis

The cost of reassessment will vary widely. Some reassessments will be simple and relatively inexpensive to perform; others will require almost a complete rethinking of the problem. For instance, some analyses, such as those using decision-tree formats, easily permit reassessment as data change. If a new randomized trial alters the perceived treatment effect of an intervention, one can readily incorporate the new data in such an analysis and re-estimate the cost-effectiveness of the various interventions included in the tree. Other reassessments, however, may require a more fundamental change in the analytic approach or incorporation of an entirely new measure of outcomes or costs.
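The break-even logic of the sensitivity analysis described above can be sketched numerically. The sketch below is illustrative only: the net-benefit function and every number in it (prevalence, benefits, costs, test accuracies) are invented for the example, not drawn from the report.

```python
# Illustrative sketch (all numbers hypothetical): a threshold sensitivity
# analysis asking whether technology B, even at its best case, could
# change the recommendation favoring technology A.

def expected_net_benefit(sensitivity, specificity, prevalence,
                         benefit_true_pos, cost_false_pos, test_cost):
    """Expected per-patient net benefit of a diagnostic test (toy model)."""
    benefit = prevalence * sensitivity * benefit_true_pos
    penalty = (1 - prevalence) * (1 - specificity) * cost_false_pos
    return benefit - penalty - test_cost

# Technology A: the previously assessed device (hypothetical performance).
nb_a = expected_net_benefit(0.85, 0.90, 0.10, 1000, 200, 50)

# Technology B at its best case: perfect sensitivity and specificity,
# but $300 more per patient than A.
nb_b_best = expected_net_benefit(1.00, 1.00, 0.10, 1000, 200, 350)

# If even a perfect B cannot beat A, a full reassessment can wait until
# B's cost falls relative to A.
print(round(nb_a, 2), round(nb_b_best, 2), nb_b_best > nb_a)
# 17.0 -250.0 False
```

With these invented numbers, even a perfect technology B loses to A on net benefit, so the sensitivity analysis would argue against commissioning a full reassessment for now.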
SUMMARY

The committee has proposed a priority-setting process that includes seven elements: (1) selecting and weighting criteria for establishing priorities; (2) eliciting broad input for candidate conditions and technologies; (3) winnowing the number of topics; (4) gathering the data needed to assign a score for each priority-setting criterion for each topic; (5) assigning criterion scores to each topic, using objective data for some criteria and a rating scale anchored by low- and high-priority topics for subjective criteria; (6) calcu-
lating priority scores for each condition or technology and ranking the topics in order of priority; and (7) requesting review by the AHCPR National Advisory Council. The chapter defined the seven criteria and explained how to assign scores for each one. Three of the criteria—prevalence, cost, and clinical practice variations—are objective; they are scored using quantitative data to the extent possible. The other four—burden of illness and the likelihood that the results of the assessment will affect health outcomes, costs, and ethical, legal, and social issues—are subjective; they are scored according to ratings on a scale from 1 to 5. The chapter also addressed special aspects of priority setting that apply only to reassessment of previously assessed technologies; these include recognizing events that trigger reassessment (e.g., change in the nature of the condition, in knowledge, in clinical practice); the need to track information related to previous assessments; and the obligation to update a previous assessment as a fiduciary responsibility and to preserve the credibility of the assessing organization.

APPENDIX 4.1: WINNOWING PROCESSES

This appendix discusses in greater detail some of the issues that arise in reducing ("winnowing") a long list of candidate conditions and technologies for possible assessment by the Office of Health Technology Assessment (OHTA). Three general methods are discussed as a basis for the winnowing process: (1) eliciting some sense of the intensity of preference for a candidate on the part of those who nominate it and using this information to winnow; (2) using a single criterion and a process similar to, but much simpler than, the quantitative model; and (3) using an implicit, panel-based process. The appendix offers options within each method and provides a rationale for the committee's suggested choices.
Intensity Rankings by Nominating Persons and Organizations

One difficulty with the "open" nomination process is that it does not necessarily reveal the intensity of the preferences of nominating individuals and organizations. Thus, one way to help establish preliminary priorities is to ask nominators to include a measure of intensity and then to add these measures of interest across all nominating organizations and persons, using the final total as a preliminary ranking. Several variants on this approach are available:

Option A. Ask each nominating group to assign a rank from 1 to 5 (1 = least important; 5 = most important; if an item is not mentioned on a ballot, it receives a rank of 0). Sum the ranks across all ballots. Using that
figure as a preliminary ranking, proceed to final ranking on (for example) the top 50 candidates.

Option B. Proceed as in Option A but allow each ballot a fictitious budget of $1,000 to allocate across all candidate technologies. TA program staff would then add the budget allocations across ballots. For example, an organization could allocate $4 to each of 250 technologies and conditions, $250 to only 4 technologies, or $1,000 to a single technology. This process has the desirable feature of reflecting the scarcity of research resources available for technology assessment.

Option C. Use a more formal "willingness to pay" (WTP) revelation process familiar to economists (e.g., "Clark taxes"). Such techniques attempt to measure directly the willingness of an organization or person to pay for the assessment of a specific technology. The aggregate willingness to pay for a technology assessment (summed across all ballots) represents a measure of the social value of the assessment. (Indeed, some people would assert that, if a WTP assessment is properly done, it could be used as the final priority-setting list.) The committee does not believe that enough is known about the actual conduct and reliability of Clark tax-type methods to base current priority-setting methods on this approach alone, but some organizations may find this technique useful at least in a preliminary stage.

Overall, the committee believes that the use of methods like these for preliminary priority setting—at least in pure form—within the context of a public agency creates some important problems. The committee's questions center on the issue of who is eligible to submit "ballots" and how much each of those ballots should "count."
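The tallying step in Option A is simple enough to sketch in code. The topic names and ballots below are invented for illustration; as the option specifies, topics absent from a ballot implicitly contribute a rank of 0.

```python
from collections import Counter

def tally_rank_ballots(ballots):
    """Option A sketch: each ballot maps a topic to a rank from 1 (least
    important) to 5 (most important); topics absent from a ballot count
    as 0. Returns topics ordered by summed rank, highest first."""
    totals = Counter()
    for ballot in ballots:
        totals.update(ballot)  # sums ranks topic by topic across ballots
    return totals.most_common()

# Toy ballots from three hypothetical nominating organizations.
ballots = [
    {"MRI": 5, "PSA screening": 3},
    {"MRI": 2, "carotid endarterectomy": 4},
    {"PSA screening": 1, "carotid endarterectomy": 5},
]
print(tally_rank_ballots(ballots))
# [('carotid endarterectomy', 9), ('MRI', 7), ('PSA screening', 4)]
```

Option B differs only in what each ballot holds: dollar allocations summing to at most $1,000 rather than ranks. The same summation then applies.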
For example, if open submissions of ballots are allowed or welcomed, and each has equal weight, then lobbying organizations could readily "stuff" the ballot box with numerous ballots, each emphasizing a single technology. (All-Star baseball voting exhibits some of this characteristic, in that fans in some cities may try to tilt the balance in favor of players on their home teams.) One alternative is to limit the distribution of ballots or to determine in advance how much each ballot counts. (For example, the ballot of a large health insurer might count much more than that of an individual provider, and the ballot of a single-purpose charity devoted to the cure of a single disease might reflect some estimate of the size of its constituency.) However, a preliminary assignment process of this kind inherently opens up the entire process to intense political pressure and, indeed, makes it likely that the process will become so expensive that it loses its value as a low-cost screening device. "Open" voting with specified preference intensity (i.e., Option A) raises the possibility that private parties with a strong interest in having a single technology evaluated might spend considerable resources to bring this
about. The importance of OHTA assessments in some Medicare coverage decisions is an obvious reason for attempts to control the priority-setting process. Yet in other settings, these intensity-based preference systems might function extremely well. For example, an association of primary care physicians or a large health maintenance organization (HMO) might wish to undertake its own technology assessment activities and establish its own priorities for this activity. In this case, the membership of the society, the staff, or enrollees of the HMO form a natural basis for voting, and there would be no presumed preference on the part of any one person in these groups to have any single technology evaluated—except as it might affect the well-being of patients. For this reason, the committee includes a description of these preference-intensity voting systems, but it cautions against their use in settings in which they invite strategic responses.

Preliminary Ranking Processes

The winnowing process uses (initially) one criterion from the final ranking system (e.g., prevalence of disease, disease burden, cost per treatment, variability in use) and provides an initial ranking on that basis. This method is more data intensive than the first set of winnowing methods described above but less data intensive than a complete ranking. There are two main variants on this idea:

Option D. Rank all nominations on the criterion that receives the highest weight in the final priority-setting process, keep (say) the top 250, and rank those, using both the highest- and second-highest-weight criteria. This list becomes, in effect, a restricted version of the final ranking process. Keep (for example) the top 100 candidates, and conduct a full ranking on that set.
The logic of this approach is that the criterion weighted highest will in many ways determine the final ranking; at least, it must be true that nominations receiving a low score on the highest-ranked criterion cannot ever receive a high enough score to make a "final 20" or some comparable list. This hierarchical approach thus eliminates nonviable candidate technologies, at a lower data-gathering cost than a complete ranking of each technology, while preserving the essential features of the ranking system. Option E. In the preliminary ranking, one could select the criterion to be used in the initial ranking according to not only the weight assigned in the process but also the costs of data gathering. For example, if the highest-weighted criterion had very high data-gathering costs but the next-highest-weighted criterion had much lower data costs associated with it, one could conduct the initial ranking using the second-highest-weighted criterion instead of the highest-weighted criterion.
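Option D's hierarchical cut amounts to two successive sorts. The sketch below is schematic: the candidate names, criterion names, scores, and cutoffs are invented, and equal weights within each stage are assumed purely to keep the illustration short.

```python
def hierarchical_winnow(scores, criteria, cuts):
    """Option D sketch: `scores` maps candidate -> {criterion: score},
    `criteria` lists criterion names from highest to lowest weight, and
    `cuts` gives how many candidates survive each stage. Stage 1 ranks on
    the top criterion alone; stage 2 re-ranks the survivors on the sum of
    the top two criteria (a simplification of the weighted model)."""
    stage1 = sorted(scores, key=lambda c: scores[c][criteria[0]],
                    reverse=True)[:cuts[0]]
    stage2 = sorted(stage1,
                    key=lambda c: scores[c][criteria[0]] + scores[c][criteria[1]],
                    reverse=True)[:cuts[1]]
    return stage2

# Toy example with 4 candidates, keeping 3 and then 2 (OHTA would use
# cutoffs like 250 and 100 on a much longer list).
scores = {
    "tech A": {"prevalence": 9, "cost": 1},
    "tech B": {"prevalence": 5, "cost": 9},
    "tech C": {"prevalence": 8, "cost": 8},
    "tech D": {"prevalence": 1, "cost": 9},
}
print(hierarchical_winnow(scores, ["prevalence", "cost"], (3, 2)))
# ['tech C', 'tech B']
```

Note how "tech D", strong on cost but weak on the highest-weight criterion, never reaches stage 2; that is exactly the screening behavior the text describes.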
These types of preliminary ranking systems have an obvious disadvantage: they require that data be collected on a potentially large number of technologies. This reason alone may argue against their use in a setting where a widespread "call" to suggest interventions is likely to produce a large number of candidate topics. Another, more subtle issue deserves mention: using other methods for preliminary screening produces an independence between two parts of the priority-setting system that the use of only one technique cannot achieve. Some people may view as a virtue the idea that the winnowing system and the final ranking system follow the same methodological basis. Others may see this commonality as a defect to be guarded against by using an alternative method for preliminary screening.

Panel-Based Preliminary Weighting

On balance, the committee believes that methods from a third group of options are preferable for preliminary screening. This approach uses one or more panels of experts to provide preliminary (subjective) rankings of the nominated technologies. To minimize costs, these activities could be conducted using mail ballots or (a modern variant) electronic mail. Two principal versions of this process are possible:

Option F. Double-Delphi system. Use two separate panels, constituted with quite different memberships, and have them select (say) their top 150 technologies (and leave them unranked). Keep for final priority setting only those technologies that appear on both lists. As an alternative, each panel could "keep" perhaps 50 technologies, and the final ranked list would include those that appeared on at least one list. The Delphi rankings could be based either on the subjective, implicit judgments of panel members (which makes this tactic a relatively low-cost alternative) or on data supplied to the Delphi panels (a higher-cost option).
The two Delphi panels should have distinctly different memberships; in one case, perhaps, the panel would be entirely health care practitioners, and in the other, health services researchers, consumer representatives, and others not directly involved in providing care. Particularly if no data were to be presented, it would be necessary to have panels that possessed sufficient technical expertise to understand the implications of their decisions. Option G. Single-panel in-and-out system. This approach would use only a single expert panel that would generate two sets of technology lists. The topics on the first list (consisting, for instance, of 5 percent of the submitted nominations) would automatically go forward to the next step in the process. The bottom (say) 50 percent of nominations would be excluded from further consideration. The remaining 45 percent (in this example)
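Option F's keep-only-the-intersection rule is a set operation. The topic lists below are invented for illustration; real panels would each submit on the order of 150 topics.

```python
def double_delphi_keep(panel_a, panel_b):
    """Option F sketch: retain only topics selected by BOTH independently
    constituted panels. Order within each panel's list is irrelevant,
    since the panels leave their selections unranked."""
    return sorted(set(panel_a) & set(panel_b))

clinicians = ["hip replacement", "MRI", "PSA screening"]
researchers = ["MRI", "cochlear implant", "hip replacement"]
print(double_delphi_keep(clinicians, researchers))
# ['MRI', 'hip replacement']
```

The alternative variant described in the text (the union of two shorter "keep" lists) would simply replace the intersection `&` with a union `|`.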
would go to a second-tier winnowing process. The second-tier process could be either (a) a repeat of this process or (b) some sort of data-driven system similar to Option D above. This process is entirely self-contained if used recursively. As an example of such "recursive" use, suppose that the original round of nominations had produced 800 candidate technologies for evaluation. During the first round, 5 percent (40 technologies) would be retained and 50 percent (400 technologies) would be eliminated from consideration, leaving 360 technologies. Reapplying the same rules but keeping (for example) 10 percent during this round and eliminating (again) 50 percent would retain 36 and eliminate another 180 technologies, making a total of 76 technologies preserved at this stage and 144 as yet unclassified. Finally, keeping 10 percent of these unclassified technologies would bring the total to 90, and the process could stop there. These 90 technologies would continue through the full priority-setting process.

Comment

The final choice of a winnowing method for an organization could well depend on the degree of openness versus expert knowledge desired in the process. The most open systems are those in the first group discussed earlier (Options A-C) because they rely on intensity of preferences as expressed by nominating organizations. Because of their openness, however, they intrinsically invite expenditure of private resources and attempts to control the system. The third group of approaches (Options F and G) makes fullest use of expert knowledge. The second set—ranking on the basis of a subset of the eventual criteria—best preserves the intent of the final priority-setting process but is more data intensive and thus potentially more costly.
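The recursive 800-nomination example can be reproduced directly. The keep and eliminate fractions come from the text; flooring fractional counts is an assumption of the sketch (10 percent of the final 144 becomes 14, which matches the text's total of 90).

```python
def in_and_out(pool, keep_fracs, drop_frac=0.50):
    """Option G sketch: in each round, the panel automatically advances
    keep_fracs[i] of the still-unclassified pool, eliminates drop_frac,
    and passes the remainder to the next round. Fractional counts are
    floored. Returns (total advanced, still unclassified)."""
    advanced = 0
    for frac in keep_fracs:
        keep = int(pool * frac)
        drop = int(pool * drop_frac)
        advanced += keep
        pool -= keep + drop
    return advanced, pool

# The chapter's worked example: 800 nominations; keep 5%, then 10%,
# then 10%, eliminating 50% each round.
print(in_and_out(800, [0.05, 0.10, 0.10]))
# (90, 58)
```

After three rounds, 90 technologies advance to the full priority-setting process (40 + 36 + 14), exactly as in the worked example, with 58 left unclassified when the process stops.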
Organizations engaged in priority setting may also find it useful to use a winnowing process that quite deliberately does not follow the same approach as the final process. They would then use activities from the first or third groups rather than the second. As was argued earlier, because of the intrinsic problems associated with the first two groups of winnowing strategies as applied in the OHTA setting, the committee recommends that OHTA adopt as a preliminary winnowing system either Option F or Option G in the panel-based set. The committee has no strong preference for either option; ease of implementation thus could be a key consideration in the ultimate choice. Option G offers a recursive system on the grounds that it is easier to pick "the top" and "the bottom" of any list than it is to rank every element within it. In other settings, the other methods have potential value and may well be preferred to any from the third group. For example, in settings with a narrow focus that leads to a limited number of submissions, data-driven methods similar to those in the second group (Options D and E) may be the
best choice, particularly if the organization conducting the priority assessment values consistency of method in the preliminary and final priority-setting processes. In other arenas, options from the first group (A, B, and C) may have considerable appeal, particularly for those with a naturally closed population from which nominations might emerge and in which no particularly strong stake exists in having any single technology or condition studied. Finally, one can imagine combinations of the processes; for instance, a panel-based preliminary ranking might use Option B or C to distribute votes or hypothetical dollars. Regardless of the choice of process, the committee believes that it is desirable in any priority-setting process to rely at least in part on nominating organizations to provide information relevant to the final process—for example, information on costs, prevalence of disease, burden of disease, and variability of treatment use across geographic regions.

Finally, it must be remembered that the winnowing process plays only a minor role in determining the eventual set of activities chosen for technology assessment. Its only goal is to speed up (and reduce the costs of) the final priority-setting process. To the extent that winnowing achieves this goal at low cost and without eliminating technologies that would otherwise be assigned high priority, it has succeeded; conversely, to the extent that it becomes elaborate and expensive, it defeats the purpose of using any winnowing strategy.
For these reasons, the committee advises the choice of a winnowing technique that reflects the goals of simplicity, avoidance of control by special interests, and low cost.6

APPENDIX 4.2: METHODOLOGIC ISSUES

Two key methodologic issues arise in deriving a formula for the technology assessment priority score: (1) the scale on which each of the criterion scores is expressed and (2) the means used to maintain consistent relationships among the weights assigned to each criterion. In regard to scaling, the priority-setting process outlined by the committee uses logarithms of each criterion score. The discussion that follows explains the choice of the particular logarithmic approach used by the committee. The IOM priority-setting process uses "natural" units for objective criteria, such as the prevalence of a condition and the costs of care for the condition (e.g., "head counts" for the number of people affected, dollars for cost).

6 Low cost in this context includes both public and private expenditures. Procedures established within the priority-setting process that invite considerable investment by private parties to manipulate them are self-defeating.
Yet using natural units would mean that every weight could be affected by a change in the scale of any other measure. For example, a quite natural scale of measurement of per-capita spending is the proportion of spending on the disease relative to the range of spending on all diseases under consideration. Thus, if per-capita spending ranged from $1 per person (at the low end) to $1,000 (at the high end), one could use a measure of where $500 fits between the low and the high end (i.e., (500 - 1)/(1,000 - 1) = 499/999, or about 0.5), rather than the $500 itself. With this type of measure (scaled by the range across all interventions), the value for the criterion score would be 0.5. In a model based on natural units, to maintain the importance of "spending," one would have to modify the weight so that the product of the criterion weight and the criterion score remained unchanged. Thus, if the scale of measurement of any of these components were changed, the weight would also have to be changed to keep the product unchanged. The problem of the interaction between the weights and the scale of measurement of the values that determine a criterion score can be avoided by a simple mathematical modification. By using relative importance to determine the criterion weights, the logarithmic transformation provides the same results regardless of the scale by which each of the component "scales" is measured.

Properties of Logarithms

For those not familiar with the mathematics of logarithms, it may be helpful to review two of their properties. In the general equation b^y = x, the exponent y is called the log of x to the base b (one can describe the log as the exponent y to which the base b must be raised to get the number x). Thus, an equivalent expression is y = log_b(x). The first property of logarithms is that

log_b(xw) = log_b(x) + log_b(w).
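This property, and the second one introduced below, can be checked numerically; the sketch uses Python's `math.log`, which is the natural logarithm.

```python
import math

x, w, r = 500.0, 20.0, 2.0

# Property 1: the log of a product is the sum of the logs,
# log_b(x*w) = log_b(x) + log_b(w) -- the "log additive" model.
assert math.isclose(math.log(x * w), math.log(x) + math.log(w))

# Property 2: exponents come out in front, log_b(x**r) = r*log_b(x),
# which is how criterion weights become multipliers of the log scores.
assert math.isclose(math.log(x ** r), r * math.log(x))

print("both properties hold")
```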
That is, the log of the product of x and w is the sum of the log of x and the log of w (thus, we use the term "multiplicative" or "log additive" to describe the committee's model). The committee's model might use a logarithm with any base, but the committee chose to use natural logarithms (ln), in which the base is e (an irrational number whose decimal expansion is 2.71828 ...). Substituting the term ln for log and the expressions S1 and S2 for x and w, respectively, it is apparent that y = ln(S1) + ln(S2). The second property of logarithms helps to explain the role of criterion weights:

log_b(x^r) = r log_b(x).

In other words, the log of x raised to the power r equals r multiplied by the log of x. Again, substituting the committee's expressions as above shows that taking the natural log of a criterion score raised to the power of its weight is equivalent to multiplying the crite-
rion weight by the natural log of the criterion score. Thus, with y = Priority Score, Equation (1) of Chapter 4 becomes:

Priority Score = W1 ln(S1) + W2 ln(S2) + ... + W7 ln(S7),    (1)

where Wi is the weight and Si the criterion score for criterion i.

Application to the IOM Model

The use of logarithms is neither intuitive nor familiar to most people, but it does express a natural way of thinking. The logarithmic transformation will accomplish the desired scaling, no matter what "natural" scaling is used. All that is necessary to implement this approach is for participants in the priority-setting process to agree that the relative weights represent the relative importance of a criterion. One can then measure the individual score components in any way one desires, as long as measurements are consistent across technologies for a criterion. A weight of 1 yields proportional increases in priority as a component increases; a weight of 2 increases a priority score by about 20 percent for each 10 percent increase in a criterion score.

To provide an example of how the use of relative importance can eliminate worry about how the various components in the criterion scores are measured, consider the Phelps and Parente (1990) model. In this model, N = the number of people treated annually, P = the average cost per procedure performed, Q = the average per-capita quantity, COV = the coefficient of variation for the procedure across regions, and e = the demand elasticity. The priority-setting index I for intervention j is:

I_j = (N_j)^W1 (P_j)^W2 (Q_j)^W3 (COV_j)^W4 (e_j)^W5.    (5)

If one assigns the relative importance weights for each element in Equation (5) as 1, 1, 1, 2, and -1 and takes the logarithm of each, then

ln(I_j) = ln(N_j) + ln(P_j) + ln(Q_j) + 2 ln(COV_j) - ln(e_j).    (6)

Mathematically, the effect of changing the values of the variables on the right side of Equations (5) and (6) can be expressed in terms of percentages.
Thus, a 10 percent increase in the number of people treated with intervention j (N_j) raises the value of I_j by 10 percent (and similarly for P_j and Q_j); a 10 percent increase in COV_j increases the index by about 20 percent; and a 10 percent increase in e_j decreases the index by about 10 percent. Using logarithms is an approach intended to reflect relative place on a scale of importance. In producing priority scores for each candidate condition or technology, the relative ranking of each procedure will be the same regardless of how each of the criterion scores is measured. The relative difference in priority scores likewise will be unaffected by changes in the scale used to measure any criterion score.
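These percentage effects, and the scale invariance of the resulting rankings, can be checked numerically. The variable values below are invented; the functional form (weights 1, 1, 1, 2, and -1 on N, P, Q, COV, and e) is a reconstruction from the text and should be treated as such.

```python
import math

def priority_index(N, P, Q, COV, e):
    """Phelps-Parente-style index with relative-importance weights
    1, 1, 1, 2, and -1 on N, P, Q, COV, and e respectively
    (reconstructed form; all inputs below are hypothetical)."""
    return N * P * Q * COV**2 / e

base = priority_index(N=10_000, P=500.0, Q=1.2, COV=0.30, e=0.8)

# A 10% rise in COV multiplies the index by 1.1**2 = 1.21, i.e. roughly
# the 20 percent increase the log-linear reading suggests.
bumped_cov = priority_index(N=10_000, P=500.0, Q=1.2, COV=0.33, e=0.8)
print(round(bumped_cov / base, 2))  # 1.21

# Rescaling any component (say, P in cents instead of dollars) multiplies
# EVERY index by the same constant, so relative rankings are unchanged.
rescaled = priority_index(N=10_000, P=50_000.0, Q=1.2, COV=0.30, e=0.8)
print(round(rescaled / base))  # 100
```

The second check illustrates the section's central claim: changing the measurement scale of a component shifts all priority scores by a common factor and therefore cannot reorder the candidates.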