
Setting Priorities for Health Technologies Assessment: A Model Process (1992)

Chapter: 4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS

Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.

4
Recommendations for a Priority-Setting Process

Chapter 3 outlined the general principles of a priority-setting process for conducting technology assessments. Such a process should (1) be consistent with the mission of the organization, (2) provide a product compatible with its needs, (3) be efficient, and (4) be sensitive to the political and social context in which it is used. The process proposed in this chapter incorporates elements that the committee believes are in accord with these principles.

First, the committee's approach uses a broadly representative panel, and the priority-setting criteria reflect several dimensions of social need. It is an explicit process that includes a quantitative model as described in this chapter. The process is intended to be open, understandable, and modifiable as experience with it grows. These characteristics are consistent with the mission of the Office of Health Technology Assessment (OHTA) as a public agency.

Second, the process will produce a list of conditions and technologies ranked in order of their importance for assessment.

Third, the process provides for broad public participation in assembling a list of candidate conditions but then winnows the list to identify important topics, using data when they are available and consensus judgments when data are unavailable. The committee believes that this approach will result in a process that is efficient but that still serves the other principles.

Fourth, the committee's priority-setting process is intended to be sensitive to its political context: it is open to scrutiny, resistant to control by special interests, and includes review by a publicly constituted and accountable advisory body.


Figure 4.1 Overview of the IOM priority-setting process.

The proposed process includes a quantitative model for calculating a priority score for each candidate topic. In this chapter, the term process is used for the entire priority-setting mechanism; the term model is used for the quantitative portion of that process that combines criterion scores to produce a priority score.

The model incorporates seven criteria with which to judge a topic's importance. It combines scores and weights for each criterion to produce a priority ranking for each candidate topic. Nevertheless, using the model requires judgments by a panel, data gathering by OHTA program staff, and review by the National Advisory Council of the Agency for Health Care Policy and Research (AHCPR).

During the summer of 1991, the IOM committee pilot-tested its methodology by gathering data on a number of conditions and technologies and using an early version of its model to rank 10 topics. The committee compared two methods of obtaining inputs for the model—a panel meeting and a mail ballot—and modified the model based on this experience. The methods and results of the pilot test are described in Appendix A.

Figure 4.1 is an overview of the proposed steps and participants in the priority-setting process.

PREVIEW OF THE QUANTITATIVE MODEL

The committee's proposed process is a hybrid. It combines features of "objective," model-driven priority-setting methods (such as that of Phelps and Parente [1990]) and of a consensus-based Delphi approach, such as that used by the IOM's Council on Health Care Technology (IOM/CHCT) in its pilot study (described in Chapters 1 and 2).1

The model combines three components: (1) seven criteria; (2) a corresponding set of seven criterion weights (W1 ... W7) that reflect the importance of each criterion; and (3) a set of seven criterion scores (S1 ... S7) for each candidate condition or technology. The final "index" of importance of a topic is its priority score: the sum of the seven criterion scores (Si), each multiplied by its criterion weight (Wi).2

This priority score or index is calculated as shown in Equation (1) below:

1  

As noted in Chapter 1, the Council on Health Care Technology no longer exists at the Institute of Medicine. The pilot study described here is referred to as the IOM/CHCT pilot study to distinguish it from the pilot test conducted for the present project.

2  

In the Phelps-Parente model, characteristics such as equality of access and gender- or race-related differences in disease incidence have no effect on the overall ranking. Indeed, the output of the Phelps-Parente model was, by design, only one of several inputs into the consensus process that the IOM/CHCT pilot study committee used. In addition, the Phelps-Parente model uses only objective data to measure such things as spending and degree of medical disagreement (as measured by the coefficient of variation). This characteristic is an important limitation to the use of such models, since they cannot be applied when such formal, objective data are unavailable. This drawback may be especially severe in settings where data are limited (e.g., nursing home, home care, and ambulatory care) and when new technologies have had little use and have not (yet) been captured in data bases.


Priority Score = W1lnS1 + W2lnS2 + ... + W7lnS7     (1)

where W is the criterion weight, S is the criterion score, and ln denotes the natural logarithm of the criterion score. The derivation of this formula, an explanation of why the natural logarithm is used, and a description of its component terms are discussed fully in a later section of this chapter.
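As an illustrative sketch of Equation (1), the calculation can be carried out as follows. The weights and scores below are invented for demonstration and are not values from the report:

```python
import math

# Hypothetical criterion weights (W1..W7) and criterion scores (S1..S7)
# for a single candidate topic; all values are illustrative only.
weights = [5.0, 4.0, 1.0, 3.5, 4.5, 2.0, 1.5]
scores = [120.0, 3.2, 4500.0, 0.4, 4.0, 3.0, 2.0]

# Equation (1): priority score = sum over i of Wi * ln(Si).
priority_score = sum(w * math.log(s) for w, s in zip(weights, scores))
print(round(priority_score, 2))
```

Note that a criterion score below 1 (such as a coefficient of variation of 0.4) contributes a negative term, because its natural logarithm is negative.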

The model incorporates several forms of knowledge about technologies. The first is "empirical data," such as the prevalence of a condition. The second is "estimated data," which are used when objective data are missing, incomplete, or conflicting (e.g., the number of patients who will use erythropoietin 5 years from now). Third are intrinsically subjective ratings, such as the likelihood that a technology assessment will affect health outcomes.

ELEMENTS OF THE PROPOSED PRIORITY-SETTING PROCESS

The IOM committee recommends a priority-setting process with seven primary components, or "steps." These steps are numbered in Figure 4.1 and are described briefly below; they are discussed in greater detail in the remainder of this chapter. In this discussion, the committee uses the term technology assessment (TA) program staff to mean people in a government agency or private-sector organization who are responsible for implementing a technology assessment and reassessment program of sufficient size to warrant a priority-setting process. Similarly, although the term staff is used to refer to the staff of OHTA at AHCPR, the term could apply equally to the staff of any agency or technology assessment organization.

Step 1. Selecting and Weighting Criteria Used to Establish Priorities

The first step that OHTA should take is to convene a broadly representative panel to select and define criteria for priority setting. Criteria can be both objective and subjective. The panel should also assign to each criterion a weight that reflects its relative importance.

The IOM committee proposes and later defines seven criteria: three objective criteria—prevalence, cost, and variation in rates of use; and four subjective criteria—burden of illness, potential of the results of the assessment to change clinical outcomes, potential of the results of the assessment to change costs, and potential of the results of the assessment to inform ethical, legal, and social (ELS) issues. Table 4.1 defines these criteria. The justification for each and "instructions for use" appear later in this chapter under Step 5.

Different organizations might, through their own procedures, choose different criteria and assign different weights to each. The IOM committee believes there are good reasons why the seven criteria that it chose are the


Table 4.1 Criteria Recommended for the IOM Priority-Setting Process

No.  Criterion (Typea)                  Definition
1    Prevalence (O)                     The number of persons with the condition per 1,000 persons in the general U.S. population
2    Burden of illness (S)              The difference in quality-adjusted life expectancy (QALE) between a patient who has the condition and receives conventional treatment and the QALE of a person of the same age who does not have the condition
3    Cost (O)                           The total direct and induced cost of conventional management per person with the clinical condition
4    Variation in rates of use (O)      The coefficient of variation (standard deviation divided by the mean) in rates of use
5    Potential of the results of an     The expected effect of the results of the assessment on the
     assessment to change health        outcome of illness for patients with the illness
     outcomes (S)
6    Potential of the results of an     The expected effect of the results of the assessment on the
     assessment to change costs (S)     cost of illness for patients with the illness
7    Potential of the results of an     The probability that an assessment comparing two or more
     assessment to inform ethical,      technologies will help to inform important ethical, legal,
     legal, or social issues (S)        or social issues

a O = Objective criterion; S = subjective criterion.

best for OHTA to select. These criteria and the weights assigned to them are used in a quantitative model for calculating priority scores for each candidate for assessment.

Step 2. Identifying Candidate Conditions and Technologies

To generate the broadest possible list of candidate technologies for assessment, TA program staff should seek nominations from a wide range of groups concerned with the health of the public. These groups include patients, payers, providers, ethicists, health care administrators, insurers, manufacturers, legislators, and the organizations that represent or advocate for them. TA program staff should also track candidate technologies and gather information on relevant political, economic, or legal events; these might include the emergence of a new technology or new information regarding practice patterns for an established technology, a legal precedent-setting case, an assessment of a technology performed by another organization, completion of a pertinent randomized clinical trial, or the appearance of other new scientific information.


Step 3. Winnowing the List of Candidate Conditions and Technologies

Once TA program staff have identified what is likely to be a very large set of candidate conditions, they should set in motion a method to "winnow" this initial list to a more manageable one that retains the most important topics. Winnowing reduces the workload both of TA program staff (who must obtain a data set about each topic that will be ranked) and of the panels. Ideally, this process of winnowing will be much less costly than the full ranking system and will be, like the overall priority-setting process, free of bias, resistant to control by special interests, open to scrutiny, and clearly understandable to all participants. The committee discusses several possible methods later in this chapter and proposes one for OHTA and other groups.

Step 4. Data Gathering

When the starting point for the priority-setting process is a clinical condition, TA program staff should define all alternative technologies for managing that condition. In this context, "managing" includes primary screening and prevention, diagnosis, treatment, rehabilitation, palliation, and other similar elements of care. For each condition under consideration, OHTA staff must gather the data required for each priority-setting criterion. Analogously, when the starting point is a technology, TA program staff need to specify the most important clinical conditions for which it is relevant and any other relevant technologies and amass the data required for each priority-setting criterion. The data include numbers (e.g., prevalence, cost) and facts with which to inform a subjective judgment (e.g., a list of current ethical, legal, and social issues).

Step 5. Creating Criterion Scores

At this point, the IOM process calls for panels to develop criterion scores (the S1-S7 elements in Equation [1]). One or more expert panels, which might be subpanels of the broadly representative panel that sets criterion weights, would determine criterion scores for objective criteria, using the data that have been assembled by TA program staff for each condition. Assigning scores for objective criteria will require expertise in epidemiology, clinical medicine, health economics, and statistics when data are missing, incomplete, or conflicting. One or more representative panels, which might be the same individuals as those setting criterion weights, would use consensus methods to assign scores for subjective criteria.


Step 6. Computing Priority Scores

From all these inputs, TA program staff would use the quantitative model embodied in Equation (1) to calculate a priority score for each condition. This calculation is performed as follows: (a) find the natural logarithm of each criterion score; (b) multiply that figure (i.e., the natural logarithm) by the criterion weight to obtain a weighted criterion score; and (c) sum these weighted scores to obtain a priority score for each condition or technology. The quantitative model combines empirical rates (e.g., number of people affected per 1,000 in the U.S. population) and subjective ratings (e.g., burden of illness) for each criterion (each given a certain "importance" by virtue of its particular weight) to produce a priority score. Table 4.2 illustrates the process.
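Steps (a) through (c), followed by an ordering of topics by score, can be sketched as below. The topic names, weights, and scores are hypothetical:

```python
import math

# Hypothetical criterion weights (shared across topics) and per-topic scores.
weights = [5.0, 4.0, 1.0, 3.5, 4.5, 2.0, 1.5]
topics = {
    "Topic A": [120.0, 3.2, 4500.0, 0.4, 4.0, 3.0, 2.0],
    "Topic B": [15.0, 4.5, 30000.0, 0.9, 3.0, 4.0, 3.0],
}

def priority_score(scores):
    # (a) take ln of each criterion score, (b) multiply by its weight,
    # (c) sum the weighted terms.
    return sum(w * math.log(s) for w, s in zip(weights, scores))

# Higher priority scores rank first.
ranked = sorted(topics, key=lambda t: priority_score(topics[t]), reverse=True)
print(ranked)
```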

In the second part of this step, TA program staff list the candidate technologies and conditions in the order of their priority scores. According to the model, higher scores will be associated with conditions and technologies of higher priority. TA program staff should also at this time determine whether another organization is already assessing a topic and delete such

Table 4.2 Nomenclature for Priority Setting

Example:

Priority Score = W1lnS1 + W2lnS2 + ... + W7lnS7

where W1 = subjectively derived weight for criterion 1, S1 = criterion score for criterion 1, and ln = the natural logarithm of the criterion score.

Criterion Name (Type)a                             Criterion Weight (W)   Criterion Score (S)
Prevalence (O)                                     W1                     Number/1,000 persons
Cost (O)                                           W2                     Cost/person
Variations in rates (O)                            W3                     Coefficient of variation for rate of use
Burden of illness (S)                              W4                     1-5 rating
Potential for the results of an assessment         W5                     1-5 rating
  to change health outcomes (S)
Potential for the results of an assessment         W6                     1-5 rating
  to change costs (S)
Potential for the results of an assessment         W7                     1-5 rating
  to inform ELSb (S)

a Criterion types: O = objective; S = subjective.

b ELS = Ethical, legal, and social issues.


topics from the priority-ranked list for assessment. In addition, staff must decide whether the published literature is sufficient to support an assessment; if it is not, they have a number of options, as described in Chapter 5.

Step 7. Review by AHCPR National Advisory Council

The seventh and final step involves an authoritative review of the priority list as it exists at the end of Step 6; in the case of OHTA, the AHCPR National Advisory Council would conduct this review. Other agencies or organizations would use other definitive review entities. For simplicity, this discussion focuses on OHTA and AHCPR.

To complete the priority-setting process, TA program staff would provide the advisory council with definitions of the criteria, a list of the criterion weights, the criterion scores for each candidate topic, and the priority list itself. After review and discussion of this material, the council might take one of several actions: recommend adopting the priority list as a whole; recommend adopting it in part and adjusting the priority rankings in various ways; or reject it outright and request a complete revision for re-review. Depending on its conclusions at this stage, the council would then advise the AHCPR administrator about implementing assessments of the highest ranking topics.

DETAILS OF THE PROPOSED PRIORITY-SETTING PROCESS

Step 1. Selecting and Weighting the Criteria Used to Establish Priority Scores

Selecting Criteria

As will be clear from the technical discussions that follow, the criteria established for this priority-setting process have great importance because so much rests on their clear, unambiguous definition and on the weights that are assigned to them. To ensure that this crucial part of the process is given due attention, the IOM committee recommends that a special panel be convened to participate in a consensus process.

This panel would choose the criteria that will determine the priority scores and assign a weight to each criterion. It should broadly reflect the entire health care constituency in the United States because its purpose is to characterize the preferences of society. (The assumption is that, for OHTA, the agency itself would convene this panel. Other organizations might empanel such bodies independently, use the product of an AHCPR panel, or turn to some neutral institution, such as the IOM, to carry out this critical first step.)

The panel would perform this function only once. (Although the IOM committee envisions a face-to-face group process, the criteria might be selected and weighted by means of a mail balloting procedure that uses a formal group judgment method such as a Delphi process. A mailed ballot would require that the staff prepare especially thorough background educational materials.)

The IOM committee considered many possible criteria and recommends the seven that appear in Table 4.1 and that are described fully in Step 5. Chapter 3 argued that the public interest would be well served by a process that assigned priority based on the potential of the assessment to (a) reduce pain, suffering, and preventable deaths; (b) lead to more appropriate health care expenditures; (c) decrease social inequity; and (d) inform other pressing social concerns. The criteria proposed by the committee address these interests.

Weighting Criteria

Various approaches can be used to assign criterion weights. After some discussion of alternatives, the committee chose the following procedure, which is relatively straightforward and can be easily explained, defended, and applied. The discussion below addresses how to assign weights and what scale to use. It includes a description of a workable group method.

The panel, by a formal vote, would choose one criterion to be weighted lowest, and it would give that criterion a weight of 1. (Any criterion given this weight is neutral in its effect on the eventual priority score.) Panel members would then assign weights to the remaining criteria relative to this least important criterion. For example, assume that criterion A is considered the least significant and is accorded the weight of 1. If criterion C were considered three times as important as criterion A, it would be given a weight of 3.

The scale of the weights is arbitrary. The committee chose to bound the upper end of the scale at 5. Therefore, individual weights need not reach, but should not exceed, 5. Weights need not be integers; for example, 2.5 is an acceptable weight. In addition, the same weight can be used more than once. If a panel member believes that no criterion is more important than any other, he or she would assign to each a weight of 1.

After each panel member assigns weights, the panel would discuss the weights and, depending on the degree of initial consensus, take one or more revotes. The mean of the weights of individual panel members3 following the second (or last) revote is the criterion weight to be used in Equation (1) for the remainder of the priority-setting process.

3  

Because the criterion weighting scale is a ratio scale in which, for instance, a weight of 2 indicates twice the importance of a weight of 1, one might wish to use the geometric rather than arithmetic mean. There is, however, no logical necessity for using the geometric mean, and the process of determining social preferences (relative importance) can be carried out in any way the panel finds comfortable. The goal is to have the panel replicate something akin to a "social utility function" showing the importance of various component parts of the priority-scoring model. How those weights are determined does not depend on the mathematical way in which they are eventually used—which, in the committee's model, is in a multiplicative fashion, as expressed in Equation (1).
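The averaging described above can be sketched as follows; the alternative noted in footnote 3, the geometric mean, is shown alongside the arithmetic mean. The panel votes are hypothetical:

```python
import math

# Hypothetical final-revote weights for one criterion from five panel members.
votes = [1.0, 2.5, 3.0, 2.0, 2.5]

# Arithmetic mean, as the text recommends.
arithmetic_mean = sum(votes) / len(votes)

# Geometric mean, the footnote's alternative for a ratio-type scale:
# exp of the mean of the logs.
geometric_mean = math.exp(sum(math.log(v) for v in votes) / len(votes))

print(round(arithmetic_mean, 2), round(geometric_mean, 2))
```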

Step 2. Identifying Candidate Conditions and Technologies

The second step in the IOM committee's process is to identify a list of candidate conditions. An ongoing function of a technology assessment program is to assemble lists of candidate conditions and technologies. This process includes soliciting nominations directly for a large pool of candidate conditions and technologies, accepting suggestions from usual sources and "customers" of technology assessment, and tracking external events that may affect either the pool or the eventual priority-ranked list.

As a first stage, TA program staff would routinely solicit from a very broad group a list of topics (technologies and clinical conditions) that might be considered for assessment. The IOM/CHCT pilot study assembled a long list of candidate topics using such a process; that list might serve as a source of topics and a taxonomy of technologies for AHCPR and other organizations that conduct assessments.

Simultaneously, the TA program would compile and catalog requests that arrive in the usual manner from the Health Care Financing Administration (HCFA), from the Medicaid and CHAMPUS programs, from practitioners and providers and their professional associations, and from other sources.

Finally, TA program staff would be alert to events that affect the characteristics of a technology, clinical condition, or current practice, including the potential to modify patient outcomes. Events that would put a technology or condition on a list of candidates for assessment are

  • a recent rapid and unexplained change in utilization of a technology;

  • an issue of compelling public interest;

  • an issue that is likely to affect health policy decisions;

  • a topic that has created considerable controversy;

  • new scientific information about a new application of an existing technology or the development of a new technology for a particular condition or practice; and

  • a "forcing event," such as a major legal challenge, or any other event that might raise any of a topic's criterion scores.

Step 3. Winnowing the List of Candidate Conditions and Technologies

Any process of obtaining nominations that allows for the input of a broad range of groups should lead to a large number of candidate conditions and technologies. When the IOM/CHCT pilot study used this sort of approach, it received 496 different nominations. Because each technology or condition that receives a final ranking will require data gathering by OHTA staff and work by the priority-setting panels, it is desirable to find an efficient, low-cost method to reduce the initial list of nominees to a more manageable number. Thus, winnowing the list is the third element of the IOM priority-setting process.

The winnowing step should have several features. First, it should be less costly than the full ranking system; otherwise, it contributes little to the priority-setting process. Second, it should be free of bias and resistant to control by special interests. (For example, no one organization or person should be able to "blackball" a nomination or force one onto the list.) Third, it should be clearly understandable to all participants. Possible approaches fall into three groups: intensity ranking, criterion-based preliminary ranking, and panel-based preliminary ranking.

  • Intensity ranking. The original nominator (a person or organization) would be asked to express some degree of intensity of preference for having individual technologies evaluated. TA program staff would aggregate those rankings and eliminate topics at the lower end of the list before proceeding to a complete ranking of the remaining list.

  • Criterion-based preliminary ranking. TA program staff would rank all nominated technologies and conditions according to a subset of criteria. They would eliminate some topics on that basis and then proceed to rank the remaining set fully.

  • Panel-based preliminary ranking. TA program staff would use panels to provide subjective rankings on all or a subset of candidate technologies. Only the highest ranking topics would remain for the full ranking process.

After discussing all three approaches and variants of each, the IOM committee recommends using the last method—panel-based preliminary ranking. A full description of each approach and the rationale for favoring the panel-based method for OHTA are given in Appendix 4.1 at the end of this chapter.

The panel-based method uses one or several panels to provide preliminary (subjective) rankings of the nominated technologies. To minimize costs, these activities could be conducted using mail ballots or (a more modern variant) electronic mail.

Two versions of this process can be described: a double-Delphi system and a single-panel, in-and-out system. The committee does not view one or the other as preferable.

  • Double-Delphi system. This method would use two panels that might be constituted with quite different memberships. Each would select (for example) their top 150 unranked technologies. The list for priority setting would include only those technologies that appeared on both lists. In an alternative method, each panel would "keep" (for example) 50 technologies, and the list for priority setting would include those technologies that appeared on either list.

  • Single-panel, in-and-out system. This approach would use only a single panel that would generate two sets of technology lists. Those topics on the first list (e.g., the top 5 percent of the submitted nominations) would automatically go forward to the next step in the process. The bottom 50 percent of nominations would be excluded from further consideration. The remaining 45 percent (in this example) would go to a second-tier winnowing process that would consist of several more cycles of this process or an entirely different approach, such as a data-driven system.
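The set operations underlying the two double-Delphi variants above can be sketched as follows. The topic labels and panel selections are hypothetical:

```python
# Hypothetical panel selections (labels A-F stand in for nominated topics).
panel_1_top = {"A", "B", "C", "D"}
panel_2_top = {"B", "C", "E", "F"}

# Variant 1: keep only topics on BOTH panels' top lists (intersection).
keep_both = panel_1_top & panel_2_top

# Variant 2: each panel "keeps" a shorter list; retain topics on EITHER
# list (union).
panel_1_keep = {"B", "C"}
panel_2_keep = {"C", "E"}
keep_either = panel_1_keep | panel_2_keep

print(sorted(keep_both), sorted(keep_either))
```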

Secondary Winnowing Processes

Apart from whatever initial winnowing system is used, two other features can enhance any winnowing process. These include provisions for "arguing-in and arguing-out" and requesting or requiring supporting data.

  • Arguing-in and arguing-out. This tactic allows for an appeal or "re-hearing" to convince others in the process to include or exclude a candidate technology from the final list that will receive complete ranking.

  • Supporting data. To use this feature, TA program staff request (or require) organizations that nominate candidate technologies and conditions to submit data with their nominations or, at a minimum, references to relevant data; the objective is to obtain sufficient information to allow complete ranking of the technology. For example, submissions might include information on the prevalence of the condition, current costs of treatment, variability of use of the intervention in question, and so forth.

Either of these approaches could be used in combination with the chosen method of winnowing.

Step 4. Data Gathering

The fourth element of the IOM process is gathering data that the panels will use to assign criterion scores. This task first requires specifying the principal clinical conditions for each technology or the alternative technologies used for each clinical condition. The second step is to assemble the required data for each condition and each criterion.

Specifying Alternative Technologies and Clinical Conditions

After winnowing the initial list of candidate topics, TA program staff would specify all relevant alternative approaches for care of a given clinical condition. For example, if the clinical problem was "predicting the course of illness for men with chronic stable angina," the alternative technologies might include exercise stress electrocardiogram, stress thallium scintigraphy, echocardiography, and coronary angiography. For this task, OHTA staff should define clinical conditions to include the most important subgroups (as defined by age, gender, or clinical criteria). Framing the topic in this way must be done with care: an improperly defined clinical condition might undeservedly receive a low priority score or mistakenly receive a high one.

Staff Summaries Of Clinical Conditions

As a first step in assigning priority scores, OHTA staff would conduct a literature search for each candidate condition and technology to summarize for the panels the data they will need to assign a score to each priority-setting criterion. The panels would use the summaries to make subjective judgments; they would use the objective data (e.g., prevalence, costs, variation in practice) to assign scores to the priority-setting criteria.

Step 5. Creating Criterion Scores

General Points

Criterion scores (Sn in Equation 1) are of two kinds: objective and subjective. Where objectively measurable data (e.g., costs, prevalence) are available, the committee recommends using them. When no objective measure is available or a probability must be estimated, a panel can create subjective scores in the form of ratings. These distinctions are briefly elaborated here; a detailed discussion of the seven recommended criteria follows.

  • Objective Criteria. TA program staff would collect data for each of the conditions appearing on the list of candidate conditions or technologies. The units in which objective criteria are expressed must be consistent from condition to condition. For example, when counting the number of people affected, one must count "people with the illness," not "people treated for the illness," for all conditions. Similarly, when estimating per-capita spending, the measure must be dollars in every case, not dollars for some diseases and Relative Value Scale units in others. TA program staff should express prevalence as number of persons per 1,000 in the total U.S. population, even for those illnesses that affect only one segment of the population, such as women, a particular ethnic group, or children.

Good information will be available for some disease conditions (e.g., prevalence of lung cancer) but not for others (e.g., prevalence of hemochromatosis-related impotence). When objective data are not available or are conflicting, or when a criterion requires combining several measures with different units, the panel can use a formal group process to estimate missing information and resolve conflicting data; the IOM committee's pilot-test subcommittee did just that (see Appendix A).

Members of the panel engaged to assist with the objective criteria might be a subpanel of the criteria weighting panel. The subpanel would include epidemiologists, statisticians, health economists, and health care practitioners.

  • Subjective Criteria. The committee proposes using subjective estimates or ratings when no objective measure is available or when probabilities must be estimated. An example of a subjective criterion is the likelihood that health outcomes will change as a result of an assessment. A formal consensus process provides a good way to perform this estimation.

The panel engaged to assign subjective criterion scores would be constituted differently from the panels for creating the "objective criterion scores." The panel should be broadly representative and include a range of health professions as well as users of health care.

Each subjective criterion score can be represented by a rating on a scale of 1 to 5 (the length of the scale is arbitrary). If possible, the ends of the scale should be defined for each criterion. The panel for assigning scores for subjective criteria would use these scales to create "criterion scores" (ratings), which are inputs into the priority score calculation in the same way that objective data are inputs.

The magnitude of a topic's criterion score reflects the topic's priority for technology assessment. Scores greater than 1 increase a topic's priority for assessment. A score of 1 has no effect on priority, no matter what weight is chosen, because the natural logarithm of 1 is zero, and the contribution of the criterion to the priority score is obtained by multiplying the criterion weight by the natural logarithm of the criterion score.4 (Recall that the priority score is calculated as PS = W1lnS1 + W2lnS2 + ... + W7lnS7.)
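The committee's additive-logarithmic priority score can be sketched in a few lines of Python. Only the form of the equation (PS = W1lnS1 + ... + W7lnS7) comes from the committee's model; the weights and criterion scores below are invented for illustration.

```python
import math

def priority_score(weights, scores):
    """Compute PS = W1*ln(S1) + W2*ln(S2) + ... + Wn*ln(Sn).

    A criterion score of 1 contributes nothing (ln 1 = 0);
    scores above 1 raise the topic's priority.
    """
    if len(weights) != len(scores):
        raise ValueError("need one weight per criterion score")
    return sum(w * math.log(s) for w, s in zip(weights, scores))

# Hypothetical weights and criterion scores for one candidate topic.
weights = [1.0, 2.0, 1.5, 1.0, 2.5, 1.0, 1.0]
scores  = [3.2, 4.0, 2.5, 1.0, 5.0, 2.0, 1.0]  # the two scores of 1 add nothing

ps = priority_score(weights, scores)
```

Note that, as the text observes, a topic scoring 1 on every criterion would receive a priority score of exactly zero regardless of the weights chosen.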

There are two methodologic issues to be resolved in setting the upper and lower bounds for the subjective criterion scores: first, whether to set the bounds in a one-stage or a two-stage process; and second, whether scores must be comparable from one priority-setting cycle to the next or from one organization to the next.

4 The committee considered a symmetrical scale that would run from (for instance) 0.2 to 5 to allow the subjective criterion scores to lower the priority of a technology for assessment. Scores of less than 1 but greater than 0 would reduce a topic's priority score because the natural logarithm of a number less than 1 is negative. In a multiplicative scoring system, a criterion score of 0.5 (1/2) would reduce a priority score by the same proportion that a score of 2 increases it (e.g., the natural logarithm of 2 is 0.693, and the natural logarithm of 0.5 is -0.693). Similarly, scores of 0.333 (1/3) and 3 would have corresponding effects, as would scores of 0.25 (1/4) and 4, and of 0.2 (1/5) and 5. However, because the objective criteria, costs and prevalence, unlike the subjective criteria, cannot be negative, the committee decided to use a single positive scale that runs from 1 to 5 for all subjective criteria.

First, in a one-stage process, each panel member would independently choose the highest-ranking condition or technology and assign it a rating of 5; similarly, each member would choose the lowest-ranking topic and assign it a 1. Each member would then score the remaining topics on his or her own. The committee believed, however, that this task should be done in a two-stage process: after the panel as a group decides on the highest- and lowest-rated conditions or technologies, each panel member would then individually assign scores to the remaining topics.

Second, there are several alternative ways to define the ends of the scale for a subjective criterion. It is possible to anchor the ends of the scales independently of a particular set of topics to be assessed and of a particular technology assessment organization; such a system has the advantage of allowing consistency over time and from one organization to the next. The committee believed, however, that the need to spread ratings across the entire scale outweighed the possible virtues of comparability across organizations; it therefore recommended anchoring the scales with the highest- and lowest-rated topics each time priorities are established.

Criteria Recommended For The IOM Priority-Setting Model

The committee recommends seven criteria for use in its model (see Table 4.1). The first three criteria form a set that estimates the aggregate social burden posed by a candidate clinical condition. The first criterion considers the general population afflicted with the condition, that is, its prevalence. The second and third criteria consider the burden to the patient, or the burden of illness, and the economic burden, or costs.

The fourth criterion, variation in rates of use, addresses clinical practice and the possible role of uncertainty on the part of health care providers about the best way to manage the clinical problem.

The fifth, sixth, and seventh criteria also form a set. They consider the possible effect of the results of the technology assessment itself: whether the results of the assessment are likely to affect health outcomes, affect costs, or inform ethical, legal, and social concerns. These seven criteria are described in greater detail below.

Criterion 1: Prevalence

Definition: Prevalence is the number of persons with the clinical condition per 1,000 persons in the U.S. general population. This definition applies to assessments of a clinical condition and to assessments of a technology.


Comments. As applied to assessing a technology, this definition presupposes an assessment of the technology's application to relevant clinical conditions. If the technology is applied to more than one condition, prevalence should be the sum of the prevalence of the individual conditions, each weighted by the relative frequency with which the technology is used for that condition.
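The frequency-weighted sum described above can be sketched as follows. The condition names, prevalence figures, and use fractions are hypothetical.

```python
def weighted_prevalence(uses):
    """Prevalence for a multi-use technology: the sum of each condition's
    prevalence (per 1,000 U.S. population), weighted by the relative
    frequency with which the technology is used for that condition.

    `uses` maps condition name -> (prevalence_per_1000, use_fraction);
    the use fractions are expected to sum to 1.
    """
    total_fraction = sum(f for _, f in uses.values())
    if abs(total_fraction - 1.0) > 1e-6:
        raise ValueError("use fractions must sum to 1")
    return sum(p * f for p, f in uses.values())

# Hypothetical figures for a technology applied to three conditions.
uses = {
    "condition A": (20.0, 0.5),  # prevalence 20 per 1,000; half of all use
    "condition B": (5.0, 0.3),
    "condition C": (1.0, 0.2),
}
# weighted prevalence = 20*0.5 + 5*0.3 + 1*0.2 = 11.7 per 1,000
```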

To maintain consistent units for this criterion, which is one of the objective criteria in this process, the time frame for measurement must be the same for all topics. There are two alternative but equivalent ways to define prevalence and the other objective measure of social burden, the cost of care. In one, the time horizon is one year: prevalence is the number of cases per 1,000 persons in the U.S. general population, and costs are annual expenditures. In the other, the time horizon is the length of the illness: "prevalence," then, is the number of people who acquire the illness per year (in other words, the incidence of the condition), and costs are the lifetime costs of the condition. Table 4.3 indicates units for each criterion that are consistent for the two possible time horizons, one year (which uses actual prevalence) or lifetime (which uses incidence).

Table 4.3 Consistent Units for Criteria, by One-Year and Lifetime Time Horizons

| Criterion | One Year | Lifetime |
| --- | --- | --- |
| Prevalence | Prevalence | Incidence |
| Cost | Annual | Lifetime(a) |
| Variation in rates of use | Coefficient of variation | Coefficient of variation |
| Burden of illness | Change in quality-adjusted life days in the next year as a result of illness | Change in quality-adjusted life expectancy due to illness(a) |
| Potential of the results of an assessment to change health outcomes | Expected change in outcomes in the next year as a result of assessment | Expected change in outcomes over average patient's lifetime owing to assessment(a) |
| Potential of the results of an assessment to change costs | Expected change in costs in the next year as a result of assessment | Expected change in costs over average patient's lifetime as a result of assessment(a) |
| Potential of an assessment to inform ethical, legal, and social issues | Expected change in ELS(b) issues in the next year | Expected change in ELS issues in the next year |

(a) Requires a consistent discount rate.

(b) ELS = ethical, legal, and social issues.

Using either approach requires the analyst to determine the relevant denominator for estimating prevalence and to use that same denominator for all candidate topics. Because no particular age range, gender, or other characteristic of the population at risk is specified, all conditions and technologies share an equivalent basis for determining national priorities. Organizations thus should not define the denominator in terms of a particular population at risk, lest that condition receive too much weight relative to a condition whose prevalence is expressed in terms of the general population.

Determining the numerator for procedures and tests is another important methodologic issue. Whether the rate of testing is defined as the current rate or the projected rate may depend on the particular condition for which the technology is relevant. If the incidence of disease is changing rapidly, using projected rates may be appropriate. When evaluating a technology for which indications are changing, identifying the correct at-risk population is important. For instance, if erythropoietin were a candidate technology, assessors would need to determine whether the population of interest is all patients with anemia, those with anemias of chronic illness, or those with anemia due to renal failure. Prevalence must be expressed in terms of the general population to be consistent with the denominator and to maintain consistency among candidate topics.

Data Sources. These data can be found in Medicare or insurance company data files or in survey data compiled by the National Center for Health Statistics.

Criterion 2: Burden Of Illness

Definition. Burden of illness is the difference between the quality-adjusted life expectancy (QALE) of a patient who has the condition and receives conventional treatment and the QALE of a person of the same age who does not have the condition.

Comments. This definition applies to assessments of a clinical condition and to assessments of a technology. Although some data on mortality and morbidity are available, at present these data are seldom obtainable at the level of specificity needed; consequently, the panels will have to assign criterion scores by subjectively estimating the burden of illness of one candidate clinical condition as compared with the others. QALE is the product of life expectancy and quality of life. Examples are given in Figures 4.2 and 4.3.
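The QALE arithmetic behind this criterion can be sketched as below. The life expectancies and quality weights are hypothetical, and using a single average quality weight is a simplification of the year-by-year adjustment a full analysis would require.

```python
def qale(life_expectancy_years, quality_weight):
    """Quality-adjusted life expectancy as the product of remaining
    life expectancy and an average quality-of-life weight (0 to 1)."""
    return life_expectancy_years * quality_weight

def burden_of_illness(qale_without, qale_with_conventional_care):
    """Burden = QALE of a comparable person without the condition minus
    QALE of a patient with the condition under conventional treatment."""
    return qale_without - qale_with_conventional_care

# Hypothetical 50-year-old without the condition vs. one with it,
# managed with conventional treatment.
without_condition = qale(30.0, 0.95)   # 28.5 quality-adjusted years
with_treatment = qale(24.0, 0.80)      # 19.2 quality-adjusted years
burden = burden_of_illness(without_condition, with_treatment)  # 9.3 years
```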

The best measure of burden of illness is the change in quality-adjusted life expectancy attributable to a condition, because this unit of measure takes into account both mortality (shortened life expectancy) and morbidity (quality-of-life adjustment factors). As applied to assessments of a technology, the definition of burden of illness presupposes an investigation of the

Figure 4.2 Hypothetical example of burden of illness for a person without Type II diabetes and for individuals with untreated diabetes, with conventionally treated diabetes, and with new, beneficial treatment for diabetes. Given a specific QALE for a person without diabetes, the burden of illness is seen here as the difference in quality-adjusted life expectancy for a person with diabetes treated conventionally (not an untreated diabetic) and a comparable person of similar age who does not have diabetes. If the technology to be assessed is new (e.g., continuous subcutaneous insulin), the compromise in quality of life due to diabetes would be estimated for patients managed without the new technology but with conventional technology (e.g., once-daily insulin therapy and diet); similarly, if the technology to be assessed is an established one, the QALE would include that technology (e.g., once-daily insulin therapy and diet).


Figure 4.3 Hypothetical example of burden of illness over time for a person with asymptomatic biliary disease, acute gallstone attack, and surgically treated biliary disease. Here, the burden of illness for persons suffering from this acute condition (gallbladder disease) would include (1) measures of pain or other symptoms during a symptomatic period, (2) pain at the time of an acute gallstone attack, (3) the burden of surgery, (4) the burden of hospitalization and recovery, and (5) the postrecovery period. All measures are averaged over a year (or a lifetime, if that time horizon is used). The conventional treatment would be standard surgical treatment (e.g., open or laparoscopic cholecystectomy, depending on which is considered "standard").

technology's application to relevant clinical conditions. If the technology is appropriate for more than one condition, the burden of illness could be expressed as the sum of the burden of illness scores of the individual conditions, each weighted by the relative frequency with which the technology is used for that condition.

Burden of illness here is expressed at the level of an individual patient, albeit for the "typical" patient, not as the aggregate burden of illness over the entire nation. The latter is a function of another priority-setting criterion: the prevalence of the condition.

For technologies, the burden of illness is that caused by all (or the most important) conditions for which the technology is used in medical practice. For example, if the topic of assessment is computed tomography (CT) of the chest and abdomen, the change in QALE would be the sum of the changes in QALE of conditions that can be diagnosed by this type of CT scan, each weighted by the relative frequency with which the technology is used for that indication. In the case of a technology that is to be assessed for a single use, such as CT scan for gallbladder disease, the burden of illness would be the burden for gallbladder disease of patients managed without CT scans compared with patients of a similar age without gallbladder disease.

Induced Suffering. In most illnesses, the patient bears the brunt of the suffering. In illnesses such as substance abuse, however, other people are often victims of crime, assault, and motor vehicle accidents attributable to the patient. This induced suffering is important in assessing the societal importance of a clinical condition. For example, for each alcohol-involved driver who dies in a vehicle crash, other lives are also lost (statistically, an additional 0.7 person dies along with the alcohol-involved driver; Phelps, 1988). The life-years lost by the driver count as a "direct" burden of the alcohol consumption; the life-years lost by the additional 0.7 person count as an "indirect" burden of illness and could be converted to a quality-adjusted measure of life expectancy. The committee recommends including in estimates of the burden of suffering for a clinical condition the suffering experienced by the victims of a patient's illness.

Instructions. For each condition and technology, it will be necessary to make a subjective judgment that takes into account mortality, morbidity, and health-related quality-of-life data in trying to estimate quality-adjusted life expectancy. The first step is to identify the technology with the highest burden of illness and assign it a scale score of, say, 5. The second step is to identify the technology with the lowest burden of illness and assign it a scale score of, say, 1. The third step is to assign intermediate scale values to the other listed conditions and technologies.

Data Sources. The data used to develop scores are mortality and morbidity data and health status measures, when available. Data on the loss of quality-adjusted life expectancy from all medical conditions are not sufficient to estimate burden of illness as defined by the IOM committee for all candidate topics; as a result, the panels must use surrogate measures. The Centers for Disease Control publishes information on years of productive life lost for some conditions; the IOM committee believes that extending these data to all conditions should have high priority. Because these data are not, at present, widely available, estimates are likely to be based on currently available data on mortality, morbidity, and functional status measures. For instance, Stewart and coworkers (1989) and Wells and colleagues (1989) have demonstrated health status "profiles" in terms of physical, social, and role functioning and well-being for nine chronic conditions and for depression as part of the Medical Outcomes Study. Condition- or age-specific measures have also been reported for conditions such as asthma, diabetes, and chronic obstructive pulmonary disease, for children, and for patients receiving outpatient renal dialysis (Medical Care Supplement, forthcoming); other measures are being developed by patient outcomes research teams. As health status measures become more available, this criterion will become increasingly data based.

Criterion 3: Cost

Definition: Cost is the total direct and induced cost of conventional management per person with the clinical condition. This definition applies to assessments of a clinical condition and to assessments of a technology.

Comments. As applied to assessing a technology, this definition presupposes an assessment of the technology's application to one specific clinical condition. If the technology is applied to more than one condition, costs are calculated as the sum of the costs of the individual conditions, each weighted by the relative frequency with which the technology is used for that condition.

Costs may be defined as annual costs or lifetime costs, depending on the time horizon, but the definition must be consistent with the definition of prevalence, as noted in the "Comments" on the preceding criterion (see also Table 4.3). The ideal solution is to use lifetime costs. In most cases, however, the lack of data on lifetime costs and on the natural history of a clinical condition means that annual costs must be used.
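As the footnote to Table 4.3 notes, lifetime figures require a consistent discount rate. A minimal sketch of discounting a stream of annual costs to a present value follows; the three-year cost stream and the 5 percent rate are hypothetical.

```python
def discounted_lifetime_cost(annual_costs, discount_rate):
    """Present value of a stream of annual costs over the course of an
    illness, discounted at a consistent yearly rate (year 0 undiscounted)."""
    return sum(cost / (1.0 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

# Hypothetical three-year course of illness at a 5% discount rate.
costs = [10_000.0, 6_000.0, 6_000.0]
pv = discounted_lifetime_cost(costs, 0.05)
# = 10,000 + 6,000/1.05 + 6,000/1.05**2, roughly 21,156
```

The same discounting would apply to lifetime changes in outcomes or in ELS effects whenever the lifetime horizon is chosen.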

As defined by the committee, total cost does not include indirect costs, such as time lost from work because of illness or as a result of obtaining medical care. Indirect costs are not included because they are a part of the measure of burden of illness.

Total cost does include expected cost, which takes into account the unpredictable consequences of a clinical condition. The expected cost of an event is the product of the probability of the event and its cost. For instance, for the clinical condition "chest pain due to ischemic heart disease" (angina pectoris), the expected costs would include the possibility of suffering a myocardial infarction in addition to the known or already experienced effects of the angina. In other words, expected cost is the sum of the costs of the most important consequences of angina pectoris, each weighted by the probability that it will occur.
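The expected-cost calculation just described can be sketched as follows. The probabilities and dollar figures are illustrative, not clinical estimates.

```python
def expected_cost(base_cost, contingencies):
    """Expected annual cost per patient: the known cost of conventional
    management plus each possible event's cost weighted by its probability."""
    return base_cost + sum(p * cost for p, cost in contingencies)

# Hypothetical figures for chronic stable angina.
angina_management = 2_000.0  # known annual cost of conventional care
events = [
    (0.03, 40_000.0),  # illustrative 3% chance of myocardial infarction
    (0.10, 8_000.0),   # illustrative 10% chance of hospitalization
]
# expected cost = 2,000 + 0.03*40,000 + 0.10*8,000 = 4,000 per patient-year
```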

Induced Costs. Sometimes medical conditions or events create externalities that impose costs on others. Total costs comprise the costs induced by the condition (including its impact on people other than the patient) as well as the costs directly attributable to the clinical condition. In the IOM committee's approach, the costs of such externalities should be added, on a per-disease basis, to direct costs, just as induced burdens ("induced suffering") are added to the burdens of illness in criterion 2. These induced costs and burdens are likely to occur most often for contagious diseases or for medical conditions that contribute to the occurrence of "accidents," interpersonal violence, and so forth. Costs associated with the suffering of victims of crime, assault, and motor vehicle accidents attributable to the patient are important in assessing the societal importance of a clinical condition. The committee recommends including these costs when they constitute a significant proportion of the total costs of a clinical condition.

Data Sources. Data are available from HCFA files on hospital payments aggregated by diagnostic groups and on paid and reimbursed amounts for Medicare Part B (e.g., physicians' services). Charges may also be obtained from insurance company and state data bases and from publications of the National Center for Health Statistics (e.g., Vital and Health Statistics). Required data, however, may not be available in the form needed or may not be available at all; new data sources may be required. The true costs of production often are not available. Because many health care delivery systems have complex accounting and financing systems that depend on discounting and cost-shifting, the use of charges must be accepted as a tenuous, but often necessary, proxy for costs. A further complication is that different organizations may not use the same accounting assumptions. Although obtaining accurate data on costs appears to be complex, the problem can and should be solved, not only for the purposes of priority setting but also for planning for health care systems of the future.

Criterion 4: Variation In Rates Of Use

Definition: Variation in rates of use is the coefficient of variation (the standard deviation divided by the mean).

Comments. The purpose of this criterion is to measure the degree of consensus about appropriate management. The premise of practice variation research is that patients are the same across the compared units. Thus, a large coefficient of variation in use rates implies a low level of consensus on appropriate management but may also reflect the availability of technology and health care financing.5 A low level of consensus may mean that there is great benefit from doing an assessment that might lead to a higher level of consensus.

Instructions. The biggest challenge to using this criterion is to define the most relevant units for measuring variation: these might be rates of hospital admission for a condition, rates of performing a procedure for a given condition, or rates of performing a diagnostic test for a given condition.

Data Sources. For this criterion, TA program staff would assemble data on variations in per-capita use rates across different venues of care. Comparisons of per-capita use rates may be among small geographic areas, among nations, or even among different methods of paying for health care. Staff would gather the data using Medicare files, insurance company claims, or state data files. For a number of procedures and services, coefficients of variation for small geographic areas are already available in the health services research literature.
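Once per-capita rates are in hand, the criterion itself is mechanical to compute. A sketch, with hypothetical area rates (whether to use the population or the sample standard deviation is an assumption here; either choice should be applied uniformly):

```python
import statistics

def coefficient_of_variation(rates):
    """CV = standard deviation of per-capita use rates divided by their mean.
    Population standard deviation is used in this sketch."""
    mean = statistics.fmean(rates)
    if mean == 0:
        raise ValueError("mean use rate must be nonzero")
    return statistics.pstdev(rates) / mean

# Hypothetical per-capita procedure rates for five geographic areas.
area_rates = [3.1, 5.4, 2.2, 7.8, 4.0]
cv = coefficient_of_variation(area_rates)  # larger CV suggests less consensus
```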

Criterion 5: Potential Of The Results Of An Assessment To Change Health Outcomes

Definition: An assessment's potential to change outcomes is the expected effect of results of the assessment on the outcome of illness for patients with the illness.

Comments. The expected effect of an assessment on patient outcomes is the probability that the assessment will affect outcomes multiplied by the magnitude of the anticipated effect. Using the expected effect takes into account both the size of an effect and the likelihood that it will occur. Panel members derive a score by estimating the probability that the assessment will lead to a change in quality-adjusted life expectancy.

The expected effect on patient health outcomes can be either beneficial or deleterious. The committee believes that the absolute value of a change, not its direction, is the important attribute when ranking a clinical condition or technology for assessment.

5 Although availability of technology and health care financing may contribute to variation in use rates, their contribution has not been demonstrated convincingly in the literature. For example, regional differences in insurance coverage in the United States cannot add more than about 0.02 to a coefficient of variation (Phelps, forthcoming; Phelps and Mooney, 1991). Further, variations in Britain and Canada are similar in magnitude and pattern to those in the United States, despite the differences in financing.

Estimating this criterion is not a simple matter. When assigning a criterion score, the panel needs to take into account (1) the possible results of the assessment (will it show that one of the patient management strategies leads to a large change in outcomes?), (2) the likelihood that administrators, payers, and policymakers will use the findings for decision making, (3) the likelihood that clinicians will modify their practices, and (4) the likelihood that patients will accept the change.

Instructions. Criterion 5 is measured on a subjective 1-to-5 scale. In practice, the panel would identify the condition or technology whose assessment has the highest potential to change health outcomes and assign it a scale score of 5. Panel members would also vote on a condition or technology whose assessment has the lowest potential to change health outcomes and assign it a score of 1. TA program staff would then count the votes and identify the panel's choice of the conditions or technologies for which the results of the assessments would be most likely and least likely to affect patient outcomes. Subsequently, individual panel members would assign intermediate scale values to the other technologies, and program staff would calculate the mean scale value of each candidate topic.
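The tallying described here is mechanical once votes and ratings are collected. A sketch with invented topic names, votes, and ratings:

```python
from collections import Counter
from statistics import fmean

def anchor_by_vote(votes):
    """Identify the panel's anchor topic as the one receiving the most votes
    (e.g., for 'highest potential to change health outcomes')."""
    return Counter(votes).most_common(1)[0][0]

def mean_scale_values(ratings_by_member):
    """Average each topic's 1-to-5 ratings across panel members."""
    topics = ratings_by_member[0].keys()
    return {t: fmean(member[t] for member in ratings_by_member)
            for t in topics}

# Hypothetical votes for the top anchor, then individual member ratings.
votes = ["topic A", "topic B", "topic A", "topic A", "topic C"]
ratings = [
    {"topic A": 5, "topic B": 3, "topic C": 1},
    {"topic A": 5, "topic B": 4, "topic C": 2},
    {"topic A": 5, "topic B": 2, "topic C": 1},
]
top_anchor = anchor_by_vote(votes)   # the panel's most-voted topic
means = mean_scale_values(ratings)   # mean scale value per candidate topic
```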

Because the estimate should encompass the population for which the technology will be used, the TA staff's background briefing must specify that population. For example, for testing or screening, the population is that group to which the test is applied, not those who actually benefit. The population must be the same as the one used to estimate the burden of illness.

Data Sources. This is a subjective criterion.

Criterion 6: Potential Of The Results Of An Assessment To Change Costs

Definition: An assessment's potential to change costs is the expected effect of the results of the assessment on the costs of illness for patients with the illness.

Comments. The expected effect of the results of an assessment on costs is the probability that the results of the assessment will affect costs multiplied by the magnitude of the anticipated effect. Using the expected effect takes into account both the size of an effect and the likelihood that it will occur.

The expected effect of an assessment can be either beneficial or deleterious; that is, an assessment may lead to large decreases or to large increases in cost. The absolute value of a change, not its direction, is the important attribute when ranking a clinical condition or technology.

Instructions. Criterion scores are assigned using the method described for criterion 5.

Data Sources. This is a subjective criterion.

Criterion 7: Potential of the Results of an Assessment to Inform Ethical, Legal, and Social Issues

Definition: The potential to resolve ethical, legal, and social (ELS) issues is the probability that the results of an assessment comparing two or more conditions or technologies will help to inform an important ELS issue.

Comments. This seventh criterion gives panelists an opportunity to take a broad social perspective and to ask whether there is anything about this particular condition or technology that has not been captured in the first six criteria and that warrants an assessment. The expected effect of the results of an assessment on ethical, legal, and social issues is the probability that the assessment will affect the issues multiplied by the magnitude of the anticipated effect. The expected effect of the results of an assessment can be either beneficial or deleterious. The committee believes that the absolute value of a change, not its direction, is the important attribute when ranking a clinical condition or technology for assessment.

Instructions. Each panel member would select a scale score from 1 to 5, which would express the probability that the results of an assessment will provide information about an important ethical, legal, or social issue, multiplied by a subjective estimate of the size of the effect, if there is an effect. The committee believes that panelists will usually assign a technology or clinical condition a scale score of 1, or close to 1. It identified three categories of questions to help in estimating this criterion score:

1. "Orphan" issues. Does the panel member believe that the development of information about the care of this condition has lagged because the condition affects only a very small number of individuals? If so, would the results of an assessment reduce this gap in information? An example might be gene therapy for a particular type of hereditary anemia. If this topic does not achieve a high priority score based on prevalence (as would surely be the case), it might still achieve high priority on the basis of the ELS criterion if the panel believes that concerns about gene therapy in general are significant.


2. Inequity. Does the panel member believe that services are inequitably distributed among persons with this condition and that this maldistribution might be reduced by information from technology assessment? For example, if a screening test is covered by private but not public insurance, would information from an assessment showing it to be very cost-effective be likely to lead to coverage by public programs?

3. Legal and legislative controversy. Does the panel member believe that an important legal or legislative controversy might alter existing clinical practice or coverage policy? If so, could the controversy be resolved through information from technology assessment? For example, a pending legal case about coverage for autologous bone marrow transplantation might be resolved by an assessment, thereby avoiding lengthy legal proceedings. In another example, if an assessment showed that breast cancer screening was not cost-effective, the assessment might lessen pressure for state legislation mandating coverage.

To assign a criterion score, each panel member would consider the ELS issues for each candidate condition or technology and determine a score as follows, depending on his or her response to the issues and questions described above:

  • a score of 1 corresponds to "no" (i.e., no important ELS issues are likely to be resolved);

  • a score of 5 corresponds to an intense "yes" (i.e., important ELS issues are likely to be resolved).

All panelists must have access to the same list of possible ELS issues that an assessment might resolve. A two-stage process could be used to produce such a list. In the first stage, each panelist would write down all of the ELS issues that came to mind. If the panel is actually meeting, it could discuss each issue; if the rating process is to be conducted by mail, staff would compile a list of ELS issues and send it to the panel members. In the second stage, the panelists would individually assign an ELS scale score to each condition or technology. The panel would discuss any condition for which the range of scores is greater than 2 scale points (where the range is defined as the difference between the highest and lowest score given to that condition), and panelists would have an opportunity to revise their scores. The first-round scoring can proceed quickly, as most of the conditions or technologies will be rated as 1. Discussion could be limited to those issues with a score response range of at least 2 points.
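The two-stage rating rule lends itself to a small helper for program staff. A sketch follows (the conditions and scores are invented; the "greater than 2 scale points" rule is taken literally as a strict inequality):

```python
def needs_discussion(scores, threshold=2):
    """True when panelists' scores span more than `threshold` scale points
    (range = highest score - lowest score)."""
    return max(scores) - min(scores) > threshold

# Invented first-round ELS scores from five panelists.
first_round = {
    "condition A": [1, 1, 1, 2, 1],   # near-consensus; no discussion needed
    "condition B": [1, 4, 2, 5, 1],   # wide disagreement; discuss and re-rate
}
to_discuss = [c for c, s in first_round.items() if needs_discussion(s)]
```

Only condition B, with a range of 4 scale points, would be flagged for panel discussion before second-round scoring.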

Some committee members recommended that a panelist's final ELS score for a condition or technology be the highest score given for any of the three categories; others preferred to use the average score. In either case, the final criterion score is the mean of all panelists' scores (either highest or average) and can range from 1 to 5.
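Both aggregation options the committee discussed can be expressed directly. A sketch (function names and example scores are illustrative):

```python
from statistics import mean

def panelist_els_score(category_scores, rule):
    """One panelist's ELS score across the three question categories
    (orphan issues, inequity, legal/legislative controversy)."""
    return max(category_scores) if rule == "highest" else mean(category_scores)

def final_els_score(panelist_scores):
    """Final criterion score: the mean of all panelists' scores (1 to 5)."""
    return round(mean(panelist_scores), 2)

# A panelist rating the three categories 1, 1, and 4 gets an ELS score
# of 4 under the "highest" rule but 2.0 under the "average" rule.
by_highest = panelist_els_score([1, 1, 4], rule="highest")
by_average = panelist_els_score([1, 1, 4], rule="average")
```

The choice of rule matters most when a condition raises a single strong ELS concern, as in this example.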

Criteria Rejected by the Committee

The committee considered but rejected many topics as criteria for assessment or reassessment. Two bear special mention because of their inclusion in the IOM/CHCT pilot study. The first—likely enhancement of national capacity for technology assessment—is a useful and desirable secondary effect, but the committee did not consider it central to priority setting. The second—the availability of sufficient data to complete the assessment— seems at first to be a reasonable criterion. However, the committee believes that if a condition merits assessment on the basis of other factors, the response to lack of data ought to be to set in motion some process that would yield the needed data (see further discussion of this issue in Chapter 5). The one exception might be a technology that is too new to be assessed.

Step 6. Computing Priority Scores

The sixth element of the IOM process is calculation of priority scores. Once criterion scores and weights are assembled, the priority score for each condition or technology can be computed by combining the objective and subjective criterion scores. Priority scores for each condition or technology are derived from the data for the objective criteria and the scale scores for the subjective ratings, each adjusted by the weight given to each criterion. Once priority scores have been calculated, TA program staff list the candidate technologies and conditions in the order of their priority scores. Higher scores will be associated with conditions and technologies of higher priority.

The formula for calculating the priority score is the sum of the natural logarithms of the criterion scores, each weighted by the importance of its criterion. The formula for the priority score for condition or technology j, then, is:

    Priority Score_j = Sum over i of W_i(ln S_ij),   i = 1, ..., 7,

or its mathematically equivalent forms,

    Priority Score_j = W_1(ln S_1j) + W_2(ln S_2j) + ... + W_7(ln S_7j)
                     = ln(S_1j^W_1 × S_2j^W_2 × ... × S_7j^W_7),

where W_i is the weight for each criterion described in Step 1 of this process and S_ij is the score of condition or technology j on criterion i. Two illustrative calculations are shown in Table 4.4.

Table 4.4 Calculation of Two Examples of the Priority Score

                                                   Example 1:              Example 2:
                                                   Cardiac Condition       Acute Surgical Procedure
Criterion                           Weight (W)     Score (S)   W(ln S)     Score (S)   W(ln S)
Prevalence                          1.6            30a         5.44        100a        7.37
Burden of illness                   2.25           4.3         3.28        1.7         1.19
Cost                                1.5            9,000b      13.66       1,800b      11.24
Variation in use ratesc             1.2            0.36        -1.23       0.17        -2.13
Potential of the results of
  assessment to change
  health outcomes                   2.0            3.2         2.33        3.7         2.62
Potential of the results of
  assessment to change costs        1.5            4           2.08        2           1.04
Potential of the results of an
  assessment to inform ELS issues   1.0            2.0         0.69        1           0
Priority score                                                 26.25                   21.34

a Number/1,000 in U.S. general population.
b Annual cost (in dollars).
c Coefficient of variation.

Priority scores for the entire list of candidate topics are easily calculated with a spreadsheet program, using as input the mean criterion scores.
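The same spreadsheet calculation can be sketched in a few lines of Python. The weights and criterion scores below are taken from Table 4.4; the short criterion names are illustrative shorthand:

```python
import math

WEIGHTS = {  # criterion weights W from Table 4.4
    "prevalence": 1.6, "burden": 2.25, "cost": 1.5, "variation": 1.2,
    "outcomes": 2.0, "cost_change": 1.5, "els": 1.0,
}

def priority_score(scores, weights=WEIGHTS):
    """Sum over criteria of W * ln(S); criterion scores must be positive,
    which the 1-to-5 scales and the rate data guarantee."""
    return sum(w * math.log(scores[c]) for c, w in weights.items())

cardiac = {"prevalence": 30, "burden": 4.3, "cost": 9000, "variation": 0.36,
           "outcomes": 3.2, "cost_change": 4, "els": 2.0}
surgical = {"prevalence": 100, "burden": 1.7, "cost": 1800, "variation": 0.17,
            "outcomes": 3.7, "cost_change": 2, "els": 1}

print(round(priority_score(cardiac), 2))    # 26.25, as in Table 4.4
print(round(priority_score(surgical), 2))   # 21.34
```

The cardiac condition outranks the surgical procedure, matching the priority-score row of the table.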

One can convert the log-additive model to the multiplicative model by taking the antilog, that is,

    exp(Priority Score_j) = S_1j^W_1 × S_2j^W_2 × ... × S_7j^W_7,

where exp(y) equals e raised to the power y.

Derivation of the Model

As shown above, the multiplicative model becomes additive when one takes its logarithm. The committee adopted a multiplicative model for priority setting because such models exhibit a number of desirable characteristics in comparison with additive models. In multiplicative models, both the rank order and the relative size of the priority scores of various medical interventions are preserved regardless of the scale of measurement of the criterion scores. Thus, for example, it does not matter whether prevalence is measured in cases per 1,000 or cases per 100,000 in the general population, as long as the same unit of measurement is used for every technology and condition being assessed. (A shift from measuring prevalence in cases per 1,000 to cases per 100,000 would cause that particular criterion score to rise by a factor of 100 for every intervention; the overall priority score for every intervention would rise by 100 raised to the power of the relevant criterion weight [e.g., 1, 0.5, 2, or whatever weight had been applied to that criterion score in the model]). Such changes shift the magnitude of every score by an equal amount, and hence do not alter ranks or relative sizes of scores. The change is similar to counting the size of the national debt in dollars, pennies, or billions of dollars; the units of measurement do not change the actual size of the debt.

Similarly, in terms of the subjective components of the priority system (e.g., the ELS score), it will not matter whether the minimum score can be 0.01, 0.5, 1, 10, or some other number; as long as all scores are set relative to the smallest score used, the relative ranking is preserved. Thus, a criterion with a weight of 1 should be defined as twice as important as a criterion with a weight of 0.5 in the same way that a criterion with a weight of 2 should be twice as important as a criterion with a weight of 1. The magnitude of the minimum ELS score is immaterial as long as the ELS panel consistently assigns criterion scores for other conditions and technologies relative to the smallest score assigned. Appendix 4.2 to this chapter discusses scaling and transformation issues in more detail.
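The invariance argument can be checked numerically with the two-criterion model used in Table 4.5 (weight 2 on unit cost, weight 3 on prevalence). A sketch:

```python
import math

def priority(unit_cost, prevalence, w_cost=2, w_prev=3):
    """Two-criterion model: W_cost * ln(S_cost) + W_prev * ln(S_prevalence)."""
    return w_cost * math.log(unit_cost) + w_prev * math.log(prevalence)

# Conditions A, B, C at $100 unit cost; prevalence expressed per 1,000
# persons and then per 100,000 persons (a factor-of-100 rescaling).
per_1000 = [priority(100, p) for p in (1, 10, 100)]
per_100000 = [priority(100, p) for p in (100, 1_000, 10_000)]

# Rescaling adds the same constant, W_prev * ln(100), to every log-additive
# score, so ranks and score differences are untouched.
shift = 3 * math.log(100)
```

Every score rises by the same constant (about 13.8, i.e., 3 ln 100), leaving the ordering of the three conditions unchanged; after taking the antilog this constant shift becomes the constant multiplicative factor described in the text.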

Readers will observe that the formula for calculating the priority score corresponds to the conventional understanding of the social impact of disease, which is seen as the product of the number of people with the illness times the burden per person times the cost per person, and so forth. Moreover, as the committee recognized and as is seen in Table 4.4, prevalence and cost, because they are expressed as real rates rather than as subjective scores, may tend to dominate the final value for the priority score unless higher weights are given to the other criteria.

In sum, the model yields a constant relative rank ordering regardless of the units in which the criterion scores are expressed. The same is true for the magnitude of the priority score for a condition or technology relative to all others. Table 4.5 illustrates this point using a model with two criteria, cost and prevalence, applied to three conditions (A, B, and C).

Table 4.5 Example of Priority Scores Obtained Using Prevalence Expressed as Cases per 1,000 and per 100,000 Persons in the General Population

                           Cases/1,000 Persons/Year         Cases/100,000 Persons/Year
Condition    Unit Cost     Prevalence    Priority Score     Prevalence    Priority Score
A            $100          1             9.2                100           23.0
B            $100          10            16.1               1,000         29.9
C            $100          100           23.0               10,000        36.8

Note: In this example, all three conditions have the same unit cost ($100) but differ in prevalence. If one applies a model with two criteria (cost and prevalence) to three conditions (A, B, and C), and assigns a weight of 2 to unit cost and a weight of 3 to prevalence, the priority score is calculated as

    Priority Score = 2(ln S_cost) + 3(ln S_prevalence),

where S is the criterion score.

Determining Whether Assessment Is Desirable and Feasible

The next step in the priority-setting process is to decide whether a highly ranked candidate topic should be assessed by OHTA and whether enough information exists to perform the assessment. Two circumstances would argue for deferring technology assessment for a given condition or technology despite a high priority score: (1) another organization with a record of performing rigorous and credible assessments has recently completed or has an assessment under way; and (2) there is insufficient high-quality clinical and scientific information about the technology to conduct an assessment. One important task for OHTA staff will be to obtain information about the activities of other organizations that do technology assessment. Obtaining such information will require some form of network through which organizations can share information about their current activities. Familiarity with the published literature is another important responsibility of OHTA staff and will require searching the literature for relevant material and evaluating the usefulness of such articles.

When the information on which to base an assessment is too weak to support it, OHTA might choose to issue an interim statement to the effect that data are unavailable for assessment of the condition or technology. One function of such a statement would be to call for action (e.g., funding for extramural research) to eliminate the information gap.

Indeed, the committee urges that all candidates for assessment be assigned priority scores, even when the staff or panels realize at an early stage in the priority-setting process that the data for an assessment are not available, because a high priority score for a candidate could help to shape the nation's research agenda. This discussion is continued in Chapter 5.

Step 7. Review by AHCPR National Advisory Council

The seventh and final element of the IOM priority-setting process is review. The IOM committee recommends that there be independent, broad-based oversight of the priority-setting process, preferably through the AHCPR National Advisory Council. After taking the council's advice into account, the administrator of AHCPR would publish a list of the agency's priorities for assessing medical technology.

The AHCPR National Advisory Council advises the Secretary of Health and Human Services and the Administrator of AHCPR; its charge includes "making recommendations to the Administrator regarding priorities for a national agenda and a strategy for (A) the conduct of research, demonstration projects, and evaluations..." (Public Law 101-239, SEC 921,b,2,A). The Council meets three times a year. Apart from ex officio members, it includes eight individuals distinguished in the conduct of research, demonstrations, and evaluations; three from the field of medicine; two from the health professions; two from business, law, ethics, economics, or public policy; and two representing the interests of consumers of health care. The legislation also calls for establishment of a subcouncil (the Subcouncil on Outcomes Research and Guideline Development); currently, an additional subcouncil on general health services research and technology assessment functions as well.

The AHCPR National Advisory Council has authority to review and recommend adjustment of the results of peer review study sections that review grant proposals for extramural research funding; it can raise the standing of a grant proposal that does not score high enough to receive funding. Reviewing and, if warranted, recommending the adjustment of priority rankings for technology assessment would be an analogous function in the sphere of OHTA's work.

This committee recommends that the AHCPR National Advisory Council be involved in review of the results of the priority-setting process. A major outcome of such involvement would be to lend credibility and political support to the priority-setting process. The council could perform other functions as well. For example, it could group the priority scores into categories, such as ''most important to assess," "very important to assess," and "low priority for assessment.'' Within these categories, appropriate designations can indicate items that were borderline in terms of the group into which they fell. This form of categorization according to priority score would allow a "softening" of the numerical priority score to prevent the process from being seen as more precise than it actually is. After reviewing the priority list, the AHCPR Council could, on the basis of its own deliberations, recommend changes in the priority ranking of individual items in the interests of balance—for example, balancing technologies that are chiefly used for one age or for another demographic group; balancing interventions for preventive, diagnostic, and therapeutic procedures; or balancing technologies used in various settings of care.

Similarly, the Secretary of Health and Human Services and the AHCPR administrator may need to preempt the process or adjust the priority rankings. There may be rare circumstances in which the national interest will dictate that the priority-setting process be set aside for the sake of a compelling issue of public importance that the formal criteria do not capture. Building these functions of the council, administrator, and secretary into the priority-setting process is an important precaution against too mechanized an approach. A balance needs to be maintained between a systematic, logical, tamper-proof process and an approach that is flexible enough to have credibility and to serve the national interest when circumstances so dictate.

REASSESSMENT

Role of Reassessment in the Complete Assessment Program

Is the process of assessing a technology or condition for the first time different from the process of reassessing a topic that has been previously considered? The committee believes that these processes should be fundamentally similar. Moreover, for OHTA at least, a single budget allocation covers evaluation of health technology—whether for the first time or as a reassessment. Therefore, the committee recommends that only one process for setting priorities for technology assessment be invoked.

Operationalizing this process means that conditions and technologies that have never been assessed by OHTA will compete for priority with topics that OHTA assessed at an earlier time. The panel should apply the same priority-setting criteria to candidates for a first-time assessment and candidates for reassessment. The committee also believes, however, that OHTA has a special obligation to consider previously assessed topics as candidates for re-evaluation. There are several reasons for this view.

First, and foremost, OHTA assessments (although not formal recommendations to the Health Care Financing Administration, or HCFA) are a matter of public record. These assessments may carry considerable weight among payers, physicians, and even patients. Therefore, it is important that these opinions reflect current knowledge. If more recently available information might invalidate an earlier OHTA recommendation, OHTA must decide whether it is necessary to reconsider the evidence by reassessing the condition or technology. For instance, one or more newly published journal articles may contain information that sheds new light on the performance of a previously assessed technology, or a study of a new, competing technology may appear in the literature. Another impetus for reassessment might be the occurrence of a serious epidemic that raises the prevalence of a disease to the point where guidance for using a technology may require revision (Box 4.1).

Second, OHTA may itself acquire advance knowledge of information that might lead it to consider reassessment. For instance, during a first-time assessment, OHTA may know that circumstances are likely to change within some reasonable, known time frame and that such a change would warrant consideration of reassessment. Similarly, OHTA may be aware that pivotal clinical trials or effectiveness studies are under way or about to be started, or staff may learn of new technologies that are being tested outside government circles. Alternatively, AHCPR may provide funding for studies (e.g., the Patient Outcomes Research Team investigations) to generate new knowledge, perhaps on topics that had been brought into the foreground initially by an OHTA assessment.

Box 4.1 Events That Might Trigger Reassessment

  • A change in the incidence of a disorder (or its prevalence, if the condition is chronic) or in the degree of infectiousness of a biological agent

  • A change in professional knowledge or clinical practice, including a recent rapid change in utilization and increased variability in the use of a given technology

  • Publication of new information about a technology that suggests a change in its performance or cost

  • The introduction of a new competing technology

  • A proposal to expand the use of the treatment to populations not included in the original assessment (e.g., expanding breast cancer screening to women aged 40 to 49 when earlier work focused only on women aged 50 and older)

  • Publication by another organization of a high-quality, conflicting assessment.

Third, for OHTA, conducting a reassessment may be more efficient than performing an initial assessment. Both the method and process used in an initial assessment may still be applicable, and many of the original data sources may still be useful. In these circumstances, the greater ease and lower cost of a reassessment may make it an attractive choice and may raise its priority standing.

Fourth, because a topic cannot be reassessed unless it has received a high enough priority score to warrant a first-time assessment, topics that are chosen for reassessment are likely to be important by definition.

In sum, although previously assessed technologies and conditions should compete for available assessment monies on fundamentally the same criteria that are used to determine first-time assessments, the committee concluded that OHTA has an obligation explicitly to consider previously examined topics.

Methods of Identifying Candidates for Reassessment

The IOM committee recommends a four-step process for considering a previously assessed topic for reassessment (Figure 4.4): (1) tracking of topics of prior assessments, (2) evaluation of the quality of those studies that suggest that reassessment might be needed, (3) panel review to decide if changes in a given technology or clinical practice seem to warrant reassessment, and (4) placement of the topic on the candidate list for ranking with candidates for first-time assessment.

Ongoing Tracking of Events Related to Previously Assessed Topics

Stated Time of Review for First-time Assessments. Both at the time of an initial assessment and at the time of any subsequent assessment, OHTA should explicitly state whether a reassessment is likely to be needed and when it expects that circumstance to occur. Early reassessment might be necessary in fields that are changing rapidly or when a clinical trial is completed. Often, however, OHTA will assess relatively stable, mature technologies. When setting priorities, OHTA should informally review all previously assessed conditions and technologies and decide whether newly emerging information about any topic might indicate a need for reassessment.

Figure 4.4 Proposed process for reassessment.

Catalog of First-time Assessments. OHTA currently provides information about its assessments in individual Health Technology Assessment Reports. To document events that might apply to previously assessed topics, the committee strongly recommends that OHTA create a separate catalog of its previous assessments, keep it current, and cross-reference it by conditions and technologies. The catalog should include specific characteristics for each assessment, such as

  • the disease condition studied;

  • the intervention(s) studied;

  • the population to which the assessment applies (similar to entry criteria for a randomized controlled trial);

  • the methods used in the study;

  • types and sources of data (e.g., claims data from Medicare, randomized controlled trial data from the literature);

  • the dates of collection of the study's data and of the study report;

  • indications of how much an assessment has been used (e.g., changes in financing policy, citations in medical journals); and, when feasible,

  • evidence of effects on clinical practice.

The catalog is the starting point for tracking technologies that have been previously assessed. Ideally, a public agency would also track assessments by other organizations, and OHTA is a logical repository for these data. Such a task might also be undertaken by the National Library of Medicine.
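Such a catalog maps naturally onto a small record type. A sketch of one possible shape (all field names are illustrative, not an OHTA specification):

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentRecord:
    """One catalog entry for a completed OHTA assessment."""
    condition: str                # disease condition studied
    interventions: list[str]      # intervention(s) studied
    population: str               # population the assessment applies to
    methods: str                  # methods used in the study
    data_sources: list[str]       # e.g., Medicare claims, RCTs from literature
    data_collected: str           # dates the study's data were collected
    report_date: str              # date of the study report
    uses: list[str] = field(default_factory=list)  # e.g., financing changes, citations
    practice_effects: str = ""    # evidence of effects on clinical practice

def index_by_condition(catalog):
    """Cross-reference the catalog by condition, for reassessment tracking."""
    index = {}
    for record in catalog:
        index.setdefault(record.condition, []).append(record)
    return index
```

A second index keyed by intervention would give the cross-reference by technology that the text calls for.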

Monitoring the Published Literature on Previously Assessed Topics. The agency should establish a system to monitor the published literature on previously assessed topics, given that up-to-date knowledge of a topic is the foundation for reassessment. Using the search strategies of the original assessment, the staff should monitor the literature to identify high-quality studies that could have a bearing on the decision to reassess and the occurrence, if any, of one of the triggering events listed in Box 4.1.

OHTA might also consider creating a network of expert consultants or seeking the support of medical specialty groups who would take responsibility for monitoring the literature on a topic and calling attention to developments that might warrant OHTA reassessment. The IOM has made recommendations for augmenting information resources on health technology assessment in two recent reports (IOM, 1989b, 1991b).

Evaluation of the Quality of Studies

Once literature regarding a topic has accumulated, OHTA staff should evaluate the quality of the studies. Additionally, experts in the content and methodology of the clinical evaluative sciences could review designated studies to advise OHTA on the quality of the evidence being presented. The agency could then decide whether a reassessment is desirable—that is, whether events have occurred since the first assessment that have rendered the original conclusions obsolete. An OHTA panel, presumably a subpanel of OHTA's priority-setting panel, should periodically review the data on previous assessments and decide whether the circumstances warrant reassessment.

Ranking Candidates for Reassessment

The committee recommends that candidates for reassessment be considered on the same basis as candidates for first-time assessment, using the same process. Thus, OHTA panels would consider topics for first-time assessment at the same time that they consider topics for reassessment, and OHTA would forward a single list of candidates for assessment or reassessment to the AHCPR Advisory Council. That list would contain both candidates for first-time assessment and candidates for reassessment. Figure 4.5 shows the interrelationship of the process for first-time assessments and the process for reassessment.

Figure 4.5 Relationship between process for first-time assessment and process for reassessment.


Final Steps after Establishing Priority for Reassessment

After calculating priority scores for reassessment candidates, OHTA should address two additional pertinent topics: the results of a sensitivity analysis and the cost of reassessment.

Sensitivity Analysis

If a previously assessed topic has achieved a high priority score, OHTA staff should use the data that have been assembled for setting criterion scores to perform a sensitivity analysis. The purpose of the analysis is to test whether the new information would change the conclusions of a previous assessment. For example, suppose a diagnostic device (technology A) was assessed previously and a new device (technology B) that is potentially more accurate but that will cost $300 more per patient has become available. A simple sensitivity analysis might indicate whether the recommendations about the use of technology A would change. If those recommendations would not change, even if technology B had perfect sensitivity and specificity, there would be no reason to conduct an assessment of these technologies—not, at least, until the cost of technology B falls relative to technology A.
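A minimal version of such a sensitivity analysis can be sketched as follows. Apart from the $300 cost premium taken from the example above, every figure (the prevalence among those tested, the sensitivity of technology A) is hypothetical:

```python
def cost_per_extra_case(prevalence, sens_a, sens_b, extra_cost):
    """Incremental cost per additional case detected when test B replaces
    test A (all quantities per patient tested)."""
    extra_cases_found = prevalence * (sens_b - sens_a)
    return extra_cost / extra_cases_found

# Hypothetical inputs: 2% disease prevalence among those tested and
# sensitivity 0.85 for technology A. Granting technology B perfect
# sensitivity (1.0) bounds how good the more expensive device could be.
worst_case = cost_per_extra_case(0.02, 0.85, 1.0, 300)
```

If even this best case for technology B (here, roughly $100,000 per additional case detected) exceeds the threshold used in the original assessment, the recommendation about technology A is insensitive to B's accuracy, and reassessment can wait until B's price falls.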

Cost Analysis

The cost of reassessment will vary widely. Some reassessments will be simple and relatively inexpensive to perform; others will require almost a complete rethinking of the problem. For instance, some analyses, such as those using decision-tree formats, easily permit reassessment as data change. If a new randomized trial alters the perceived treatment effect of an intervention, one can readily incorporate the new data in such an analysis and re-estimate the cost-effectiveness of various interventions included in the tree. Other reassessments, however, may require a more fundamental change in the analytic approach or incorporation of an entirely new measure of outcomes or costs.

SUMMARY

The committee has proposed a priority-setting process that includes seven elements: (1) selecting and weighting criteria for establishing priorities; (2) eliciting broad input for candidate conditions and technologies; (3) winnowing the number of topics; (4) gathering the data needed to assign a score for each priority-setting criterion for each topic; (5) assigning criterion scores to each topic, using objective data for some criteria and a rating scale anchored by low- and high-priority topics for subjective criteria; (6) calculating priority scores for each condition or technology and ranking the topics in order of priority; and (7) requesting review by the AHCPR National Advisory Council. The chapter defined the seven criteria and explained how to assign scores for each one. Three of the criteria—prevalence, cost, and clinical practice variations—are objective; they are scored using quantitative data to the extent possible. The other four—burden of illness and the likelihood that the results of the assessment will affect health outcomes, costs, and ethical, legal, and social issues—are subjective; they are scored according to ratings on a scale from 1 to 5.

The chapter also addressed special aspects of priority setting that apply only to reassessment of previously assessed technologies; these include recognizing events that trigger reassessment (e.g., change in the nature of the condition, in knowledge, in clinical practice); the need to track information related to previous assessments; and the obligation to update a previous assessment as a fiduciary responsibility and to preserve the credibility of the assessing organization.

APPENDIX 4.1: WINNOWING PROCESSES

This appendix discusses in greater detail some of the issues that arise in reducing a long list of candidate conditions and technologies (or "winnowing") for possible assessment by the Office of Health Technology Assessment (OHTA). Three general methods are discussed as a basis for the winnowing process: (1) eliciting some sense of the intensity of preference regarding a candidate on the list on the part of those who nominate it and using this information to winnow; (2) using a single criterion and a process similar to but much simpler than the quantitative model; and (3) using an implicit, panel-based process. The appendix offers options within each method and provides a rationale for the committee's suggested choices.

Intensity Rankings by Nominating Persons and Organizations

One difficulty with the "open" nomination process is that it does not necessarily reveal the intensity of the preferences of nominating individuals and organizations. Thus, one way to help establish preliminary priorities is to request nominators to include a measure of intensity and then to add these measures of interest across all nominating organizations and persons, using the final total as a preliminary ranking. Several variants on this approach are available:

  • Option A. Ask each nominating group to assign a rank from 1 to 5 (1 = least important; 5 = most important); if an item is not mentioned on a ballot, it receives a rank of 0. Sum the ranks across all ballots. Using that figure as a preliminary ranking, proceed to final ranking on (for example) the top 50 candidates.

  • Option B. Proceed as in Option A but give each ballot a fictitious budget of $1,000 to allocate across all candidate technologies. TA program staff would then add the budget allocations across ballots. For example, an organization could allocate $4 each to 250 technologies and conditions, $250 each to only 4 technologies, or the full $1,000 to a single technology. This process has the desirable feature of reflecting the scarcity of research resources available for technology assessment.

  • Option C. Use a more formal "willingness to pay" (WTP) revelation process familiar to economists (e.g., "Clarke taxes"). Such techniques attempt to measure directly the willingness of an organization or person to pay for the assessment of a specific technology. The aggregate willingness to pay for a technology assessment (summed across all ballots) represents a measure of the social value of the assessment. (Indeed, some people would assert that, if a WTP assessment is properly done, it could be used as the final priority-setting list.) The committee does not believe that enough is known about the actual conduct and reliability of Clarke tax-type methods to base current priority-setting methods on this approach alone, but some organizations may find this technique useful, at least in a preliminary stage.
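Option B's tallying step is simple to mechanize: sum each candidate's allocations across ballots and sort. A minimal sketch, with topic names and dollar allocations invented for illustration:

```python
from collections import Counter

def tally(ballots, budget=1000):
    """Sum hypothetical-dollar allocations across ballots (Option B).
    Each ballot maps candidate topics to dollar amounts within a fixed budget."""
    totals = Counter()
    for ballot in ballots:
        assert sum(ballot.values()) <= budget, "ballot exceeds its budget"
        totals.update(ballot)
    # candidates ranked by total allocated dollars, highest first
    return totals.most_common()

# Invented ballots from three nominating organizations:
ballots = [
    {"MRI of the knee": 600, "PSA screening": 400},
    {"PSA screening": 1000},
    {"MRI of the knee": 250, "carotid endarterectomy": 750},
]
ranking = tally(ballots)
```

The fixed budget is what forces each nominator to reveal intensity: dollars spent on one candidate are unavailable for any other.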

Overall, the committee believes that the use of methods like these for preliminary priority setting—at least in pure form—within the context of a public agency creates some important problems. Its questions center on who is eligible to submit "ballots" and how much each of those ballots should "count." For example, if open submissions of ballots are allowed or welcomed, and each has equal weight, then lobbying organizations could readily "stuff" the ballot box with numerous ballots, each emphasizing a single technology. (All-Star baseball voting exhibits some of this characteristic, in that fans in some cities may try to tilt the balance in favor of players on their home teams.)

One alternative is to limit the distribution of ballots or to determine in advance how much each ballot counts. (For example, the ballot of a large health insurer might count much more than that of an individual provider, and the ballot of a single-purpose charity devoted to the cure of a single disease might reflect some estimate of the size of its constituency.) However, a preliminary assignment process of this kind inherently opens up the entire process to intense political pressure, and, indeed, makes it likely that the process will become so expensive that it loses its value as a low-cost screening device.

"Open" voting with specified preference intensity (i.e., option A) raises the possibility that private parties with a strong stake in having a single technology evaluated might spend considerable resources to bring this about. The importance of OHTA assessments in some Medicare coverage decisions is an obvious reason for attempts to control the priority-setting process.

Yet in other settings, these intensity-based preference systems might function extremely well. For example, an association of primary care physicians or a large health maintenance organization (HMO) might wish to undertake its own technology assessment activities and establish its own priorities for this activity. In such cases, the membership of the association, or the staff and enrollees of the HMO, forms a natural basis for voting, and there would be no presumed preference on the part of any one person in these groups to have any single technology evaluated—except as it might affect the well-being of patients. For this reason, the committee includes a description of these preference-intensity voting systems, but it cautions against their use in settings in which they invite strategic responses.

Preliminary Ranking Processes

This winnowing method initially uses a single criterion from the final ranking system (e.g., prevalence of disease, disease burden, cost per treatment, variability in use) and provides an initial ranking on that basis. It is more data intensive than the first set of winnowing methods described above but less data intensive than a complete ranking. There are two main variants on this idea:

  • Option D. Rank all nominations on the criterion that receives the highest weight in the final priority-setting process, keep (say) the top 250, and rank those, using both the highest- and second-highest-weight criteria. This list becomes, in effect, a restricted version of the final ranking process. Keep (for example) the top 100 candidates, and conduct a full ranking on that set. The logic of this approach is that the criterion weighted highest will in many ways determine the final ranking; at least, it must be true that nominations receiving a low score on the highest-ranked criterion cannot ever receive a high enough score to make a "final 20" or some comparable list. This hierarchical approach thus eliminates nonviable candidate technologies, at a lower data-gathering cost than a complete ranking of each technology, while preserving the essential features of the ranking system.

  • Option E. In the preliminary ranking, one could select the criterion to be used in the initial ranking according to not only the weight assigned in the process but also the costs of data gathering. For example, if the highest-weighted criterion had very high data-gathering costs but the next-highest-weighted criterion had much lower data costs associated with it, one could conduct the initial ranking using the second-highest-weighted criterion instead of the highest-weighted criterion.
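Option D's hierarchy amounts to repeated partial rankings that widen the criterion set as the candidate pool shrinks. A sketch under assumed criterion names, weights, and cutoffs (all hypothetical):

```python
def partial_rank(candidates, weights, keep):
    """Keep the top `keep` candidates, scoring on only the criteria
    named in `weights` (a subset of the full criterion set)."""
    scored = sorted(
        candidates,
        key=lambda c: sum(w * c[crit] for crit, w in weights.items()),
        reverse=True,
    )
    return scored[:keep]

# Hypothetical pool of 500 nominated topics with two criteria filled in.
pool = [{"name": f"topic-{i}", "prevalence": i % 7, "cost": i % 5}
        for i in range(500)]

# Stage 1: rank on the highest-weighted criterion only; keep the top 250.
stage1 = partial_rank(pool, {"prevalence": 5}, keep=250)
# Stage 2: re-rank survivors on the two highest-weighted criteria; keep 100.
stage2 = partial_rank(stage1, {"prevalence": 5, "cost": 4}, keep=100)
# A full ranking on all criteria would then follow for these 100.
```

Data need only be gathered for the criteria actually used at each stage, which is the source of the method's cost savings.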


These types of preliminary ranking systems have an obvious disadvantage: they require that data be collected on a potentially large number of technologies. This reason alone may argue against their use in a setting where a widespread "call" to suggest interventions is likely to produce a large number of candidate topics.

Another, more subtle issue deserves mention: using other methods for preliminary screening produces an independence between two parts of the priority-setting system that the use of only one technique cannot achieve. Some people may view as a virtue the idea that the winnowing system and the final ranking system follow the same methodological basis. Others may see this commonality as a defect to be guarded against by using an alternative method for preliminary screening.

Panel-Based Preliminary Weighting

On balance, the committee believes that methods from a third group of options are preferable for preliminary screening. This approach uses one or more panels of experts to provide preliminary (subjective) rankings of the nominated technologies. To minimize costs, these activities could be conducted using mail ballots, or (a modern variant) electronic mail. Two principal versions of this process are possible:

  • Option F. Double-Delphi system. Use two separate panels, constituted with quite different memberships, and have them select (say) their top 150 technologies (and leave them unranked). Keep for final priority setting only those technologies that appear on both lists. As an alternative, each panel could "keep" perhaps 50 technologies, and the final ranked list would include those that appeared on at least one list. The Delphi rankings could be based either on subjective, implicit judgments of panel members (which makes this tactic a relatively low-cost alternative) or on data supplied to the Delphi panels (a higher-cost option). The two Delphi panels should have distinctly different memberships; in one case, perhaps, the panel would be entirely health care practitioners, and in the other, health services researchers, consumer representatives, and others not directly involved in providing care. Particularly if no data were to be presented, it would be necessary to have panels that possessed sufficient technical expertise to understand the implications of their decisions.

  • Option G. Single-panel in-and-out system. This approach would use only a single expert panel that would generate two sets of technology lists. The topics on the first list (consisting, for instance, of 5 percent of the submitted nominations) would automatically go forward to the next step in the process. The bottom (say) 50 percent of nominations would be excluded from further consideration. The remaining 45 percent (in this example) would go to a second-tier winnowing process. The second-tier process could be either (a) a repeat of this process or (b) some sort of data-driven system similar to option D above. This process is entirely self-contained if used recursively. As an example of such "recursive" use, suppose that the original round of nominations had produced 800 candidate technologies for evaluation. During the first round, 5 percent (40 technologies) would be retained and 50 percent (400 technologies) would be eliminated from consideration, leaving 360 technologies. Reapplying the same rules but keeping (for example) 10 percent during this round and eliminating (again) 50 percent would retain 36 and eliminate another 180 technologies, making a total of 76 technologies preserved at this stage and 144 as yet unclassified. Finally, keeping 10 percent of these unclassified technologies would bring the total to 90, and the process could stop there. These 90 technologies would continue through the full priority-setting process.
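The recursive arithmetic of this example can be reproduced directly; a minimal sketch, with the keep-and-eliminate fractions taken from the text and fractional counts truncated:

```python
def winnow(n_candidates, rounds):
    """Recursive in-and-out winnowing (Option G). Each round keeps a top
    fraction outright and eliminates a bottom fraction; the rest remain
    unclassified for the next round."""
    kept, remaining = 0, n_candidates
    for keep_frac, drop_frac in rounds:
        k = int(remaining * keep_frac)   # advance to full priority setting
        d = int(remaining * drop_frac)   # eliminate from consideration
        kept += k
        remaining -= k + d               # still unclassified
    return kept, remaining

# The committee's worked example: 800 nominations, three rounds.
kept, unclassified = winnow(800, [(0.05, 0.50), (0.10, 0.50), (0.10, 0.0)])
# kept == 90, matching the example in the text
```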

Comment

The final choice of a winnowing method for an organization could well depend on the balance between openness and expert knowledge desired in the process. The most open systems are those in the first group discussed earlier (options A-C) because they rely on intensity of preferences as expressed by nominating organizations. Because of their openness, however, they intrinsically invite expenditure of private resources and attempts to control the system. The third group of approaches (options F and G) makes fullest use of expert knowledge. The second set—ranking on the basis of a subset of the eventual criteria—best preserves the intent of the final priority-setting process but is more data intensive and thus potentially more costly. Organizations engaged in priority setting may also find it useful to use a winnowing process that quite deliberately does not use the same approach as the final process. They would then use activities from the first or third groups rather than the second.

As was argued earlier, because of the intrinsic problems associated with the first two groups of winnowing strategies as applied in the OHTA setting, the committee recommends that OHTA adopt as a preliminary winnowing system either option F or option G from the panel-based set. The committee has no strong preference between the two; ease of implementation thus could be a key consideration in the ultimate choice. Option G offers a recursive system on the grounds that it is easier to pick "the top" and "the bottom" of any list than it is to rank every element within it.

In other settings, the other methods have potential value and may well be preferred to any from the third group. For example, in settings with a narrow focus that leads to a limited number of submissions, data-driven methods similar to those in the second group (options D and E) may be the best, particularly if the organization conducting the priority assessment values consistency of method in the preliminary and final priority-setting processes. In other arenas, options from the first group (A, B, and C) may have considerable appeal, particularly for those with a naturally closed population from which nominations might emerge and in which no particularly strong stake exists in having any single technology or condition studied. Finally, one can imagine combinations of the processes; for instance, a panel-based preliminary ranking might use option B or C to distribute votes or hypothetical dollars.

Regardless of the choice of process, the committee believes that it is desirable in any priority-setting process to rely at least in part on nominating organizations to provide information relevant to the final process—for example, information on costs, prevalence of disease, burden of disease, and variability of treatment use across geographic regions.

Finally, it must be remembered that the winnowing process plays only a minor role in determining the eventual set of activities chosen for technology assessment. Its only goal is to speed up (and reduce the costs of) the final priority-setting process. To the extent that winnowing achieves this goal at low cost and without eliminating technologies that would otherwise be assigned high priority, it has succeeded; conversely, to the extent it becomes elaborate and expensive, it defeats the purpose of using any winnowing strategy. For these reasons, the committee advises the choice of a winnowing technique that reflects the goals of simplicity, avoidance of control by special interests, and low cost.6

APPENDIX 4.2: METHODOLOGIC ISSUES

Two key methodologic issues for deriving a formula for the technology assessment priority score are (1) the scale on which each of the criterion scores is expressed and (2) the means used to maintain consistent relationships among the weights assigned to each criterion. In regard to scaling, the priority-setting process outlined by the committee uses logarithms of each criterion score. The discussion that follows explains the choice of the particular logarithmic approach used by the committee.

The IOM priority-setting process uses "natural" units for objective criteria, such as the prevalence of a condition and the costs of care for the condition (e.g., "head counts" for the number of affected, dollars for cost).

6. Low cost in this context includes both public and private expenditures. Procedures established within the priority-setting process that invite considerable investment by private parties to manipulate them are self-defeating.


Yet an approach using natural units would mean that every weight could be affected by a change in the scale of any other measure. For example, a quite natural scale of measurement of per-capita spending is the position of spending on the disease within the range of spending on all diseases under consideration. Thus, if per-capita spending ranged from $1 per person (at the low end) to $1,000 (at the high end), one could use a measure of where $500 falls between the low and high ends (i.e., (500 − 1)/(1,000 − 1) = 499/999 ≈ 0.5) rather than the $500 itself. With this type of measure (scaled by the range across all interventions), the value for the criterion score would be about 0.5. In a model based on natural units, to maintain the importance of "spending," one would have to modify the weight so that the product of the criterion weight and the criterion score remained unchanged. Thus, if the scale of measurement of any of these components were changed, the weight would also have to be changed to keep the product unchanged.
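The range-based measure in this example is ordinary min-max scaling:

```python
def range_scaled(value, low, high):
    """Position of a natural-unit value within the observed range,
    mapped onto the interval [0, 1] (min-max scaling)."""
    return (value - low) / (high - low)

# $500 per capita within a $1-to-$1,000 range across all conditions:
score = range_scaled(500, 1, 1000)   # 499/999, roughly 0.5
```

Note how the resulting criterion score depends on the range across all candidates, which is exactly what entangles the weights with the measurement scale.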

The problem of the interaction between the weights and the scale of measurement of the values that determine a criterion score can be avoided by a simple mathematical modification. When relative importance is used to determine the criterion weights, the logarithmic transformation provides the same results regardless of the scale on which each component is measured.

Properties of Logarithms

For those not familiar with the mathematics of logarithms, it may be helpful to review two of their properties. In the general equation b^y = x, the exponent y is called the log of x to the base b (one can describe the log as the exponent y to which the base b must be raised to get the number x). Thus, an equivalent expression is y = log_b(x).

The first property of logarithms is that log_b(xw) = log_b(x) + log_b(w). That is, the log of the product of x and w is the sum of the log of x and the log of w (thus, we use the term "multiplicative" or "log additive" to describe the committee's model).

The committee's model might use a logarithm with any base, but the committee chose to use natural logarithms (ln), in which the base is e (an irrational number whose decimal expansion is 2.71828 ...). Substituting the term ln for log and the expressions S1 and S2 for x and w, respectively, it is apparent that y = ln(S1) + ln(S2).

The second property of logarithms helps to explain the role of criterion weights: log_b(x^r) = r·log_b(x). In other words, the log of x raised to the power r is equivalent to multiplying r by the log of x. Again, substituting the committee's expressions as above shows that taking the log of a criterion score raised to its criterion weight is equivalent to multiplying the criterion weight by the natural log of the criterion score. Thus, y = Priority Score, and, as in Chapter 4, Equation (1) is as follows:

Priority Score = W1·ln(S1) + W2·ln(S2) + ... + W7·ln(S7)    (1)

Application to the IOM Model

The use of logarithms is neither intuitive nor familiar to most people, but it does express a natural way of thinking. The logarithmic transformation will accomplish the desired scaling, no matter what "natural" scaling is used. All that is necessary to implement this approach is for participants in the priority-setting process to agree that the relative weights represent the relative importance of the criteria. One can then measure the individual score components in any way one desires, as long as measurements are consistent across technologies for a given criterion. A weight of 1 yields a proportional increase in priority as a component increases; a weight of 2 increases a priority score by approximately 20 percent for each 10 percent increase in a criterion score.
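The scale-invariance claim can be checked with a short sketch (criterion values and weights invented): rescaling a criterion shifts every topic's score by the same constant, so score differences, and hence rankings, are unchanged.

```python
import math

def priority_score(scores, weights):
    """Log-additive priority score: the sum of weight * ln(criterion score)."""
    return sum(w * math.log(s) for s, w in zip(scores, weights))

weights = [1.0, 2.0, 1.0]              # hypothetical criterion weights
topic_a = [200_000, 3.0, 1500.0]       # hypothetical criterion values
topic_b = [80_000, 4.0, 2400.0]

# Rescale the first criterion (e.g., dollars -> thousands of dollars)
# for every topic at once.
topic_a2 = [topic_a[0] / 1000] + topic_a[1:]
topic_b2 = [topic_b[0] / 1000] + topic_b[1:]

gap_before = priority_score(topic_a, weights) - priority_score(topic_b, weights)
gap_after = priority_score(topic_a2, weights) - priority_score(topic_b2, weights)
# gap_before == gap_after (up to rounding): the ranking cannot change
```

Rescaling subtracts the same constant, weight × ln(1000), from every topic's score, which is why neither differences nor rankings move.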

To provide an example of how use of relative importance can eliminate worry about how the various components in the criterion scores are measured, let us consider the Phelps and Parente (1990) model. In this model, N = the number of people treated annually, P = the average cost per procedure performed, Q = the average per-capita quantity, COV = the coefficient of variation for the procedure across regions, and e = the demand elasticity. The priority-setting index I for intervention j is:

I_j = N_j · P_j · Q_j · (COV_j)^2 / e_j    (5)

If one assigns the relative importance weights for each element in Equation (5) as 1, 1, 1, 2, and −1 and takes the logarithm of each, then

ln(I_j) = ln(N_j) + ln(P_j) + ln(Q_j) + 2·ln(COV_j) − ln(e_j)    (6)

Mathematically, the effect of changing the values of the variables on the right-hand side of Equations (5) and (6) can be expressed in terms of percentages. Thus, a 10 percent increase in the number of people treated for intervention j (N_j) raises the value of I_j by 10 percent (and similarly for P_j and Q_j); a 10 percent increase in COV_j increases the index by approximately 20 percent (a factor of 1.1^2 = 1.21); and a 10 percent increase in e_j decreases the index by approximately 10 percent (a factor of 1/1.1).
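These percentage effects can be verified numerically. Taking the index in the multiplicative form implied by the relative importance weights 1, 1, 1, 2, and −1 (i.e., I = N·P·Q·COV²/e), with all parameter values invented:

```python
def pp_index(N, P, Q, COV, e):
    """Phelps-Parente-style index with relative-importance weights
    1, 1, 1, 2, -1: I = N * P * Q * COV**2 / e."""
    return N * P * Q * COV**2 / e

# Hypothetical intervention j:
base = pp_index(N=100_000, P=2_000, Q=0.05, COV=0.30, e=0.20)

# 10 percent increase in N raises I by 10 percent:
r_n = pp_index(110_000, 2_000, 0.05, 0.30, 0.20) / base
# 10 percent increase in COV raises I by a factor of 1.1**2 = 1.21
# (about 20 percent in log terms):
r_cov = pp_index(100_000, 2_000, 0.05, 0.33, 0.20) / base
# 10 percent increase in e divides I by 1.1:
r_e = pp_index(100_000, 2_000, 0.05, 0.30, 0.22) / base
```

The exponents act exactly as the weights: each variable's percentage effect on the index is its weight times the percentage change in that variable.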

Using logarithms is an approach that is intended to reflect relative place on a scale of importance. In producing priority scores for each candidate condition or technology, the relative ranking of each procedure will be the same, regardless of how each of the criterion scores is measured. The relative difference in priority scores similarly will be unaffected by changes in the scale used to measure any criterion score.

Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 57
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 58
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 59
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 60
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 61
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 62
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 63
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 64
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 65
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 66
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 67
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 68
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 69
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 70
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 71
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 72
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 73
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 74
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 75
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 76
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 77
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 78
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 79
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
×
Page 80
Suggested Citation:"4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS." Institute of Medicine. 1992. Setting Priorities for Health Technologies Assessment: A Model Process. Washington, DC: The National Academies Press. doi: 10.17226/2011.
The problem of deciding which health care technologies to evaluate is urgent. With new technologies proliferating alongside steadily increasing health care costs, it is critical to discriminate among technologies to direct tests and treatments at those who can benefit the most.

Given the vast number of clinical problems and technologies to be evaluated, the many months of work required to study just one problem, and the relatively few clinicians with highly developed analytic skills, institutions must set priorities for assessment. This book sets forth criteria and a method that can be used by public agencies such as the Office of Health Technology Assessment (in the U.S. Public Health Service) and by any private organization conducting such work to decide which technologies to assess or reassess.
