
6
Recommendations and Conclusions

This committee was charged to propose a process for setting priorities for technology assessment for use by the Office of Health Technology Assessment (OHTA) (in the Agency for Health Care Policy and Research—AHCPR) and by other assessment organizations. In responding to this charge, the committee organized its work—and this report—at three levels of specification: general principles, a proposed process, and information about how to implement the process within OHTA and in other organizations that conduct health technology assessment.

This chapter has three main parts. First, it reviews the main points of the report: the rationale for the process developed by the committee, 11 recommendations (details are given in Chapters 3, 4, and 5), seven steps or tasks needed to implement the proposed process, anticipated resources and periodicity of the process, and issues that might arise during implementation.

Second, it examines how the proposed priority-setting process might be used or adapted by other organizations and for purposes other than technology assessment. Third, it discusses the committee's views on some potential problems that may arise.

REVIEW OF THE COMMITTEE'S RATIONALE AND RECOMMENDATIONS

At the outset of its work, the committee reviewed the priority-setting processes of a number of organizations. From this review, it established a set of principles to govern priority setting in a public agency such as OHTA. The basic principle is that OHTA's process should be consistent with its mission of directing its resources for technology assessment toward medical conditions and technologies that have the greatest impact on the well-being of the public and on the public's expenditures for health care. By adhering to this principle, the committee believes that OHTA can identify and evaluate the medical conditions and technologies whose assessment will offer maximum benefits to the nation's citizens.

Several specific benefits of an OHTA priority-setting process include the potential to improve the health and well-being of the public, reduce needless or inappropriate health expenditures, reduce inequities and maldistribution of health care, and inform ethical, legal, and social issues related to candidate topics. The committee enunciated three other objectives of a priority-setting process: it must (1) meet the information needs of users, (2) be efficient, and (3) be sensitive to the assessing organization's political, economic, and social constraints and be—as well as appear to be—objective and fair. A process that satisfies these principles and objectives is summarized in the 11 recommendations that follow.

Recommendations

Recommendation 1

OHTA should adopt a systematic process to assist decision making about which medical conditions and technologies it should assess or reassess. The process should involve a broad spectrum of interested parties and should be open to public view, resistant to control by special interests, and clearly understandable.

The process proposed by the committee would be conducted in two phases: (1) setting weights for criteria, which would be performed approximately every 5 years, and (2) implementing the rest of the priority-setting process, which would be performed approximately every 3 years.

Recommendation 2

OHTA technology assessment, whenever feasible, should focus on a clinical problem (e.g., diagnosis of coronary artery disease) rather than on a technology per se (e.g., exercise thallium radionuclide scan). Similarly, priority setting should address clinical conditions.

Although concern about a new test or treatment often leads to calls for its assessment, whenever possible, a technology should be evaluated within the context of the clinical condition for which it is being used. There are two reasons for proposing this orientation. First, technology assessment should be comparative, implying that it should answer a useful clinical question: Which technology should a practitioner use and under what clinical circumstances? Second, a technology can only be evaluated in the context of what it does, which is to help solve a clinical problem.

Recommendation 3

OHTA technology assessments should compare the alternative technologies for managing a clinical condition. Similarly, the priority-setting process should include alternative technologies for managing a clinical condition.

The data required to determine the assessment priority of a clinical condition depend on which technologies are relevant to its management. (For example, the expected cost of managing a condition depends on the costs of the individual technologies that might be used.) This recommendation holds true even when a new technology is the first to be applied to a clinical problem: there are no obvious comparative technologies, but watchful waiting without therapeutic intervention is always a valid, and important, alternative.

Many parties need information about alternative technologies for managing a condition. For instance, clinicians and patients must choose among alternative tests and treatments. Third parties, too, are concerned about the marginal effects of a technology—the additional benefits and risks represented by one technology in comparison with another. All such comparisons should take place on a "level playing field"; that is, the same methods and clinical circumstances should be applied to all of the technologies. An analogy to empirical studies is apt: the use of historical controls rather than concomitant controls in primary research is normally not sufficient because conditions change over time and variables other than the one being singled out for study may be responsible for observed differences. The same reasoning holds for technology assessment: referring to analysis done 10 years earlier is not acceptable as a form of comparison because the techniques, methods, and assumptions of the earlier analysis may not be the same as those currently used.

Although comparative data are preferred, they are sometimes difficult to acquire—particularly in the case of many alternative approaches to a particular condition. Thus, it may at times be necessary to conduct more limited assessments.

Recommendation 4

OHTA should identify criteria that best characterize a topic's importance as a candidate for assessment. The committee recommends the following objective criteria:

  • prevalence of the specific condition;

  • unit cost of the technologies commonly used to manage the condition (or the unit cost of a technology and its alternatives); and

  • variation in the rate of use of a technology for managing the condition (or variations in the rates of use of the technology and its alternatives).

Ordinarily, the data required to characterize a candidate topic may be found in the published literature or elsewhere in the public record. Prevalence is the number of people with the condition per 1,000 persons in the general population. Unit cost is the total direct and induced cost of conventional management for a person with the clinical condition. Variation in rates of use across different settings of care is measured by the coefficient of variation. A high coefficient of variation frequently implies a low level of consensus about clinical management.
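
To make these objective criteria concrete, the following minimal sketch (not drawn from the report; the numbers, variable names, and five-setting example are hypothetical) shows how a staff analyst might tabulate prevalence, unit cost, and the coefficient of variation of use rates for one candidate topic.

```python
# Hypothetical sketch: tabulating the three objective criteria for one
# candidate topic. Values and helper names are illustrative only.
from statistics import mean, pstdev

def coefficient_of_variation(rates):
    """CV = standard deviation of use rates across care settings / mean rate."""
    return pstdev(rates) / mean(rates)

# Assumed use rates (procedures per 1,000 persons) observed in five settings.
use_rates = [2.1, 3.8, 1.4, 5.0, 2.7]

prevalence_per_1000 = 12.5      # persons with the condition per 1,000 population
unit_cost = 8_400.0             # direct plus induced cost per person, in dollars
cv_of_use = coefficient_of_variation(use_rates)

print(f"Prevalence per 1,000: {prevalence_per_1000}")
print(f"Unit cost: ${unit_cost:,.0f}")
print(f"Coefficient of variation of use rates: {cv_of_use:.2f}")
```

A high value of the computed coefficient of variation would, as the committee notes, frequently imply a low level of consensus about clinical management.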

The committee also recommends the following subjective criteria:

  • burden of illness imposed by the clinical condition;

  • potential of the results of the assessment to change health outcomes;

  • potential of the results of the assessment to change costs; and

  • potential of the results of the assessment to inform ethical, legal, or social issues.

Although some objective data about these criteria may exist, integration of these data often requires subjective estimates as well as judgments about the likely effect of an assessment; thus, the committee considers these four criteria subjective. Each criterion is described briefly below and is discussed in greater detail in Chapter 4.

Burden of illness, which is estimated at the level of the patient rather than of society, is the difference between the quality-adjusted life expectancy (QALE) of a patient who has the condition and who receives conventional treatment and the QALE of a person of the same age who does not have the condition. The potential of the results of the assessment to change health outcomes is the expected effect of the result of the assessment on health outcomes for patients with the illness. It includes consideration of the findings of the assessment and of the likelihood of policy and administrative changes, clinical practice changes, and patient acceptance. The potential of the results of an assessment to change costs is the expected effect of the results of an assessment on the costs of illness for patients with the illness. It includes direct costs to the patient and induced costs.
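
As an illustration of the burden-of-illness criterion, the sketch below (an assumption for demonstration, not the committee's algorithm) computes the QALE difference between a patient with the condition under conventional treatment and an otherwise similar person without it; the life expectancies and quality weights are hypothetical.

```python
# Hypothetical sketch of burden of illness as a QALE difference.
def qale(life_expectancy_years, avg_quality_weight):
    """Quality-adjusted life expectancy: remaining years times an average
    quality-of-life weight between 0 and 1."""
    return life_expectancy_years * avg_quality_weight

# Assumed values for a 60-year-old patient.
qale_without_condition = qale(22.0, 0.92)
qale_with_condition = qale(15.0, 0.75)   # under conventional treatment

burden_of_illness = qale_without_condition - qale_with_condition
print(f"Burden of illness: {burden_of_illness:.1f} quality-adjusted life years")
```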

The committee anticipates that most conditions will be adequately ranked based on the first six criteria listed above. The seventh criterion, which considers the likelihood that the assessment would help to inform ethical, legal, and social issues, gives the panelists the opportunity to take a broad social perspective and to ask whether there is anything that had not been captured in the first six criteria that would alter the assessment priority of this particular topic.

Recommendation 5

OHTA should use an explicit process to determine a candidate topic's priority ranking. In the ranking process, the criteria that are important in deciding whether to do an assessment determine a topic's priority rank.

The committee recommends the use of a process that can be examined, challenged, and adjusted on the basis of tests of its reliability and validity. Use of a quantitative model as part of this process allows assumptions to be explicitly stated and individually assessed; it also permits the use of data, whenever they are available.

Recommendation 6

The committee recommends a specific quantitative method to calculate a priority score for each candidate topic, using the following formula:

Priority score = W1 ln(S1) + W2 ln(S2) + . . . + W7 ln(S7), where W is the criterion weight, S is the criterion score, and ln is the natural logarithm of the criterion score.

A panel of people from a broad spectrum of interests should set the criterion weights.

In the process proposed by the committee, a broadly based panel would be created to lead the necessary activities. Its first task would be to establish the criterion weights through one of several possible procedures (see Chapter 4). Once established, these criterion weights remain constant for the entire priority-setting process (i.e., across all candidate topics).

A topic's priority score determines its priority rank. According to the committee's method, each candidate topic receives a criterion score for each of the seven criteria (for example, S1 might be prevalence expressed as a number per 1,000 persons in the general population). In addition, each criterion has a criterion weight that reflects its importance in determining priorities for technology assessment. (W1, for example, might be a weight of 2 for prevalence, relative to a burden-of-illness criterion weight of 3.)

Each candidate topic has its own combination of criterion scores (Sn) for the seven attributes. The panel noted above (or a subset of its members) reviews data prepared for each topic by OHTA staff and assigns the criterion scores. Objective criterion scores are determined by a subpanel with expertise in clinical epidemiology and statistics. Subjective criterion scores are determined by a broadly representative panel (or subpanel) with expertise in health care.

The rationale for taking the natural logarithm of the criterion score is to avoid the intractable problem of combining numbers that represent attributes with different units into a summary score. The logarithm of a number solves this problem because it is unitless.
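
A minimal sketch of the calculation, using hypothetical weights and scores, appears below; it simply sums each criterion weight times the natural logarithm of the corresponding criterion score, as the formula above specifies.

```python
# Sketch of the committee's priority-score formula with hypothetical inputs.
import math

def priority_score(weights, scores):
    """Sum of W_i * ln(S_i); criterion scores must be positive."""
    if len(weights) != len(scores):
        raise ValueError("Each criterion needs exactly one weight and one score.")
    return sum(w * math.log(s) for w, s in zip(weights, scores))

criterion_weights = [2, 1, 1, 3, 2, 2, 1]            # e.g., prevalence 2, burden of illness 3
criterion_scores = [12.5, 8400.0, 0.55, 4, 3, 4, 2]  # mixed units; the logarithm is unitless

print(f"Priority score: {priority_score(criterion_weights, criterion_scores):.2f}")
```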

Recommendation 7

OHTA should actively solicit nominations of topics to be considered for assessment. The solicitation should include payers, health professionals and their representative organizations, manufacturers of medical products, business, labor, government agencies, and consumers of health care.

The committee judged that a widespread solicitation of topics is crucial to the success of the priority-setting effort. In particular, the solicitation should be broad enough to ensure that important technologies are not omitted inadvertently from consideration and that all important constituencies are included in the process.

Recommendation 8

OHTA should develop a structured procedure for reducing the number of nominations.

The initial number of nominations will almost certainly far exceed staff capacity to collect the data required to assign criterion scores to each topic. Therefore, the committee proposes that a formal procedure be adopted to reduce that initial list to a manageable size—a technique it calls "winnowing." To be feasible, the winnowing technique should be much less costly than the full ranking system. Practical approaches include preliminary ranking according to one or two of the objective criteria or a consensus process in which several groups would subjectively rank subsets of candidates by mail ballot.
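
One of the practical approaches mentioned above, preliminary ranking on one or two objective criteria, could look roughly like the following sketch; the topics, values, and the prevalence-times-cost proxy are assumptions for illustration.

```python
# Hypothetical winnowing sketch: rank nominations by a crude proxy for
# aggregate expenditure (prevalence x unit cost) and keep a short list.
nominations = [
    {"topic": "Management of condition A", "prevalence": 40.0, "unit_cost": 1_200.0},
    {"topic": "Management of condition B", "prevalence": 3.5,  "unit_cost": 22_000.0},
    {"topic": "Management of condition C", "prevalence": 15.0, "unit_cost": 6_500.0},
    {"topic": "Management of condition D", "prevalence": 0.8,  "unit_cost": 900.0},
]

def preliminary_rank(topic):
    return topic["prevalence"] * topic["unit_cost"]

short_list = sorted(nominations, key=preliminary_rank, reverse=True)[:3]
for entry in short_list:
    print(entry["topic"])
```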

Recommendation 9

OHTA should consider all previously assessed topics as candidates for reassessment.

OHTA has a special obligation as an influential public agency to revisit any previously assessed topics whose recommendations may be based on outdated or now erroneous information. A change in the nature of the condition, expanded professional knowledge, a shift in clinical practice, or publication of a new, conflicting assessment might trigger consideration of a condition and technology for reassessment.

Recommendation 10

OHTA should maintain a data base on each topic that has been previously assessed and should catalog information pertaining to the topic.

A catalog will make it easier for OHTA to know when to consider topics for reassessment and when newly published information is relevant to a topic that has been previously assessed. Information should include descriptions of data, populations, and methods used in the earlier assessment, the impact and controversy generated, and a topic-specific estimated date or interval for considering reassessment.
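
The kind of record such a catalog might hold is sketched below; the field names are hypothetical, chosen to mirror the information the committee lists (data, populations, methods, impact, controversy, and a reassessment date).

```python
# Hypothetical record layout for a catalog of previously assessed topics.
from dataclasses import dataclass
from datetime import date

@dataclass
class AssessedTopicRecord:
    topic: str
    assessment_date: date
    data_sources: list[str]
    populations_studied: list[str]
    methods: str
    impact_notes: str = ""
    controversy_notes: str = ""
    reassessment_review_date: date | None = None   # topic-specific trigger date

record = AssessedTopicRecord(
    topic="Diagnosis of coronary artery disease",
    assessment_date=date(1990, 6, 1),
    data_sources=["published trials", "claims data"],
    populations_studied=["adults with suspected coronary artery disease"],
    methods="literature synthesis and decision modeling",
    reassessment_review_date=date(1993, 6, 1),
)
print(record.topic, "-> consider reassessment by", record.reassessment_review_date)
```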

Recommendation 11

OHTA should set priorities among topics for reassessment at the same time and on the same footing that it sets priorities for first-time assessment. That is, the committee recommends that OHTA create one rank-ordered list that contains both topics for reassessment and topics for first-time assessment.

The process of determining the need for reassessment can be accommodated within a priority-setting process for first-time assessments with the addition of several specific components: (1) a system for tracking previous assessments and events that prompt recognition that a major factor (e.g., a clinical condition or practice, information) has changed relative to the old assessment; (2) evaluation of literature that suggests that reassessment might be needed; (3) a decision by the priority-setting panel that a technology or clinical practice has changed sufficiently to warrant reassessment; and (4) a sensitivity analysis that suggests that the conclusion of an initial assessment might change when a reassessment is conducted.

There are several steps in deciding to do a reassessment. The first is to decide whether events that have occurred since the first assessment have made the original conclusions obsolete, as outlined in recommendation 10. The second step is to evaluate the quality of the studies that suggest that reassessment might be needed. Third, an OHTA panel, presumably a subpanel of OHTA's priority-setting panel, should periodically review the data on previous assessments and decide whether the circumstances warrant reassessment. Fourth, topics designated for reassessment would be added to the list of candidates for first-time assessment that survive the winnowing process.

If a previously assessed topic has achieved a high priority score for reassessment, program staff should use the data that have been assembled for setting criterion scores to perform a sensitivity analysis. This analysis would indicate whether the new information would change the conclusions of a previous assessment. If a sensitivity analysis indicates that current recommendations about the use of a technology would not change, even given the reasons for a reassessment, no reassessment should be undertaken.
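
The screening role of the sensitivity analysis might be sketched as follows; the toy decision rule, thresholds, and input ranges are assumptions, not the committee's model, but they illustrate the test of whether new information could change a previous conclusion.

```python
# Hypothetical sketch: check whether plausible new values for changed
# inputs could flip the original assessment's conclusion.
def recommend_coverage(effectiveness, cost_per_qaly, threshold=50_000):
    """Toy rule: recommend the technology if it works and is cost-effective."""
    return effectiveness > 0 and cost_per_qaly <= threshold

original_conclusion = recommend_coverage(0.10, 42_000)

# New evidence suggests lower effectiveness and higher cost per QALY.
plausible_new_inputs = [(0.08, 47_000), (0.05, 55_000), (0.02, 70_000)]
new_conclusions = [recommend_coverage(e, c) for e, c in plausible_new_inputs]

if any(c != original_conclusion for c in new_conclusions):
    print("Conclusion could change: a full reassessment is warranted.")
else:
    print("Conclusion is robust to the new information: skip the reassessment.")
```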

REVIEW OF STEPS AND ISSUES IN IMPLEMENTATION

The committee has proposed seven steps for its priority-setting process. Each step, which is explained in greater detail in Chapter 4, is summarized below. Also discussed in Chapter 5 and summarized below are four implementation issues: the resources needed for implementation of the process, how often priority setting should occur, what products of the process should be available to the public, and what should be done when there is insufficient evidence to conduct an assessment based on a review of the literature.

Steps in a Priority-Setting Process

Step 1. Selecting and Weighting the Criteria Used to Establish Priority Scores

This step requires that a broadly representative panel be constituted to select and define the criteria to be used for priority setting. In recommendation 4 and in Chapter 4 of this report, the committee defined seven criteria and recommended that they be adopted for use by OHTA. Chapter 5 discussed a number of points to be considered before changing the criteria or their definitions. In addition to selecting and defining criteria, the panel noted above would assign each criterion a weight that reflected its relative societal importance.

Step 2. Identifying Candidate Conditions and Technologies

OHTA program staff would seek nominations from a wide range of groups concerned with the health of the public. This solicitation is likely to produce a large set of candidate topics.

Step 3. Winnowing the List of Candidate Conditions and Technologies

Earlier in the report, the committee described several methods to reduce (winnow) the set of candidate conditions. The committee suggests one particular method—a so-called panel-based preliminary ranking system—that is less data intensive than the other methods, but also less costly than the full ranking system, free of bias, resistant to control by special interests, and easily understandable to all participants. The method uses one or more panels to provide preliminary (subjective) rankings of the nominated technologies. To minimize costs, these activities could be conducted using mailed ballots.

Step 4. Data Gathering

The fourth element of the process calls for OHTA staff to define all alternative technologies for care of a clinical condition and to gather the data required for each objective priority-setting criterion.

Step 5. Creating Criterion Scores

In this step, a broadly representative subpanel would use consensus methods to create subjective criterion scores. A subpanel that included members with clinical experience and expertise in epidemiology and health statistics would determine criterion scores for objective criteria using data assembled for each clinical condition.

Step 6. Computing Priority Scores

The quantitative model developed by the committee and presented in Chapter 4 combines empirical rates (objective criterion scores) and subjective ratings (subjective criterion scores) as developed by the subpanels mentioned in step 5. For each condition and technology, each criterion weight is multiplied by the natural logarithm of the corresponding criterion score, and the products are summed to form a single priority score for ranking.

Step 7. Review of Priority Rankings by the National Advisory Council of the Agency for Health Care Policy and Research

The AHCPR National Advisory Council would review the priority list and adjust it if desired before advising the AHCPR administrator to implement the set of priorities for assessment.

Resources for Implementation

The committee has carefully considered how best to implement the priority-setting process and to meet the additional resource requirements that implementation may impose while still achieving the goals of a credible, sound, defensible model process. Based on the committee's experience with the pilot test, the priority-setting process is likely to require more resources than are currently available if OHTA is to respond to its expanded mission. The committee thus viewed its work in part as a strategic effort to look ahead to reasonable goals for AHCPR and OHTA and to characterize the kinds and levels of program resources that would be needed.

The committee concluded that the importance of the priority-setting effort warrants a staff large enough to accomplish its mission of helping to use the country's technology assessment resources wisely. The committee believes implementing its process will require staffing at least comparable to that for a grant review study section. Human resources needed to implement the proposed process include program staff and priority-setting panels.

The Priority-Setting Cycle

Priority setting for OHTA should occur approximately every 3 years. The broadly representative panels that are constituted to carry it out have four tasks (described in steps 1, 3, and 5 above). Panel task 1, which is initially to select criteria and set criterion weights, occurs approximately once every 5 years; panel tasks 2 through 4 would occur about once every 3 years. These latter tasks are, respectively, to reduce the long list of candidate conditions and technologies to a more manageable size (i.e., "winnowing"), to generate subjective criterion scores, and to generate objective criterion scores.

Throughout the 3-year cycle, OHTA program staff would be responsible for tracking information related to previous assessments.

Publicly Available Products

The committee views the priority-setting process as a public good that will be one of OHTA's most valued products; thus, OHTA should generate a list of priorities for assessment that is extensive enough for use by other organizations that perform technology assessment. Of further benefit will be the data base that OHTA creates during the process of compiling data for the quantitative model. The data base (containing such information, for example, as cost per case of the top-ranked conditions) will itself be a resource to other organizations.

Topics with Insufficient Evidence for Assessment Based on Review of the Literature

The committee suggested three possible responses to a lack of scientific comparative data for assessment of a condition or technology. For instance, assessors might prepare an interim statement that would estimate how effective the technology would have to be for it to be cost-effective. Alternatively, they might use decision modeling as an interim approach until sufficient data are available, or they might encourage primary research; they might also employ combinations of these steps. Topics that are of high priority for assessment and for which there is insufficient evidence should be identified and proposed as topics for further research that might be encouraged and supported by AHCPR. This concept of linking priority setting, assessment of the evidence, and a research agenda is an important foundation for technology assessment and for evidence-based medical practice. Indeed, the committee recommends that AHCPR adopt this approach to setting its research agenda.

ADOPTION OF THE IOM'S PRIORITY-SETTING PROCESS BY OTHER ORGANIZATIONS

Many organizations evaluate health technology, although the major categories of such organizations are third-party payers, such as the Health Care Financing Administration (HCFA) and the Blue Cross and Blue Shield Association (BCBSA), and associations that represent physicians, such as the American College of Physicians (as described in Chapter 2). The committee developed this proposal for a priority-setting process with the expectation that the process would apply and be useful to these and similar organizations as well as to OHTA.

Requests from HCFA's Bureau of Policy Development at present constitute almost the entire workload of OHTA, but the bureau has no formal system for selecting technologies that are to be evaluated by OHTA. BCBSA member plans conduct their own technology evaluations, which are used, in part, to make coverage decisions. The member plans also rely on information supplied by the national BCBSA organization, which has an internal technology assessment program for new and emerging technologies and which has commissioned several major programs of assessment of established technologies by the American College of Physicians (ACP, 1990, 1991). The committee believes that most of its recommendations for a priority-setting process will apply to these private organizations as well as to OHTA, for the following reasons:

  • Although these organizations are part of the private sector, they also constitute a major public resource, both individually and collectively. The more they structure their technology assessment activities, including priority setting, as a public service, the greater the good they will do for their own private purposes and for their mission of public service. By focusing on clinical conditions rather than on individual technologies, their assessments are more likely to compare relevant alternative patient care strategies.

  • The argument that priorities for assessment should be determined on the basis of several criteria is quite generalizable. An organization that uses only one dimension (e.g., cost, burden of illness) is oversimplifying a very complex matter. The trade-off between cost and effectiveness is one of the most important questions that physicians and patients must understand and resolve daily in the office or hospital. Those who pay for care and those who provide it will, ultimately, disadvantage themselves if they focus only on one dimension of health technology. The committee has maintained that the first objective of modeling is to develop a model that reflects the organization's mission, and it is entirely possible that a company's mission is primarily to increase shareholder value. In this instance, the firm arguably cannot be expected to place much emphasis on serving the national interest. Nevertheless, some on the committee take a broader view, believing that even for-profit concerns should, in their own long-term self-interest, adopt a "national interest" perspective as well. To the extent this proposition is true, groups that do not adopt a "multifactorial" approach to priority setting may short-change their own interests as well as those of the nation.

  • Because the committee's process accommodates the choice of any priority-setting criteria, an organization may choose criteria that serve its own interests. The committee argues, however, that public trust, which sustains any large organization of payers or professionals, requires criteria that are responsive to the public interest, as exemplified by the committee's seven criteria.

  • If one accepts the argument that any organization performing health technology assessment, or the officers of that organization who are responsible for the technology assessment, are accountable to the public, at least in very general terms, it would seem to follow that any process of establishing priority rankings should be open, explicit, and understandable. Although the priority-setting process could simply involve implicit judgments about how well a candidate topic meets explicit criteria, an explicit method for determining priority rankings is better than an implicit method at satisfying the requirement for openness.

  • The process of soliciting nominations is one element of an ideal process that could be designed to satisfy the needs of a specific organization without compromising the public interest.

  • The committee believes that any program of technology assessment must encompass a commitment to reassess topics that have been previously assessed. This commitment must be supported by a program to monitor previously assessed topics for new information that might prompt a reassessment. The rationale for this recommendation is public accountability, but it applies to private interests as well. For example, an organization of physicians should not have a potentially obsolete policy on the public record. Neither should a payer continue to provide or to withhold coverage on the basis of information that may have been superseded by newly published data.

Technology Assessment and Clinical Practice Guidelines

The committee's priority-setting process may also be useful in setting priorities for developing practice guidelines. At present, many organizations, including AHCPR's Forum for Quality and Effectiveness in Health Care, actually produce practice guidelines or support their development. Clinical practice guidelines, according to another IOM committee's definition (IOM, 1990c), are "systematically developed statements to assist the practitioner and patient in decisions concerning appropriate health care for specific clinical circumstances." This and a forthcoming IOM report, Guidelines for Clinical Practice: From Development to Use, call attention to the following needs: information on costs and outcomes; a rigorous, open, and documented development process; a broadly representative, multidisciplinary process of development and review; and a systematic plan for scheduled review and reassessments.

Clinical practice guidelines are one vehicle for disseminating the results of technology assessment, and technology assessment is one method of producing information for a practice guideline. In particular, clinical practice guidelines may use the synthesis of available evidence and projection of outcomes that are a part of technology assessment as a foundation for statements that are clinically useful in individual patient care. Good practice guidelines go one step further, however, to rely on expert consensus to develop practical advice for clinicians in situations not directly addressed by clinical research.

What further distinguishes practice guidelines from technology assessment is the requirement that guidelines very carefully and explicitly describe the thinking that links the evidence (that is, the product of the technology assessment), or the lack of evidence, with the advice. Nonetheless, because technology assessment is so closely related to the development of practice guidelines, the priority-setting process proposed in this report appears to be largely, if not completely, applicable to guidelines development as well.

POTENTIAL PROBLEMS WITH THE PRIORITY-SETTING PROCESS

There are some potential problems with the process proposed in this report, but the committee believes that most of them stem from misperceptions about the use of a quantitative model to calculate a priority score. The great advantages of the model process are that it is explicit, that it contains a representation of the values of society, and that it defines the information-gathering tasks involved in setting priorities. Balanced against these advantages are three main concerns.

Will a Numerical Priority Score Lead to Unrealistic Inferences About Priority?

The output of the model will be a priority score that can be calculated to several decimal places if necessary. Although the model encourages precise thinking about the factors that are important in setting priorities, it is not (and cannot be) a more precise tool than the data used to estimate criterion scores. Several of the criterion scores are numerical representations of subjective judgments. The definitions of the criterion scores, as described in Chapter 4, are precise to encourage panelists to adopt the same set of assumptions when they make subjective judgments. But a criterion score is precise only if it has a small coefficient of variation across all panel members.

The risk of imputing false precision to a priority score is that it may lead to erroneous inferences that one of two candidates with similar (but not identical) priority scores has a stronger claim to priority because of its higher score. Nevertheless, the possibility that such a false judgment could occur is not a weakness of the priority-setting process. An organization might counteract such inferences by grouping candidates with similar scores and making choices among them, if need be, on the basis of other criteria (e.g., required timeliness or the expected cost of the assessment).

Does Codifying an Idealized Process Lead to Inflexibility?

This report has emphasized the way in which the proposed process can take into account the factors that should be important in deciding whether to assess a technology. Is there a risk that this process is too precise for the political climate of technology assessment? Does the system need more "give" than is provided by a quantitative model that generates a priority score? The committee argues, rather, that an explicit process facilitates open discussion. Furthermore, the rank-ordered list (or, if preferable, the groupings of candidate topics with similar scores) should be understood as no more than one kind of information to inform a political process by which to choose the final set of topics for technology assessment.

Will There Be a Bias Toward Choosing Topics That Are Quantifiable?

As Freymann (1974) has noted, "The Cartesian physician tended to forget that not everything we can count counts, nor can everything that counts be counted." To calculate a priority score, the proposed system requires data. Does this requirement mean that topics for which data are not available will be less likely to be assessed? Perhaps so, but a close look at the criteria suggests that this danger is more apparent than real.

First, four of the criteria do not require data. These subjective criteria require the panelist to make a subjective estimate and to express it on a scale of 1 to 5. Estimating a score for these criteria will not cause systematic bias against certain topics because the estimation problem will exist for all topics.

Second, of the remaining three criteria, one is the expected unit cost of the procedure for managing the condition; another is the prevalence of the condition. Analysts should be able either to collect or to estimate these data fairly easily.

The last criterion, the coefficient of variation of use rates, will be the most difficult in terms of data collection because it requires that the clinical condition or procedure be used on a wide enough scale to calculate meaningful use rates. The administrative data sets that many investigators study have a substantial advantage in the investigation of rare conditions because the largest ones can contain almost the entire population of such events in the United States. In the worst case—no available data on variation in use rates—the panel would simply assign the mean coefficient of variation of all other candidate topics.
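
The fallback for missing use-rate data described above, assigning the mean coefficient of variation of the other candidate topics, amounts to a simple imputation; a hypothetical sketch follows.

```python
# Hypothetical sketch: impute a missing coefficient of variation with the
# mean of the known values for the other candidate topics.
from statistics import mean

cv_by_topic = {"topic A": 0.42, "topic B": 0.18, "topic C": 0.55, "topic D": None}

known = [cv for cv in cv_by_topic.values() if cv is not None]
fallback = mean(known)
cv_filled = {t: (cv if cv is not None else fallback) for t, cv in cv_by_topic.items()}
print(cv_filled)
```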

CONCLUSION

Although this committee has recommended a specific step-by-step methodology as a priority-setting process, it believes that the four principles noted earlier in this report are far more important than the specifics of its model. First, the entire enterprise must be consistent with the mission of the organization. Second, the results of the process should be consistent with the needs of the user and should provide information in the form that is most useful. Third, the process should be efficient, especially in instances in which it must share resources with technology assessment itself. Fourth, the process must consider the political, economic, and social constraints that will affect how the information can be used. In the case of OHTA, satisfying the first principle will require determining which assessments are most likely to result in improvement in the health of the public, reduction of inappropriate health care expenditures, reduction of inequities in access to effective health care services or of maldistribution across equally needy populations, and the informing of other ethical, legal, and social issues.

OHTA and other organizations may wish to modify some of the components of the process as proposed. Experience with using this method or others will provide a sound basis for change, and organizations should constantly reexamine their methods for setting priorities. When making any changes, these groups should consider carefully whether modifying a given element might adversely affect the performance of the entire process.

In proposing a strategy for an optimal priority-setting process, the committee realizes that funding for technology assessment is already constrained and that its proposed priority-setting system will require some additional resources. Given the potential value of priority setting, however, the funding for this effort appears to be justified.

The committee views its report as a strategic effort to look ahead to reasonable goals for AHCPR and OHTA and to create a process that will be credible, sound, and defensible. During the process of compiling data for the quantitative model, OHTA will create a valuable data base and a ranking of priorities; both will be important resources for other organizations as well as for OHTA itself. Indeed, such a program could lead not only to wise use of public and private resources for technology assessment but also to an increase in public support for the entire technology assessment process.
