Setting Priorities for Health Technology Assessment: A Model Process

2
Methods for Priority Setting

Every organization engaged in technology assessment must choose how to use its assessment resources—either through an informal, implicit priority-setting process or by a more formal method that uses specified criteria and available scientific data. The goal of technology assessment varies with the organization conducting it: a medical professional organization assesses technologies to help its members make clinical decisions; information from technology assessment enables a device or pharmaceutical manufacturer to demonstrate the safety and efficacy of its products; technology assessment in the insurance industry supports reimbursement decision making; integrated health care delivery systems (e.g., hospital systems, health maintenance organizations) use the results of assessments to make capital investment decisions and to adopt common clinical management strategies.

This chapter describes how several organizations set priorities for assessments, summarizes models proposed by researchers, and considers how each might contribute to a model process appropriate to the Office of Health Technology Assessment (OHTA). Taken as a group, the examples are not intended to be an exhaustive survey of priority-setting methods but to indicate the range of approaches and features considered by the committee in developing its model and formulating its recommendations. (Criteria reported by eight assessment organizations when deciding which technologies to assess can be found in Appendix B of the IOM report from the Council on Health Care Technology [IOM, 1990f].)



Copyright © National Academy of Sciences. All rights reserved.


PRIORITY-SETTING PROCESSES USED BY ORGANIZATIONS

Example 1: Health Care Financing Administration Bureau of Policy Development

Technology assessment is conducted for many reasons, one of the most common being to support reimbursement policy. Coverage determination issues often surface because technologies are expensive, are likely to raise safety concerns, or are likely to be overused. Third-party payers need to determine whether, and at what point, to cover new technologies. Although the legislative complexity of Medicare necessitates procedures that are more complex than those of private payers, the function of making coverage decisions is a common one.

Requests for assessment to OHTA come from the Bureau of Policy Development (BPD) in the Health Care Financing Administration (HCFA). Because these requests have historically been the genesis of OHTA's workload, it is useful to examine the process that produces them. BPD becomes involved in a small proportion of all questions related to Medicare coverage, focusing on those that are most difficult to resolve and that are of national significance. (Lewin and Associates [1987] and the appendix to this chapter describe the HCFA coverage determination process in greater detail.) In most instances, Medicare fiscal intermediaries are able to resolve claims coverage questions within existing national policy or by referring questions to HCFA regional offices; questions that cannot be resolved at the regional level are referred to the central office. With increasing political pressure on HCFA to achieve uniform contractor coverage, however, requests to HCFA's BPD are becoming more common. Once a request for coverage has reached BPD, that office decides whether a national coverage decision is appropriate. If so, BPD prepares a background paper for review by the HCFA Physicians Panel.

Health Care Financing Administration Physicians Panel

The physicians panel serves in an advisory role to BPD. Using a set of implicit criteria (e.g., medical and national significance, potential for high cost and rapid diffusion, uncertainty about safety and effectiveness) and considering the background information provided by BPD staff, the panel decides either to recommend that no national coverage decision be made or to refer the technology to OHTA.

Reevaluation or Assessment of Established Technologies

HCFA might also evaluate a service that is already excluded from or covered under the Medicare program. Because most covered technologies have never been assessed formally by OHTA, these evaluations are not reassessments as defined by this committee (although they fit the terminology used by Banta and Thacker [1990]). They might be termed "reevaluations" or, more accurately, "new assessments of established technologies." The purpose of such assessments is to remove obsolete technologies, clarify inappropriate use of otherwise acceptable technologies, and enhance appropriate use of technologies. Publication of clinical studies may prompt such assessments if the findings are inconsistent with current coverage policy or if a service is considered obsolete. Currently, a HCFA-proposed rule (Federal Register 54:4306, 1989) concerning reasonable and necessary services would treat the assessment of established technologies in the same way as the evaluation of new technologies, except that a notice announcing HCFA's intent to evaluate and requesting comments would be published in the Federal Register. Interested parties could thus also request reconsideration and submit evidence published after the initial coverage decision.

In summary, issues reach BPD, and hence OHTA, by a process in which requests for coverage to fiscal intermediaries are filtered through the regional offices before reaching BPD. BPD decides from time to time (on the basis of stated criteria) that a technology assessment may be needed, but it does not have a priority-setting process for making these decisions (National Advisory Council on Health Care Technology Assessment, 1988).

Example 2: Private Sector—Pharmaceutical Industry

Criteria for Assessment

Pharmaceutical companies¹ exemplify organizations that need to determine how to use resources for biomedical research and development.² The top tier of research-intensive pharmaceutical companies, which comprises fewer than 10 companies worldwide, sets assessment priorities for research, development, and testing of compounds on the basis of a demonstration of scientific and market opportunity. Scientific opportunity includes the likelihood of significant clinical benefit. Market opportunity involves several considerations, including those of the Food and Drug Administration (FDA), market-entry hurdles, stockholders' acceptance of long- and short-term research strategies, and returns on investment.

¹ This section is based on information provided by committee member Glenna Crooks.
² Innovation in medical devices is a strikingly different process. Innovation in some devices involves radical new capabilities, but most often it involves modifying, upgrading, and improving existing devices by a process in which engineering problems are solved or a technology is adapted for a new use or setting. Innovation often originates with clinicians themselves and seldom depends on the results of long-term research in basic science (Roberts, 1988).

Criteria for Reassessment

Regulatory agencies worldwide require pharmaceutical companies to conduct continuing studies of their products, including, in some countries, postmarketing surveillance. Determining when reassessment is warranted may also require epidemiologic studies of the diseases treated by a company's products to ensure that condition-related adverse events are distinguished from those related to administration of the drug. Other factors related to the clinical and market environment, including new (sometimes called off-label) uses of a product, may also prompt industry reassessment. Any of these activities may require primary data collection (e.g., surveying physicians about their uses of a product) or analysis of secondary data. Pharmaceutical companies sometimes establish external advisory groups to decide when reassessment is warranted. Reviews may be either scheduled or unscheduled and are sometimes prompted by an external event such as new information in the published literature or reports from the field on physician experience with a product.

Internal Process of Priority Setting

An assessment team of senior managers from the company's basic, developmental, and clinical research divisions reviews and evaluates research and development priorities for specific new chemical entities and potential products.
Key research data on each potential product are reviewed at monthly meetings at which the team decides whether to proceed with, alter, or discontinue a particular program. Senior management reviews strategic, or long-range, priorities in pharmaceutical development. A development review team reviews the data on compounds it proposes to develop, together with target dates for delivery of each project, and makes a final decision on development. It is reasonable to estimate that such companies use 1 to 2 percent of their research and development budgets for such strategic planning.

Thus, private-sector pharmaceutical manufacturers conduct assessments in response to several circumstances: when there is a regulatory requirement or when a new compound is under development. In the latter case, scientific and market opportunity are assessed repeatedly so that a timely decision can be made regarding further development.

Example 3: Health Care Provider Organizations

Many other private-sector entities, including medical specialty societies, medical group practices, hospitals, and health maintenance organizations, conduct technology assessment. Two of the better-known programs are the Clinical Efficacy and Assessment Program (CEAP) of the American College of Physicians (ACP) and the American Medical Association's (AMA) Diagnostic and Therapeutic Technology Assessment (DATTA) program. Other programs include the Blue Cross/Blue Shield Medical Necessity Project and programs sponsored by the American College of Surgeons, the American College of Radiology, and the Council of Medical Specialty Societies.

The CEAP has been active since 1981. The program seeks nominations of technologies for assessment from the 68,000 members of the ACP, who are specialists in internal medicine. The college uses a process in which CEAP committee members evaluate each candidate topic on each of several criteria: whether good-quality syntheses have been performed recently; the clinical impact of the technology; estimates of the aggregate costs associated with the technology; relevance of the technology to internists; the degree of uncertainty among practicing physicians regarding appropriate use of the technology; adequacy of the knowledge base for an assessment; and the likelihood that an assessment will result in altered practice patterns (Linda White, Director, Scientific Policy Department, ACP, personal communication, October 1991). CEAP assessments include new and emerging technologies and common diagnostic tests.
The AMA's DATTA program answers questions about the safety, effectiveness, and clinical acceptance of medical technologies. It assesses primarily new diagnostic and therapeutic procedures and technologies and occasionally reassesses experimental technologies if new evidence becomes available (Lewin and Associates, 1987). DATTA receives requests for assessment from individual clinicians, and it also surveys program subscribers and certain interested groups to elicit assessment topics. It then sets priorities implicitly using three criteria: potential impact on a substantial patient population, controversy in the medical community, and availability of scientific data (AMA, 1988; William McGivney, former director of the DATTA program, personal communication, 1991).

Example 4: Institute of Medicine/Council on Health Care Technology Pilot Study

The work of the IOM Council on Health Care Technology (IOM/CHCT) pilot study group is another example of priority setting. As described in Chapter 1, in 1989-1990 the National Center for Health Services Research (NCHSR) charged a panel of the CHCT to develop national priorities for technology assessment. That effort resulted in the IOM (1990f) publication National Priorities for the Assessment of Clinical Conditions and Medical Technologies: Report of a Pilot Study. The pilot study focused on developing a method for selecting both conditions and individual technologies of high priority for assessment. The study considered its final list of 20 conditions and technologies (which was not rank ordered) to be illustrative of its process rather than a definitive list of priorities.

Methods used in the study included participation by providers, insurers, and scientists. The broadest level of participation occurred at the point of soliciting topics for consideration, with a deliberate effort by the IOM to reach out to an array of stakeholders. Fourteen assessment organizations—representing academic institutions, government agencies, health care product manufacturers, health care provider organizations, and third-party payers—submitted candidate topics that each considered to be of very high priority for assessment. The list was augmented by topics suggested by the committee.
IOM staff reduced the long list of suggested topics by combining closely related issues under comprehensive headings; as a result, the pilot study's candidate conditions and technologies were formulated at a high level of aggregation (e.g., "coronary artery disease" instead of "acute myocardial infarction" or "coronary arteriogram"). The committee then conducted two rounds of mail balloting and convened to produce the final list (Table 2.1). Each committee member implicitly took into account several primary and secondary criteria to produce a rank-ordered list of that member's highest ranking topics. Primary criteria ("important and readily quantifiable characteristics") included the potential for an assessment to improve individual patient outcomes, to affect a large patient population, to reduce unit or aggregate costs, and to reduce unexplained variations in medical practice. Secondary criteria represented a "spectrum of factors and issues," including the potential to address social and ethical implications, to advance medical knowledge, to affect policy decisions, and to enhance the national capacity for assessment. In sum, the committee used explicit criteria and a formal process but applied them implicitly to rate individual conditions and technologies.
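The balloting described above amounts to aggregating members' individual rankings into a single group list. The report does not specify the committee's exact tallying rule, so the sketch below uses a common Borda-style scoring rule purely for illustration; the ballot contents are hypothetical.

```python
from collections import defaultdict

def aggregate_ballots(ballots, top_n):
    """Combine rank-ordered ballots into a group shortlist.

    ballots: list of lists, each a member's topics ranked best-first.
    Borda-style scoring: a topic scores (ballot length - position).
    """
    scores = defaultdict(int)
    for ballot in ballots:
        for position, topic in enumerate(ballot):
            scores[topic] += len(ballot) - position
    # Highest total score first
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

# Hypothetical ballots from three committee members
ballots = [
    ["Coronary artery disease", "Breast cancer", "Low back pain"],
    ["Breast cancer", "Coronary artery disease", "Osteoporosis"],
    ["Coronary artery disease", "Low back pain", "Breast cancer"],
]
print(aggregate_ballots(ballots, top_n=2))
# → ['Coronary artery disease', 'Breast cancer']
```

A rule of this kind makes the group ranking reproducible from the individual ballots, which is one reason formal balloting is preferred to open discussion alone.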

Table 2.1 List of 20 Assessment Priorities Generated by the IOM/CHCT Priority-Setting Group (in alphabetical order)

Clinical Conditions:
Breast cancer
Cataracts
Chronic obstructive pulmonary disease
Coronary artery disease
Gallbladder disease
Gastrointestinal bleeding
Human immunodeficiency virus infection
Joint disease and injury
Low back pain
Osteoporosis
Pregnancy
Prostatism
Psychiatric disorders
Substance abuse

Technologies:
Diagnostic imaging technologies
Diagnostic laboratory testing
Implantable devices
Intensive care units
Organ transplantation and replacement

Note: IOM/CHCT = Institute of Medicine Council on Health Care Technology. Using a two-round modified Delphi approach, the priority-setting group chose 20 national assessment priorities from a list of 496 candidate topics. In identifying these priorities, the group considered the alternative medical technologies that may be used for each of the priority clinical conditions and the multiple clinical indications for the priority technologies. This list of priorities represented a preliminary set of general assessment areas.

Example 5: Food and Drug Administration

FDA establishes priorities for the evaluation of new drug applications, and of information submitted about the safety and efficacy of new devices, as the applications are received. It bases its priority setting on (1) the agency's prospective estimate of the level of clinical need for a new chemical entity, (2) the availability of some existing technology to treat that clinical need, and (3) FDA's best judgment (using a three-point scale) about what the new drug or therapy will add to the therapeutic armamentarium. FDA reviews all new drugs and biologicals at the "front end" for approval. Device review is conducted under the authority of the Medical Device Amendments of 1976. That legislation (21 U.S.C. 360c) authorizes the agency to regulate all medical devices to ensure that these products are safe and efficacious. The law created a three-tier classification scheme in which only those devices that pose the most significant safety risks must meet premarketing approval standards equivalent to those for new drugs. Devices that fall into each category are listed in the Federal Register.

QUANTITATIVE MODELS FOR SETTING PRIORITIES

Two sets of researchers have proposed quantitative approaches to priority setting that use explicit criteria and empirical evidence to estimate the relative importance of assessing a set of technologies (Eddy, 1989; Phelps and Parente, 1990). David Eddy developed the Technology Assessment Priority-Setting System (TAPSS) for the Methods Panel of the Council on Health Care Technology of the Institute of Medicine; Charles Phelps and Steven Parente developed a different type of quantitative model for the same body. The purposes of these models are to structure thinking, to identify the relative importance of the different elements in setting priorities, and to provide a framework for evaluating the effect of different assumptions on priority rankings.

Example 6: Technology Assessment Priority-Setting System

TAPSS is a quantitative model that combines three variables: (1) the population affected, (2) the economic importance of a technology, and (3) the impact of an assessment on the health and economic outcomes for a population. The impact of an assessment is determined by a chain of events that includes the likelihood that an assessment will change the use of the technology, the number of patients whose care will be changed, and the effect of such a change on the health of an individual patient (the "marginal effect"). Eddy's formula includes terms for the size of the population that potentially will be affected; the proportion of the affected population in different regions of the country (e.g., differences owing to geography, practice setting, or access); clinical characteristics of candidate technologies; the "Delta" results (the result of an assessment that can potentially cause a change in the use of the technology); "periods" (change in the use of the technology over time); and the effect of the technology on patient outcomes.
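The chain of events at the heart of TAPSS—likelihood that an assessment changes practice, number of patients whose care changes, and marginal effect per patient—can be illustrated with a simplified expected-value calculation. This sketch captures only the general logic, not Eddy's actual formula; all variable names and numbers are hypothetical.

```python
def expected_assessment_impact(population, p_use_changes, marginal_effect):
    """Rough expected health impact of performing an assessment.

    population:      number of patients whose care could be affected
    p_use_changes:   probability the assessment changes use of the technology
    marginal_effect: per-patient outcome gain if use changes
                     (e.g., quality-adjusted life years)
    """
    return population * p_use_changes * marginal_effect

# Hypothetical comparison of two candidate technologies
impact_a = expected_assessment_impact(200_000, 0.30, 0.02)
impact_b = expected_assessment_impact(50_000, 0.60, 0.05)
# Despite its smaller population, technology B ranks higher because an
# assessment is more likely to change practice and each change matters more.
```

The multiplicative structure is the key point: a huge affected population contributes nothing to priority if an assessment is unlikely to change how the technology is used.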
Although Eddy's model does not include specific weights to be assigned to different outcomes, he indicates that weights can be employed in a separate, later step in the process (D. Eddy, personal communication, November 1991). He asserts that parameter estimates should be based on empirical sources if possible but that, when necessary, subjective judgments should be used. Eddy (1989:499) also cautions that the model does not provide precise answers but that it is "more accurate and accountable than attempting to perform the entire exercise implicitly and subjectively."

Example 7: The Phelps-Parente Model

In the Phelps-Parente model, calculation of a priority-setting index is based on three components: (1) aggregate spending (cost/unit × number of units); (2) the square of the coefficient of variation (an indication of clinical uncertainty and differences in practice style); and (3) a term that measures how much the incremental value of an intervention falls with increasing rates of intervention (inverse demand elasticity). The economist's incremental value curve shows that as populations are added for a screening technology, or as a technology such as breast cancer screening is used more frequently, the rate of use rises until the procedure is less and less likely to confer benefit. (Although this assumption may be valid in general, it may not be valid in any one specific clinical area; for example, mammography may not, in fact, be used by the population that is at greatest risk for breast cancer.)

This priority-setting model assumes, for the sake of simplicity, that the average rate of use is the correct rate, in part because one cannot know in advance of an assessment whether any other rate (higher or lower) is better. The "right" rate can be thought of as the rate at which incremental cost and incremental value are equal. For communities that are not at this "right" point, the dollar value to consumers of the difference between incremental cost and incremental value is called the welfare loss. One must further assume that much of the welfare loss is attributable to lack of information about the appropriate use of the technology and that appropriate use would, at least to some extent, increase as a result of a technology assessment. Because the model requires a measure of the unexplained variability in use of a particular technology, a technology must be in widespread use for it to be included in the Phelps-Parente priority-setting model. The model is thus particularly applicable to setting priorities for reassessment or for primary assessment of medical activities that are well established, but it cannot inform discussions of emerging or new technology.
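The three components above suggest a simple multiplicative index. The sketch below assumes a welfare-loss-style form (spending times the squared coefficient of variation, scaled by an inverse demand elasticity); the exact functional form and the factor of one-half are illustrative assumptions rather than a transcription of Phelps and Parente's published formula, and all data are hypothetical.

```python
import statistics

def priority_index(unit_cost, n_units, regional_rates, inverse_elasticity):
    """Illustrative Phelps-Parente-style priority index.

    unit_cost, n_units:  give aggregate spending on the procedure
    regional_rates:      per-capita use rates across communities; their
                         coefficient of variation proxies unexplained
                         practice variation
    inverse_elasticity:  how steeply incremental value falls as use rises
    """
    spending = unit_cost * n_units
    mean_rate = statistics.mean(regional_rates)
    cv = statistics.pstdev(regional_rates) / mean_rate
    # Welfare-loss-style index: costly, highly variable procedures with
    # steep value curves rank highest.
    return 0.5 * inverse_elasticity * (cv ** 2) * spending

# Hypothetical hospital-discharge-style data for two procedures
idx_a = priority_index(5_000, 40_000, [0.8, 1.2, 1.0, 1.4, 0.6], 1.0)
idx_b = priority_index(12_000, 30_000, [0.95, 1.05, 1.0, 1.02, 0.98], 1.0)
# Procedure A ranks higher: comparable spending but far more
# unexplained variation across communities.
```

Squaring the coefficient of variation means that unexplained practice variation dominates the ranking, which is exactly the model's premise: where practice varies most, the informational payoff of an assessment is largest.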
Phelps and Parente (1990) used hospital discharge data sets, such as those available from insurance claims and state hospital data bases, to demonstrate the use of the model. The model could also be applied to specific age- and sex-adjusted rates of procedures within a given diagnosis or hospital admission category. It is theoretically applicable in the ambulatory setting, although outpatient data tend to be incomplete. In sum, by estimating the welfare loss associated with the absence of information on a technology, the Phelps-Parente model offers a systematic way to derive rankings for priority assessment and to quantify the expected gains from eliminating unwarranted variation in medical practice patterns.

SETTING PRIORITIES FOR SPENDING ON HEALTH SERVICES

Example 8: Oregon Basic Health Services Act

Example 8 is not an example of priority setting for assessment. Nevertheless, because the Oregon Basic Health Services (OBHS) Act has some features that appear to be analogous to the IOM committee's priority-setting task, it is useful to compare the two. The purpose of the OBHS Act is to prioritize health spending by the Oregon Medicaid program by developing a "list of health services ranked by priority from the most important to the least important, representing the comparative benefits of each service to the entire population being served" (ORS 414.036 [4a]). Services are to be provided beginning with the highest ranked and proceeding down the list as far as the Oregon Medicaid budget allows. Thus, the Oregon process makes judgments about the value of services (a form of "technology assessment"); in contrast, the IOM process seeks to determine which assessments should be conducted first. Whether the Oregon exercise is ethical and has merit has engendered a good deal of public discussion (see Brown, 1991; Etzioni, 1991) and is not debated here. What is of interest, however, are the similarities and differences in approach that might help the committee identify possible pitfalls in implementing its model process.

The difference in purpose between the two methods means that far more detailed information is needed to decide which services are to be provided (as in Oregon) than to decide which assessments should be done. Like the IOM committee in considering assessment priorities, however, those implementing the OBHS Act believed it possible to establish a fair, open, and explicit way to discriminate among an array of possible services and to set priorities for state spending based on the greatest benefit to the health of the public served (Callahan, 1991). To that end, implementers of the OBHS Act have adopted four process elements that the IOM committee also sees as essential. First, to estimate potential benefit to the public, the OBHS process seeks public participation and uses a broadly representative panel called the Health Services Commission.
The commission is composed of five licensed physicians (with clinical expertise in the general areas of obstetrics, perinatal medicine, pediatrics, adult medicine, geriatrics, and public health, including osteopathy), a public health nurse, a social worker, and four consumers of health care. Second, implementers of the OBHS sought public consensus on criteria, or values, to guide its process. Third, the process has sought to estimate the marginal benefit of a given technology (the likely difference in outcome that would result with and without the service). Fourth, the OBHS process includes provision for a test of reasonableness to be applied to its rank-ordered list of services (Sipes-Metzler, 1991). Two additional issues that are also pertinent to the IOM priority-setting process have had to be considered by those implementing the OBHS: whether some issues are "so preeminent that they must trump their way to the top of any priority list" (Callahan, 1991:83) and how the system can respond equitably to interest groups that disagree with a technology's inclusion or exclusion from the list of covered services.
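The Oregon funding rule described above—provide services from the top of the ranked list down until the Medicaid budget is exhausted—is a simple greedy allocation. The sketch below illustrates it with invented service names and costs; stopping at the first unaffordable item mirrors the idea of a single cutoff line drawn through the ranked list.

```python
def fund_services(ranked_services, budget):
    """Fund services in strict priority order until the budget runs out.

    ranked_services: (name, estimated_cost) pairs, highest priority first.
    Returns the list of funded service names.
    """
    funded = []
    remaining = budget
    for name, cost in ranked_services:
        if cost > remaining:
            break  # cutoff: everything below this line goes unfunded
        funded.append(name)
        remaining -= cost
    return funded

# Hypothetical ranked list (names and costs are invented for illustration)
ranked = [
    ("Prenatal care", 40),
    ("Childhood immunization", 25),
    ("Appendectomy", 30),
    ("Experimental therapy X", 50),
]
print(fund_services(ranked, budget=100))
# → ['Prenatal care', 'Childhood immunization', 'Appendectomy']
```

Note how the entire burden of the decision falls on the ranking itself: once the list is fixed, the budget mechanically determines where the line is drawn, which is why the fairness and openness of the ranking process matter so much.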

DISCUSSION

Reactive and Implicit Processes

Many of the above examples of priority setting for technology assessment can best be described as reactive, implicit, and internal. They are reactive in that they respond, sometimes seriatim, to requests for assessment. They are implicit in that decision making about priorities, although guided by stated criteria, is largely the result of global judgments. They are internal in that experienced staff of the organization perform the ranking of the candidates for assessment.

For example, OHTA's role in relation to HCFA has been to respond to individual requests for assessment, using secondary literature synthesis to provide information for coverage decisions. With the establishment of AHCPR, however, OHTA has been given expanded responsibilities that reach beyond responses to such requests. OHTA has been asked (1) to set priorities for initial assessments of new or established technologies that might not be important to, or a high priority for, the Medicare population, and (2) to set priorities for reassessment of technologies that have been previously assessed by OHTA. In addition, HCFA and OHTA need a way to ensure that technology assessment funds for coverage determination purposes are used as productively and efficiently as possible.

HCFA's priority-setting method is an example of a reactive mechanism that sifts requests and responds to payers, manufacturers, physicians, or other users of a technology by judging when a threshold of "demand" for technology assessment has been crossed. Widespread publicity about autologous bone marrow transplantation for metastatic breast cancer, for example, might induce demand for assessment of that technology; another example of induced demand for technology assessment might be the development of a new device for cataract extraction for which the manufacturer wants Medicare coverage.
HCFA and private insurance companies alike use this priority-setting process, which is reactive and, in general, implicit. In their coverage decisions regarding new and emerging technologies, they may weigh potential expenditures most heavily in deciding which technologies to assess. Both public- and private-sector groups, however, have more candidates for assessment than they can accommodate, and all must operate within resource constraints.

Others in the technology assessment field also set priorities reactively. Professional organizations such as the American College of Physicians respond, in part, to the interests of their members. A manufacturer assessing the potential market for a device or pharmaceutical product may be primarily concerned with market size and with political and market hurdles such as reimbursement and pricing controls, as well as with the magnitude of the clinical need—that is, the likelihood that the company's product will have an impact on clinical care. Academic investigators may conduct assessments based on personal interest in a particular topic or on the availability of funds to support the research (Eddy, 1989).

Strengths and Weaknesses of Reactive Mechanisms

There are strengths to be acknowledged in implicit, reactive mechanisms. Their principal advantages are that they provide a timely response to "demand" and that the "hottest" or costliest issues are likely to be addressed first. A further strength is that this kind of priority setting uses the acumen and professional judgment of staff to identify technologies for assessment; as a result, few personnel and other resources are needed.

Some weaknesses of reactive mechanisms can also be identified. First, to the extent that the selection process is closed, it cannot be examined, challenged, or modified by outsiders. Second, the process is unlikely to take into account all perspectives because input depends on access to those who set priorities. Third, although those who engage in technology assessment may find it appropriate to focus on controversial issues, issues that capture passing public attention can overwhelm the process; as a result, the program may never address worthwhile, significant assessments that would add to the practical scientific base of medical practice. Fourth, although implicit estimates of the importance of an issue are necessary and useful when no valid data are available, an implicit method does not make systematic use of data when they are available. Fifth, because the process cannot be examined, it is less likely to be improved upon. Sixth, because of concerns about the costs of new (and frequently expensive) technologies and the political difficulties involved in assessing established technologies, there is a greater tendency to examine new technologies.
In contrast to assessments of new technologies, assessments of established technologies encounter strong economic and psychological disincentives to change practice, especially for practitioners and hospitals that are frequent users of the technology. Banta and Thacker (1990) argue persuasively, however, that technologies should be assessed several times during their life cycle.

The IOM/CHCT Process Compared with This IOM Study

The IOM/CHCT pilot study invited a large set of interested groups to nominate candidate technologies and conditions for assessment. In assembling these technologies for further consideration, the pilot study group emphasized the need to assess alternative choices for diagnosing or treating a clinical condition rather than assessing a medical technology taken in isolation from the medical conditions that constitute its clinical context.

The goal of the pilot study was somewhat different from that of this IOM study, however, and its product differs correspondingly. The product of the IOM/CHCT pilot study was a list of priorities that was intended to be valid for the health of the public in 1990; the product of this study is a method for priority setting that can be used at any time in the future. Unlike the IOM/CHCT pilot study, this report does not assemble a list of the top 20 priorities for assessment; rather, it describes an ongoing process for ranking specific candidates for technology assessment, such as might be needed by an organization with limited assessment resources that must choose among a series of possible candidates. The goal of this process is to marshal and use assessment resources to achieve the greatest improvement in the health of the public. To this end, the process must include operational definitions that can be used consistently by those who implement it. In a variety of ways, which are described in Chapter 4, the method presented in this report is more objective, explicit, and verifiable than that of the IOM/CHCT pilot study. Thus, this study differs from that study but has clearly evolved from it, and this committee acknowledges the path-breaking efforts of the earlier IOM/CHCT panel and the ideas described in its report.

Analytic Models

Both the Eddy and the Phelps-Parente analytic models specify criteria to be used in setting priorities and a formula for combining them; both emphasize the use of empirical data. The Phelps-Parente model uses only available epidemiologic, claims, and practice variation data, whereas Eddy's Technology Assessment Priority-Setting System (TAPSS) entails subjective estimates, including estimates of the probability that information will change behavior.
Both models start with health care technologies (rather than conditions): the Phelps-Parente model uses established technologies, and TAPSS includes both established and new technologies.

Strengths and Weaknesses of Analytic Models

Analytic models for priority setting share a number of features. The strengths of quantitative models are that they structure thinking, use data (including, eventually, the more humanistic measures of health status that are becoming available), open the process to review and accountability, and are amenable to examination and adjustment not only of the results but of the methodology itself. Overall, they move the technology assessment process closer to a realization of its potential for strengthening the scientific basis for decision making.

The use of models, however, is more complex and requires more resources and expertise than an implicit process that reacts to requests for technology assessment. Furthermore, analytic methods that simply insert values into a formula can be perceived as mechanistic and insensitive to human concerns. Another potential issue is that the use of data and ratings, even though subjectively derived, can appear more precise and authoritative than is warranted. Because the assessment process will affect the allocation of resources, the social and political values that will influence recommendations must also be addressed. For both these reasons, the committee emphasizes that any analytic model should use public input and professional judgment about the relative importance of criteria, the science base, clinical issues, and the political environment. It is also important to stress that the priority rankings established by means of an analytic model are inputs to a final decision process, not the final product of the process itself.

Need for a Comprehensive, Proactive Process for Priority Setting

What sort of process, then, would best serve the public interest? Although each assessment organization has its own goals, the public as a whole has an interest in the effects and use of medical technologies. Public agencies need a comprehensive, proactive process of public input to ensure the greatest gain to the health of the public from such technologies. The priority-setting process must be accountable to the public. It cannot be private, implicit, or internal to the organization conducting the assessment, and it must include a process to identify possible topics for action. OHTA's domain of possible topics for assessment is vast and includes many unevaluated procedures and devices whose original approval was based largely on physician acceptance as determined by decentralized fiscal intermediaries.
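The formula-based ranking that these models embody can be illustrated with a minimal sketch. The criteria, weights, and ratings below are hypothetical inventions for illustration only; they are not the actual criteria or formulas of TAPSS or the Phelps-Parente model. The point is only the mechanics of inserting values into a formula to produce a priority ranking.

```python
# Hypothetical weighted-criteria priority score: each candidate technology
# is rated on several criteria, and a weighted sum determines its rank.
# Criteria and weights are illustrative, not those of any actual model.

CRITERIA_WEIGHTS = {
    "disease_burden": 0.4,      # prevalence and severity of the condition
    "cost_impact": 0.3,         # aggregate cost to the health care system
    "practice_variation": 0.2,  # variation in use across regions
    "evidence_gap": 0.1,        # uncertainty an assessment could reduce
}

def priority_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

def rank_candidates(candidates: dict) -> list:
    """Return (technology, score) pairs, highest priority first."""
    scored = [(name, priority_score(r)) for name, r in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = {
    "technology A": {"disease_burden": 8, "cost_impact": 6,
                     "practice_variation": 3, "evidence_gap": 5},
    "technology B": {"disease_burden": 5, "cost_impact": 9,
                     "practice_variation": 7, "evidence_gap": 2},
}
ranking = rank_candidates(candidates)
```

Even this toy version shows why the committee insists that such rankings are inputs rather than final products: the output depends entirely on weights and ratings that embody judgments about relative importance, which must themselves be open to public and professional review.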
The agency must have a process not only to respond to requests for assessment but to identify possible candidates on its own—technologies that are newly emerging, existing technologies whose indications for use need better understanding, and technologies that may be obsolete. The identification of technologies that should be assessed requires a process that is free of bias. Determining which candidates should have the highest priority seems, like the assessment itself, to require a combination of scientific rigor and consideration of social values. An examination of the principles of priority setting for a public agency is useful in identifying the critical elements of a comprehensive, proactive process. Chapter 3 considers these principles.

SUMMARY

This chapter described several examples of priority setting: (1) HCFA; (2) a research-intensive pharmaceutical company; (3) the CEAP program of the American College of Physicians and the DATTA program of the AMA; (4) the priority-setting process used by the IOM's Council on Health Care Technology in its pilot study; (5) the FDA; two examples of quantitative models of priority setting—(6) David Eddy's Technology Assessment Priority-Setting System and (7) the Phelps and Parente model; and (8) the process developed under the Oregon Basic Health Services Act to set priorities for Medicaid spending. The committee drew on these examples to derive a set of principles for developing a process for OHTA to use in setting priorities.

Although individual assessment organizations may have various goals in assessment, the public as a whole has an interest in the effects and use of medical technologies. Public agencies need a comprehensive, proactive process of public input to ensure that technology assessment provides the greatest gain possible to the health of the public. In addition, priority setting must be accountable to the public. It cannot be private, implicit, or internal to the organization but must include a process that is open, fair, and credible to discriminate among the array of possible technologies it might assess or reassess.

There are a number of benefits to be derived from the use of analytic models—they structure thinking, use what data are available, and open the process to review and accountability and to examination and adjustment of both the results and the methodology. Such models move the technology assessment process closer to a realization of its potential for strengthening the scientific basis for decision making. The use of analytic models, however, is more complex and requires more resources (at least initially) and expertise than an implicit process that simply reacts to requests for technology assessment.
The committee also concluded that any analytic model must include a process to review its product and a way to address issues of equity and the unusual ethical and legal dimensions presented by health care technologies. Nevertheless, priority rankings established by means of an analytic model should be understood as inputs to a final decision process, not the final product of the process itself.

APPENDIX: MEDICARE COVERAGE DECISION MAKING

The Medicare program, which serves 33 million elderly and disabled beneficiaries and persons with end-stage renal disease, is the responsibility of the Health Care Financing Administration (HCFA) of the Department of Health and Human Services (DHHS). The Medicare statute provides broad authority to cover "reasonable and necessary procedures," but it does not provide an all-inclusive list of specific items, services, treatments, procedures, or technologies covered by Medicare; specifically, it does not list which medical devices, surgical procedures, or diagnostic or therapeutic services should be covered or excluded from coverage (Federal Register 54:4304, 1989).

When the Medicare law was enacted, Congress vested in the Secretary of Health and Human Services the authority to make decisions about which services are "reasonable and necessary" to diagnose or treat illness or injury or to improve function. Those statutory terms—translated in practice to "safe and effective," and neither "experimental" nor "investigational," based on authoritative evidence or general acceptance in the medical community—became the basis for payment (coverage) determinations. Over time, of course, many new technologies and procedures have been covered. "Experimental" and "investigational" technologies are, as noted above, not covered by HCFA (nor, typically, in the private sector); definitions of these terms, however, are variable and murky. There is increasing pressure to pay for (and thus assess) technologies that are not yet standard, established therapies (e.g., investigational Class C drugs for AIDS patients, which are approved by the FDA as investigational new drugs).

Coverage decisions are made in several ways—by local intermediaries and by HCFA, with or without an OHTA assessment. HCFA contracts with local, primarily insurance, companies to process and pay insurance claims from beneficiaries and providers. For Medicare Part A (the Hospital Insurance Program), these payers are known as fiscal intermediaries (FIs); for Part B (the Supplementary Medical Insurance Program), they are referred to as carriers. HCFA issues "national coverage decisions" regarding new technologies and procedures, sometimes after seeking a recommendation from the Public Health Service (PHS) and OHTA.3 Such decisions then become national policy.4 Coverage determinations are published in the Medicare Coverage Issues Manual and its accompanying instructions.
HCFA issues this manual to the FIs and carriers for claims adjudication and payment and to Medicare peer review organizations (PROs) for utilization and quality review. For the most part, however, HCFA gives the FIs, carriers, and PROs broad discretion on coverage determinations, and there is corresponding variation in what they actually accept and pay for (Lewin and Associates, 1987). Some of the lack of uniformity has been attributed to the absence of a legally binding compliance requirement, to insufficient information about specific technologies, and to difficulty in understanding HCFA coverage instructions. The process has been the subject of recommendations for improvement (Kinney, 1987; Lewin and Associates, 1987; National Advisory Council on Health Care Technology Assessment, 1988).

The Office of Inspector General (OIG; 1990) also found that carriers have difficulty identifying new technologies and are inconsistent in their coverage of the new technologies that are identified. According to structured interviews and written information, one-third of carriers have experienced major problems identifying new technologies; they depend most frequently on physician inquiries and less frequently on claims submissions. Often, new technologies are not identified because they are given the claims payment codes of current technologies. A new technology may be identified when it does not fit payment instructions, when it is uncoded, when it is given an unrecognizable code, or when the level of reimbursement is challenged by the physician. Manufacturers are sometimes a source of identification. In the case of FIs, although some consider patient benefit, safety, and effectiveness in making coverage decisions, 73 percent of those interviewed used professional acceptance as a major criterion when making decisions. Fewer than 10 percent of the FIs who were interviewed cited cost-effectiveness as a major criterion. The OIG report recommended that HCFA cooperate with the PHS in proactively and routinely compiling information on new health care technologies and rapidly disseminating it.

During any given year, contractors, Medicare beneficiaries, physicians, equipment manufacturers, public officials, professional associations, or government entities request national coverage policy determinations for some 20 to 30 different technologies. The Coverage/Payment Technical Advisory Group (TAG), composed of medical directors and other officers of the carriers and intermediaries, also raises coverage questions.

3. A technology is considered generally accepted if (1) research and investigations are complete, (2) the technology has demonstrated value for diagnosis or treatment, (3) it is in general use for patient care, and (4) if relevant, it has been approved by the FDA (although FDA approval is not required for all devices).

4. The Omnibus Budget Reconciliation Act of 1987 requires quarterly Federal Register notices that list all manual instructions, interpretative rules, statements of policy, and guidelines of general applicability to the Medicare and Medicaid programs. Coverage decisions do not normally require notice-and-comment rule making.
All such requests go to the Bureau of Policy Development (BPD) in HCFA. Once a question of coverage has been raised, BPD considers a technology for national policy determination if it meets one or more of the following criteria (Federal Register 54:4305 and 4318, 1989):

- The technology represents a significant advance in medical science.
- It can be described as a new product (for which there is no similar technology already covered by Medicare).
- The technology is likely to be used in more than one region of the country.
- It is likely to represent a significant expense to the Medicare program.
- It has the potential for rapid diffusion and application.
- There is substantial disagreement among experts regarding the safety, effectiveness, or appropriateness of the technology.
- The technology has been treated inconsistently by different contractors and fiscal intermediaries, and a conflict can be resolved only by a national decision.
- The technology was commonly accepted in the past but appears to have become outmoded, or its safety and effectiveness are in question.

If BPD decides that a coverage decision is not appropriate (for instance, the technology applies to a very rare medical condition or is still in an emerging, preliminary form), that office may still provide information to contractors, which is not binding on their decisions, about the opinions of other third-party payers, specialty societies, or recognized medical authorities. If the question is deemed appropriate for a national coverage decision, BPD conducts a literature search, consults with the Food and Drug Administration (FDA) on the status of any FDA action, and meets with interested parties. Finally, BPD staff prepare a background paper for review by the HCFA Physicians Panel.

The Physicians Panel (composed, for the most part, of HCFA physician employees) serves in an advisory role to BPD—the panel cannot itself make a coverage determination. After considering the background information, the panel decides whether (1) to recommend that no national coverage decision be made, (2) to refer the technology question to OHTA on an "inquiry" basis,5 or (3) to refer the technology to OHTA for a full assessment. The following criteria are among those used to decide whether to refer a coverage question to OHTA for assessment (Federal Register 54:4306, 1989):

- significant expenditure (e.g., potential for rapid diffusion to a large patient population or high costs on a per-case basis);
- an adequate scientific data base; and
- prior FDA approval, if relevant (i.e., the technology is a drug, biologic, or medical device that requires approval).

In 1986, the Administrative Conference of the United States recommended that DHHS introduce more "openness and regularity into the procedure for issuing 'national coverage decisions' pertaining to new medical technologies and procedures ...
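The BPD screen described above is a simple any-of test: a technology qualifies for national policy consideration if it meets one or more of the listed criteria. The sketch below restates that logic; the flag names are shorthand invented for illustration and do not reflect any actual HCFA procedure or data structure.

```python
# Shorthand flags for the BPD criteria listed above; the names are
# invented for illustration, not drawn from any HCFA system.
BPD_CRITERIA = (
    "significant_advance",
    "new_product",
    "multi_region_use",
    "significant_expense",
    "rapid_diffusion_potential",
    "expert_disagreement",
    "inconsistent_contractor_treatment",
    "possibly_outmoded",
)

def warrants_national_determination(flags: set) -> bool:
    """A technology is considered if it meets one or more criteria."""
    return any(c in flags for c in BPD_CRITERIA)

# A technology flagged only for expert disagreement still qualifies;
# a technology meeting none of the criteria does not.
```

The any-of structure matters: unlike the weighted models discussed earlier, this screen does not rank technologies against one another; it only sorts them into those that do and do not merit a national policy determination.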
[and] in the process by which the HHS Office of Health Technology Assessment supplies recommendations to HCFA. . ." (Federal Register 51:46987-46988, 1986). In the Federal Register of April 29, 1987, HCFA described its process for making coverage determinations and sought comments. In January 1989, following a legal challenge arising from a Medicare coverage issue (Jameson v. Bowen, C.A. No. CV-F-83-547-REC USDC [E.D. Cal.]), HCFA issued a proposed rule to establish criteria and procedures by which health care technologies could be considered "reasonable and necessary" (Federal Register 54:4302-4318, 1989). The proposed rule solicited comments on, among other topics, (1) criteria for coverage decisions and "the identification and selection of health care technologies for national coverage decisions," and (2) "methods for assuring appropriate public participation in the various phases of the technology assessment process." The rule also proposed that cost-effectiveness of technologies be a criterion for coverage (see Leaf, 1989). At the time of this writing, the proposed rule is still pending. Based on the Notice of Proposed Rule Making, the final rule will likely require not only that a technology be reasonably safe, demonstrably effective, noninvestigational, and acceptable to the medical community, but also cost-effective.

5. The panel might make a recommendation for an OHTA "inquiry" if it is unsure whether sufficient evidence is available or if only limited information is needed.