Setting Priorities for Health Technology Assessment: A Model Process

1 Technology Assessment and the Need for Priority Setting

Clinicians, payers, and policymakers turn to technology assessment to help provide better information—to assist decision making in clinical care, to guide coverage decisions, and to set national health policy. Technology assessment can play a valuable role throughout the process of improving health and health care. For example, an assessment may show that the data needed for a complete evaluation of a technology are not available; this finding may serve as an impetus to initiate research to supply the missing information. Similarly, an assessment may lead to changes in practice norms when it yields a conclusion that differs from common clinical behavior. Deciding which of the myriad medical technologies require assessment—and at what point—is a necessity. Even with unlimited funds, it would not be feasible to evaluate all health care technologies; it would still be necessary to identify which assessments should have priority. With limited resources, allocating technology assessment funds carefully is essential, and the choices must be defensible. The purpose of this report is to describe such a process—specifically, a priority-setting process for a federal agency, the Office of Health Technology Assessment (OHTA) of the Agency for Health Care Policy and Research (AHCPR). Its broader objective, however, is to propose a process with wider applications and general utility to other organizations that must set priorities for health technology assessment. This chapter describes how the establishment of AHCPR supports clinical evaluation and how the expanded role of OHTA prompted this Institute of Medicine (IOM) study. It also discusses the committee that was constituted to respond to this request and the methods and terms the committee used to develop the process proposed in the report.
EVOLUTION OF TECHNOLOGY ASSESSMENT TOWARD OUTCOMES, EFFECTIVENESS, AND APPROPRIATENESS RESEARCH

Health care technology encompasses a wide range of items and services that support clinical practice; it comprises an extensive number of well-established technologies and newly emerging ones. The technologies may include materials from a variety of industries, adaptations of technologies for use in new health settings, replacement of damaged organs and tissue using new or modified procedures and materials, and systems that integrate and monitor information. In what has been called an American "technocopia," such technologies include numerous new and anticipated applications drawn from space and materials technology, the human genome project, and biological research. These may result in genetic engineering applications and new generations of genetic "super drugs." Other technologies may extend preventive and diagnostic techniques to self-care, home, and ambulatory care settings. For example, biosensors and implantable materials for delivering therapies and monitoring the body, as well as the miniaturization of devices, permit treatment to be moved from the hospital to a patient's home or the doctor's office. This flexibility greatly increases the possible range of settings for care and in some cases may decrease the invasiveness of procedures (e.g., new surgical techniques that use small incisions). Other technologies have emerged from work on artificial intelligence systems and from software that assists in monitoring, diagnosis, and therapy—an example is three-dimensional diagnostic imaging. At a multipatient level in the informatics area, health care technologies include microcomputer-integrated clinical management and information systems (Coile, 1990; Misener, 1990).
Ingenious applications such as these seem to hold great promise, and health care technology is often praised for improving medical care. At the same time, it is blamed for fueling the rise in per-capita health care expenditures (Altman and Blendon, 1979; Schwartz, 1987; Ginsberg, 1990). As the costs of health care continue to increase well beyond the rate of inflation in other sectors of the U.S. economy, society has devised methods to control these costs. Yet across-the-board efforts to control the use of procedures and other health care technologies—for example, through administratively imposed caps or cuts in services and programs—have been accompanied by warnings from some health care sectors about the danger these efforts pose to quality and access to care. Two separate, but related, areas of research—variations research conducted by John Wennberg and others and appropriateness research conducted by Robert Brook and his colleagues—have led policymakers and health services researchers to argue that efforts to control costs should focus on encouraging selective use of technologies; that is, (1) identifying and then encouraging appropriate uses of technologies and (2) discouraging inappropriate uses. Physicians often disagree about the optimal use of diagnostic tests and treatment even for common conditions and well-established therapies (Mushlin, 1991). Wide variations in uses of technologies (see, e.g., Wennberg and Gittelsohn, 1973, 1975, 1982; McPherson et al., 1982; Paul-Shaheen et al., 1987) are thought to be due, at least in part, to such disagreement and uncertainty about their appropriate use (Eddy, 1984; Eddy and Billings, 1988; Ellwood, 1988; Moskowitz et al., 1988; Holohan et al., 1990). A separate body of published evidence on appropriateness has indicated that a significant amount of money is spent in the United States on technologies that are ill suited to the needs of patients and even at times harmful (Moloney and Rogers, 1979; McPhee et al., 1982; Brook and Lohr, 1986; Chassin et al., 1986, 1987; Merrick et al., 1986; Park et al., 1986; Winslow, 1988a,b; Brown et al., 1989). Medical leaders are convinced that appropriate medical and reimbursement decision making requires a better understanding of the value of new or well-established clinical practices, which might be gained from an evaluation of the outcomes of clinical practices in the settings in which they are used (Fuchs and Garber, 1990). Such efforts toward more rigorous evaluation of medical practice are variously called outcomes and effectiveness research, evaluative clinical science, and clinical evaluation (Lohr, 1988; Relman, 1988; Gelijns, 1990; Wennberg, 1990).
Effectiveness research has become an important concept in the rapidly evolving field of technology assessment, which in the past has focused on studies of clinical efficacy.1 The efficacy approach describes results obtained under controlled conditions with carefully chosen patient populations, indications, and settings. Effectiveness research, on the other hand, measures the usefulness of technologies in day-to-day clinical practice. In addition to the more traditional outcomes measured in clinical trials, such as physiological and anatomical change, effectiveness research focuses on other outcomes that are also relevant to patients and clinicians—for example, health status, functioning, and quality of life. Many people believe that effectiveness research will provide physicians with tools for selecting the patients for whom a technology is most likely to provide benefits that are important in day-to-day living.

1 According to Banta and colleagues (1981), efficacy is a measure of the probability of benefit to individuals in a defined population from a medical technology applied for a given medical problem under ideal conditions of use.
The Effectiveness Initiative and Establishment of the Agency for Health Care Policy and Research

Three recent events indicate the attention and interest directed by the federal government toward effectiveness research as a way to address the nation's growing concerns about quality, effectiveness, and the escalating costs of health care. First, in 1988, William Roper, then administrator of the Health Care Financing Administration (HCFA), introduced the Effectiveness Initiative within that agency (Roper et al., 1988; Relman, 1988). This initiative sought to identify "what works in the practice of medicine" and to use this information to improve patient care. Roper and his colleagues described an overall approach with three elements: (1) facilitating the use of the large administrative Medicare data sets to monitor trends in the use of services and to analyze geographic variations in the use and outcomes of services, (2) supporting research, and (3) providing this information to clinicians. In support of this initiative, the IOM held a series of workshops to determine which medical conditions should receive highest priority (IOM, 1989a, 1990a,b,d,e). Second, in 1988, John Wennberg and others prompted the National Center for Health Services Research and Health Care Technology Assessment (NCHSR) to establish the Patient Outcomes Assessment Research Program. Through this program, NCHSR funded a set of multidisciplinary research studies, focused on particular clinical conditions, to assess the outcomes and effectiveness of alternative health care interventions. Third, by means of an amendment to the Public Health Service Act in the Omnibus Budget Reconciliation Act of 1989 (Public Law 101-239), Congress established within the Public Health Service the Agency for Health Care Policy and Research (AHCPR), which superseded NCHSR.
(For further details on the functions of AHCPR, see the appendix to this chapter.) The legislation relocated the Office of Health Technology Assessment, which had previously been part of NCHSR, within the new agency.

The Office of Health Technology Assessment

The Office of Health Technology Assessment, or OHTA, was and remains responsible for performing health technology assessments in response to requests from HCFA. (OHTA also conducts assessments for the Medicaid and CHAMPUS programs, but these are a small fraction of its portfolio.) HCFA uses the assessments for Medicare coverage determinations. OHTA is located in the Public Health Service rather than in HCFA, the agency responsible for Medicare payments, to reduce any appearance of conflict of interest in technology assessment.
ORIGIN OF THE IOM STUDY

In 1989, the authorizing legislation for AHCPR focused and expanded the agency's role in effectiveness research and defined an expanded role for technology assessment as well. In particular, the legislation directed the agency "to promote the development and application of appropriate health care technology assessments—(1) by identifying needs in, and establishing priorities for, the assessment of specific health care technologies..." (Section 904). This charge and the related technology assessment responsibilities go well beyond the Medicare program and call for the agency to address issues that will benefit the general public. The legislation was designed to alter significantly the mission of OHTA, and this broadening of its role has required OHTA to devise a method for setting priorities for the use of its funds. OHTA currently has no process for deciding whether to conduct assessments or reassessments other than those initiated by HCFA and, if so, which ones to undertake. In addition to its directives regarding AHCPR, the legislation directed the Secretary of Health and Human Services to call on the IOM to recommend priorities for the assessment of specific health care technologies. In asking the IOM to conduct this study, and in keeping with the legislation, the agency proposed that the IOM effort focus specifically on developing a process for setting priorities for technology assessment and reassessment within OHTA. The agency requested a priority-setting process that would be viewed as objective, broadly based, and defensible against charges of institutional bias. It asked that the process include criteria to permit it to decide whether a technology had reached a threshold for assessment or reassessment and a method to rank-order conditions or technologies requiring assessment.
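The kind of rank-ordering mechanism the agency requested can be sketched in a few lines of code. The sketch below is purely illustrative: the criteria names ("burden," "cost," "variation"), the 1-5 rating scales, and the weights are invented for this example and are not the criteria or weights the committee adopted.

```python
# Hypothetical illustration of a criterion-weighted priority score for
# rank-ordering candidate technologies. All criteria, scales, and weights
# below are invented for this sketch.

def priority_score(ratings, weights):
    """Combine per-criterion ratings (here, 1-5 scales) into one score."""
    return sum(weights[c] * ratings[c] for c in weights)

# Three illustrative criteria: burden of illness, cost impact, and
# variation in clinical practice.
weights = {"burden": 0.40, "cost": 0.35, "variation": 0.25}

candidates = {
    "Technology A": {"burden": 5, "cost": 4, "variation": 3},
    "Technology B": {"burden": 2, "cost": 5, "variation": 5},
}

# Rank-order candidate technologies by descending priority score.
ranked = sorted(candidates,
                key=lambda t: priority_score(candidates[t], weights),
                reverse=True)
print(ranked)  # ['Technology A', 'Technology B']
```

Any real implementation would also need the threshold test the agency asked for (is a technology worth assessing at all?) before rank-ordering; this sketch shows only the ranking step.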
In developing such a process, however, the committee tried to ensure that the process could be useful to other organizations engaged in priority setting. Given the broad scope and purpose of OHTA's new legislative authority, the committee concluded that if the process were properly designed to achieve OHTA's mission, it could be readily adapted by others for their own particular needs.

Previous Pilot Study of Preliminary Model

The current study has its roots in a 1990 IOM monograph, National Priorities for the Assessment of Clinical Conditions and Medical Technologies: Report of a Pilot Study (IOM, 1990f), which presented a preliminary model of priority setting for technology assessment. The monograph was the report of work conducted by the Council on Health Care Technology
(CHCT).2 Congress authorized the establishment of the CHCT in 1984-1985 within the Institute of Medicine to promote the development of technology assessment and the coordination of the many technology assessment programs in the public and private sectors (IOM, 1988). To carry out its mandate, the council established panels on methods of technology assessment, on information dissemination, and on evaluation. In response to a request from the director of the National Center for Health Services Research, the council charged its evaluation panel with setting priorities for technology assessment. In its 1990 monograph, the panel described such a process and its outcome, which it titled a pilot study. The study focused on both clinical conditions and technologies rather than exclusively on individual technologies, the historical targets of technology assessment. It also used explicit criteria and a Delphi-like process to compile a list of national assessment priorities. (Chapter 2 describes the pilot study in greater detail.) It is referred to in this report as the IOM/CHCT pilot study to distinguish it from the pilot study work performed as part of the current project. When Congress created AHCPR, it asked the IOM to extend the council's pilot effort as a way of assisting the new agency in responding to its expanded mandate.

STUDY METHODS

The IOM began its current effort by appointing a 13-member committee in January 1991. The committee members collectively had experience representing the perspectives of practicing clinicians and those in academic medicine and other health professions; national legislative and health care executive policymaking; pharmaceutical and device manufacturing; technology assessment in academic, medical association, research, and third-party organizations; and the areas of health economics, ethics, insurance, managed care, hospitals, and public advocacy.
Between January and September 1991, the committee met three times. Using the previously published IOM/CHCT pilot study as a starting point for discussion, it reviewed the priority-setting methods of a number of organizations and the quantitative models developed by Eddy (1989) and Phelps and Parente (1990). After outlining a process for priority setting, the committee held a 2-day subcommittee meeting in July 1991 to test this process. It heard presentations, made a brief videotape describing aspects of the process it was considering, and sought reactions from individuals who were familiar with technology assessment methods and the needs of organizations that undertake technology assessment. Finally, nine individuals, representing expertise comparable to that on the IOM study committee, reviewed the report in accordance with the policies of the National Research Council.

2 The council was disestablished in the same authorizing legislation that created AHCPR.

DEFINITIONS

Terms such as technology and technology assessment are often used without a common understanding of their meaning. To avoid possible misunderstanding, the committee agreed on the following definitions for its discussions.

Medical Technology

Medical technology encompasses a wide range of items and services that support clinical practice, including "drugs, devices, medical and surgical procedures, and the organizational and supportive systems within which such care is provided" (Office of Technology Assessment [OTA], 1978). The term is often defined by example—electronic fetal monitoring, drug therapy, coronary artery bypass surgery, magnetic resonance imaging (MRI), coronary intensive care management. Whether diagnostic or therapeutic, whether intended for the benefit of one patient or many, the term medical technology is used to denote all such activity. Diverse organizations, including the IOM in previous reports (IOM, 1985), have accepted this definition, as did this committee.3

Technology Assessment

The goal of technology assessment is to provide information on patient care alternatives to patients and clinicians and information on policy alternatives to policy decision makers. It is based on an explicit analytic framework that is specified before the study begins and is comprehensive in scope; that is, it considers higher-order impacts such as direct and indirect, short- and long-term, and intended and unintended effects on populations and society. There are two main categories of technology assessment.
Primary technology assessment involves collecting data from or about patients and sometimes collecting and analyzing cost data; it results in the generation of new information through such means as randomized clinical trials and epidemiologic observational studies. Secondary technology assessment uses existing data. Its methods include literature synthesis and meta-analysis, cost-effectiveness and cost-benefit analyses, computer modeling, and ethical, legal, and social assessments.

3 HCFA defines a health care technology as a "discrete and identifiable regimen or modality used to diagnose or treat illness, prevent disease, maintain patient well-being, or facilitate the provision of health care services" (Federal Register 54:4305, 1989). This definition is compatible with the OTA definition.

The term technology assessment entered medical and health policy parlance in the 1970s, and from the beginning its intent was to consider the social impact of medical technologies (Banta et al., 1981; Perry and Pillar, 1990); OTA (1982) standardized the definition of medical technology assessment as "the field of research that examines the short- and long-term consequences of individual medical technologies." It viewed technology assessment as "a source of information needed by policymakers in formulating regulations and legislation, by industry in developing products, by health professionals in treating and serving patients, and by consumers in making personal health decisions." This formulation grew out of the ongoing efforts of OTA and the National Center for Health Care Technology to promote this field of study as a form of research that would describe and evaluate the effects of a technology on individuals and society. Key areas of attention—safety, efficacy, and cost-effectiveness—and key areas of impact—clinical, social, economic, and ethical—are retained in the term as it is used today. Programs of technology assessment define their goals and objectives in various ways, but in practice they adhere to the OTA definition.
In 1985, the Institute of Medicine defined technology assessment, consistent with the OTA definition and very broadly, as "any process of examining and reporting on medical technology used in health care, such as safety, efficacy, feasibility, and indications for use, cost and cost-effectiveness, as well as social, economic and ethical consequences, whether intended or unintended" (IOM, 1985:2). Consonant with the emergence of the field of effectiveness research, the most recent addition to the terminology of technology assessment comes from Fuchs and Garber (1990), who assert that the field has evolved from an "old" to a "new" form. The old form emphasized biomedical perspectives, that is, the safety and efficacy of an intervention. The new form has a much broader perspective that draws on multiple investigators, multiple data sets, and diverse methodologies to yield an assessment based on a range of values and interpretations of the data. As a result, Fuchs and Garber assert that the "new technology assessment is more challenging, more complex, more controversial, and potentially more useful than the old one." Current approaches to technology assessment embrace considerations of health-related quality of life, return to work, functional social and mental status, and patient preferences (McNeil et al., 1978; Fowler et al., 1988; IOM, 1989b), as well as increasingly refined evaluations of costs and benefits (OTA, 1980) and of cost-effectiveness (Leaf, 1989). Indeed, some authors now assert that the aims of technology assessment—and the term itself—have been subsumed in the more encompassing activity of effectiveness research, which goes well beyond measures of safety and efficacy to encompass the assessment of clinical practice (Fuchs and Garber, 1990; Rettig, 1991).

Reassessment

The IOM committee defined the term reassessment literally as a subsequent assessment of a health technology conducted by the same institution or organization that conducted the first assessment. Thus, evaluation of a technology by a second organization would not be considered reassessment, although the information from the first assessment would certainly be weighed as part of any new assessment effort. In their report on health care technology reassessment, Banta and Thacker (1990) note that technology assessment since the 1970s has focused too narrowly on new technologies. They urge that assessment be an iterative process over the life cycle of a technology as it is developed, disseminated, becomes obsolete, and is dropped from use. The issue of reassessment of established technologies or of new uses of older technologies has been growing in prominence. Many urge that new technologies not be adopted unless they are known to provide at least some benefit, and that obsolete uses of technologies be eliminated. Yet knowledge of the best uses of a given technology may be scanty, and the diffusion and pattern of its actual use seldom conform to an idealized conception of a linear flow in distinct stages (e.g., developing, newly emergent, diffusing, well established, and obsolete and fallen from use [Banta et al., 1981; see Gelijns, 1990, for extensive discussion]).
Indeed, the diffusion of new technologies while they are still evolving is both a characteristic of medical progress and the bane of efforts to rationalize selective use. Technologies in wide use often require ongoing modification based on clinical experience and studies to determine and refine their most appropriate application. Further, many established technologies tend to be used for wider and wider indications after their initial introduction, even though those new applications have never been formally evaluated. For example, beta-blockers (beta-adrenergic antagonists) were originally marketed for two indications. They are now approved by the Food and Drug Administration for eight conditions but are used in clinical practice for more than 20 (Gelijns and Thier, 1990). Thus, although the committee did not adopt Banta and Thacker's use of the term reassessment to include initial assessments of "technologies already in place," it certainly agrees with the need to assess established and possibly obsolete uses of technologies.
REPORT STRUCTURE

The remainder of the report is organized as follows. Chapter 2 briefly reviews several methods of priority setting and draws from them the core features of the committee's proposed priority-setting process. Chapter 3 explains the principles that guided the committee's work. Chapter 4 presents the committee's recommendations for a priority-setting process; it describes the elements of the proposed process and how the committee proposes that these elements be implemented to determine priorities for assessment and reassessment. Because the process entails activities that are beyond the present scope of OHTA, Chapter 5 examines the implications of the committee's process for priority setting within that agency. Finally, Chapter 6 summarizes the committee's rationale and recommendations, addresses possible problems, and considers how the priority-setting process developed by the committee might be modified by nonpublic entities. In Appendix A, the committee describes the pilot study it conducted to assess the priority-setting process it recommended.

SUMMARY

Clinicians, payers, and policymakers are turning to technology assessment to help provide better information for clinical decision making, to guide coverage decisions, and to set national health policy. Yet the efficient use of resources for technology assessment requires a systematic priority-setting process. In the legislation that established AHCPR, the Institute of Medicine was asked to develop a process and criteria for setting priorities for health care technology assessment and reassessment to assist OHTA in its expanded role within that agency. The establishment of AHCPR itself can be seen as recognition of the need to consider systematically the value of health care services in improving health.
This kind of consideration uses measures of effectiveness as a means of better understanding the appropriate use of new and established technologies; the expansion of the role of OHTA to develop a comprehensive process to guide this work is consistent with that goal. The process should also be of value to other organizations that, notwithstanding their differing goals, must develop priorities for the use of limited assessment resources.

APPENDIX: THE AGENCY FOR HEALTH CARE POLICY AND RESEARCH

The establishment of AHCPR is a reflection of concerns about the rising costs of health care, the effect on health care quality and costs of knowing little about the value of many health care technologies, and the consequences of using those technologies inappropriately. As stated in the authorizing legislation for the agency (Public Law 101-239, Omnibus Budget Reconciliation Act of 1989, Title IX, Part A, Section 901[b]), its purpose is to "enhance the quality, appropriateness, and effectiveness of health care services."

Center for Medical Effectiveness Research

AHCPR retains many of the functions and personnel of the National Center for Health Services Research (NCHSR) but has a greatly expanded role and much greater visibility than that agency had. For example, the Center for Medical Effectiveness Research within AHCPR has incorporated the medical effectiveness studies of the Patient Outcome Assessment Research Program of NCHSR and is now funding a set of condition-focused grants and contracts called Patient Outcomes Research Teams, or PORTs. These multidisciplinary teams use methods for making inferences from experimental and nonexperimental data to assess all reasonable alternative practices for a specified clinical condition. Thus, one PORT is investigating the care of patients after acute myocardial infarction; other PORTs are studying outpatient care of the diabetic patient and alternatives in the treatment of biliary disease; another team is examining pre-, inter-, and postoperative alternatives in the care of patients with cataracts. In addition to the multiyear, multi-institutional PORT research, AHCPR also funds other, smaller extramural projects as part of its continuing mission of funding health services research.

Office of the Forum for Quality and Effectiveness in Health Care

AHCPR's Office of the Forum for Quality and Effectiveness in Health Care is assigned responsibility for arranging for the development of clinical practice guidelines. Forum guidelines use clinical conditions as a starting point and often incorporate the products of technology assessment.
Currently, topics for guideline development are chosen based on a number of criteria such as prevalence, potential benefits and risks, large variations in practice, costliness, and availability of data. Guidelines presently being developed include care of patients with cataracts in otherwise healthy eyes, care of depressed patients, treatment of benign prostatic hypertrophy, and pain management for patients with cancer.

Office of Science and Data Development

The Office of Science and Data Development is responsible for increasing the quality and quantity of data available for health services research (Department of Health and Human Services [DHHS], 1990). It supports extramural research, demonstrations, and conferences, and is currently investigating the possibility of linking research-related data from different sources. It also formulates science policy for AHCPR and conducts intramural research.

Center for General Health Services Extramural Research and the Division of Technology and Quality Assessment

Two other components of AHCPR deserve mention. The Center for General Health Services Extramural Research promotes research in three areas: cost and financing, primary care, and technology and quality assessment. The Division of Technology and Quality Assessment supports research that includes development and evaluation of methods for conducting health care technology assessments and identification of factors that influence the development, diffusion, and adoption of health care technologies (DHHS, 1990).

Office of Health Technology Assessment

OHTA assesses the effectiveness of medical technologies that are being considered for coverage under Medicare. When a coverage decision cannot be resolved at the regional level or within HCFA, HCFA may refer the question of effectiveness to OHTA. OHTA's plans for conducting an assessment are published in the Federal Register. Historically, the responsibilities of OHTA have entailed what Blumenthal (1983) has called "knowledge processing" rather than "knowledge development"; that is, OHTA does not perform or contract for primary research. Rather, it collects, synthesizes, validates, and disseminates existing knowledge concerning health care technologies. Originally, it was part of the National Center for Health Care Technology (NCHCT).
The center itself, however, lacked constituency support. It encountered such strong opposition to procedure- and device-oriented technology assessment from professional groups (e.g., the American Medical Association) and manufacturer groups (e.g., the Health Industry Manufacturers Association) that its budget authorization was withheld; all functions of the center except OHTA ceased after fiscal year 1981, only 3 years after the center was created. Subsequently, OHTA became a program within the National Center for Health Services Research. In Blumenthal's view, OHTA survived the demise of the NCHCT because of its demonstrated ability to save money for Medicare ($100-$200 million per year; Perry and Pillar, 1990). Thus, strong if recent historical reasons locate OHTA in AHCPR, with its customary responsibility of responding to requests for assessment from HCFA.
OHTA Technology Assessments

The procedure used by OHTA in its assessments is explained briefly below. Because the agency relies on secondary synthesis of published literature for its technology assessments, a precondition for beginning work on any topic is that sufficient data can be retrieved to perform an assessment.

Collection of Information. Once OHTA has accepted a request for an assessment and has formulated the assessment question so that it is scientifically and medically answerable, it publishes a notice in the Federal Register soliciting comments within 90 days. The agency reviews these comments (which often total a hundred or more) and also solicits opinions from professional organizations and societies, manufacturers, manufacturers' trade associations, consumer organizations, and practitioners and institutions who perform the procedure or use the device. It sends formal letters of inquiry to other PHS agencies, particularly the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Alcohol, Drug Abuse, and Mental Health Administration. OHTA expects the proponents of a new technology to submit data that demonstrate safety and effectiveness. For technologies such as surgical procedures, the proponents must come forward with convincing scientific studies, not simply expert opinion or anecdote.4

Analysis of Data. OHTA uses a graded, hierarchical system for examining evidence that is based on study design. The system is comparable to the five grades used by the Canadian Task Force on the Periodic Health Examination (Woolf et al., 1990). Because data from prospective randomized controlled trials are usually not available, OHTA synthesizes the results of other studies, including "quasi-epidemiologic" data or case studies.
Recently, OHTA has put greater emphasis on evaluating the quality of studies and on determining whether the technology results in improved health outcomes for patients. For instance, in assessing carotid endarterectomy, the question examined was not whether lesions could be removed from the carotid artery but how the outcomes for patients with removal of lesions compared with outcomes for patients who did not have the procedure.

Assessment and Recommendations. Assessments are subject to peer review within OHTA and are then forwarded to the FDA, NIH, and other appropriate federal agencies. Assessments generally take from 12 to 14 months. OHTA sends HCFA a memorandum that states whether coverage is or is not recommended. Although these memoranda are not subject to the Freedom of Information Act and thus are not available to the public, the literature synthesis and analysis are published and widely disseminated in the series AHCPR Health Technology Assessment Reports. At the time of this writing, OHTA had published 10 reports (9 assessments and 1 reassessment) in its 1990 report series. Report topics comprised four procedures (e.g., no. 1, on liver transplantation), two diagnostic technologies (e.g., no. 3, on electroencephalographic [EEG] video monitoring), three treatments (e.g., no. 8, on salivary electrostimulation in Sjögren's syndrome), a revision based on new clinical trial findings (no. 5R, on carotid endarterectomy), and a reassessment (no. 9, on reassessment of external insulin infusion pumps).5

4 For a technology that is currently covered, however, the burden of proof of ineffectiveness would lie with HCFA and OHTA, which makes removal of coverage much more difficult.

5 One report addressed both diagnosis and treatment, hence the apparent discrepancy in the totals.