The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Introduction

In an era in which the pressure for containing health care costs is steadily mounting, the case for improving our methods of technology assessment is compelling. The rate of increase in health care expenditures during the past five years has been higher than at any time in our history (Califano 1987). A major element in cost is hospital care (Waldman 1972; Gibson and Mueller 1977). New and improved technologic services, procedures, and techniques (for example, drugs, diagnostic tests, cardiac surgery) are responsible for a large fraction of the rise in hospital costs (Waldman 1972). Innovative approaches have at times diffused rapidly, without carefully defined prospective studies to evaluate their precise role in diagnosis or therapy (Abrams and McNeil 1978b,c; Cooper et al. 1985). The dangers of overutilization have been appropriately emphasized (Abrams 1979), but the biologic and monetary costs of underutilizing new and successful methods have not been adequately clarified (Doubilet and Abrams 1984). Proper utilization is impossible without timely, sophisticated, and precise evaluation of new methods (Abrams and McNeil 1978a,b,c).

While safety, cost, and patterns of diffusion are matters of profound concern, the most important and difficult aspect of technology assessment is the determination of efficacy (Abrams and McNeil 1978a; McNeil 1979; Adelstein 1982; Eddy 1982; Greer 1981; Yeaton and Wortman 1984; Petitti 1986). This requires consideration of a number of critical problems: timing, bias, primary data collection, new methods of secondary data analysis, the nature of the "laboratory" in which technology assessment is best accomplished, the "exploitative" character of assessment research, ethical issues, and diffusion.

In 1975, Congress asked the Office of Technology Assessment (OTA) to look into the problem of medical technology assessment. The OTA report (OTA 1976) led to the establishment of the National Center for Health Care Technology in 1978 (Perry and Eliastam 1981). During its three-year history, the center acted as an important catalyst for more scientific evaluations of new and existing technologies and as a vehicle for disseminating information about them. The center ceased to exist in December 1981, after the Administration refused to request funds for its continuation out of deference to the two main sources of opposition, the American Medical Association and the Health Industry Manufacturers Association. The AMA opposed it because "the relevant clinical policy analysis and judgments are better made . . . within the medical profession." The Health Industry Manufacturers Association feared that the center "might constrain industry's freedom in the market place" (Perry 1982). The American College of Physicians, insurance carriers, the Association of American Medical Colleges, the Association for the Advancement of Medical Instrumentation, and many other groups supported its continuance.

But the problem did not disappear. In a changing economic climate, all of the constituencies concerned with health care became more fully aware of the need for objective data on medical technologies. By way of response, in 1983 the Institute of Medicine of the National Academy of Sciences issued a planning study report on assessing medical technology (Institute of Medicine 1983).
From the report, there ultimately emerged a congressional bill, Public Law 98-551, which required that the already established National Center for Health Services Research add to its name and mission "Health Care Technology Assessment" and that a new Council on Health Care Technology be organized as an oversight group. This time, the Health Industry Manufacturers Association reversed itself and endorsed the technology assessment provisions of the law, realizing that, in an era of restricted resources, the ever more cautious hospitals and buyers would require better data on efficacy, safety, and cost (Perry 1986). The legislation was approved by the Congress, and both the center and the council were funded in 1986.

The council is nongovernmental, an arm of the Institute of Medicine of the National Academy of Sciences. It represents a number of different constituencies: the physician providers; the organized consumers; those who pay, including insurance companies and corporation health groups; the hospital community; the manufacturers and developers; the health maintenance organizations (HMOs); the nursing profession; and groups concerned with health policy, management, and economics.

Congress charged the council to serve as a clearinghouse for information on health care technologies and health care technology assessment, to collect and analyze data, to identify needs in the assessment of specific technologies, to develop criteria and methods for assessment, and to stimulate assessments of technologies that were potentially important to the health care of the nation. At the outset, the council formed a Methods Panel, to look at techniques and to identify the methodologic problems in assessing new technologies, and an Information Panel, which is responsible for developing a usable and accessible clearinghouse of assessment data. Subsequently, an Evaluation Panel was organized to identify the technologies most urgently requiring exploration (Abrams and Hessel 1987).

As one of the 16 members of the council, and as co-chairman of the Methods Panel, I have been impressed by the breadth and quality of the work that has been produced by the council members, the panel participants, and the staff alike. The Methods Panel has dealt, in particular, with a number of secondary approaches to technology assessment, including meta-analysis and developments and improvements in consensus methods.

During my career in diagnostic radiology, both at Stanford and at Harvard universities, I have observed and participated in the introduction of a wide array of innovations, including visceral angiography, angiocardiography, coronary arteriography, lymphangiography, ultrasound, radionuclide methods, computed tomography, digital radiography, and magnetic resonance imaging, among others. Many of these are based on complex technologies.
In each case, before they could be properly utilized, their appropriate role had to be defined, and their efficacy and safety had to be addressed. Furthermore, my background and experience have exposed me, through direct participation, to the array of impediments to first-rate primary data acquisition (Abrams and McNeil 1978a,b,c; Abrams 1979; Hillman et al. 1979; McNeil et al. 1981; Abrams et al. 1982; Adelstein 1982; Hessel et al. 1982; Alderson et al. 1983; Doubilet et al. 1985; Doubilet and Abrams 1984; Abrams and Hessel 1987). As a consequence, I have emphasized at meetings of the council and of the panel that secondary data analysis based on inadequate primary information will necessarily be flawed. In the course of our exchanges, a request emerged that a more systematic depiction of the barriers to scientific primary assessments of diagnostic technologies be developed.

This brief tract is a direct response to that request. Initially, I conceived of it simply as a summary of the mechanical problems inherent in completing well-designed, prospective efficacy studies. In a series of discussions with my colleague, Dr. Harold Sox, we became persuaded that a somewhat broader approach would be of value.

The resulting monograph contains seven chapters. Chapter 1 presents the case for evaluating diagnostic tests and procedures. We adopt three perspectives: those of the patient, the physician, and the public. Chapter 2 introduces a central theme: the evaluation of a diagnostic test should provide the information required to decide whether an individual patient needs the test. The chapter is a primer on the concepts of decision analysis. Chapter 3 presents a critique of the present state of assessment of diagnostic technologies. The focus is on the design of studies that avoid the problem of systematic bias in selecting patients to participate. Chapter 4 explores the practical problems encountered in performing studies of diagnostic technology. These problems are at least as formidable as the issues of study design dealt with in Chapter 3. Chapter 5 identifies the costs associated with diagnostic technology assessment. The evaluation of diagnostic tests is expensive, although not as expensive as the failure to evaluate. Chapter 6 describes a national multicenter program for technology evaluation, designed to avoid some of the problems discussed in preceding chapters. Chapter 7 incorporates a review of the principles and problems of multi-institutional studies.
There is, of course, what some might consider an important gap: we have deliberately chosen to omit a consideration of the ethical issues in technology assessment. We are fully aware of the questions that have been raised about controlled clinical trials and of the need to buttress the assessment of health technologies in humans with the underlying principles germane to any clinical research: respect for persons, beneficence, and justice. These issues are sufficiently wide-ranging, however, to warrant a full exposition of the arguments on both sides, and they could not possibly be dealt with within the limited agenda that we had set for ourselves.

Dr. Sox has played the most important role in fulfilling our objectives and has contributed much of the creative thinking in a number of chapters. Susan Stern has done an outstanding job in gathering and organizing much of the material on rationale, barriers to implementation, and multi-institutional studies. Dr. Douglas Owens has developed the material on cost and funding. All of us have reviewed, criticized, and modified each section in a joint effort to render them clear, pertinent, and useful. The support of the Methods Panel in reviewing each section has been invaluable, and the final editing by Mrs. Jeffery Stnia has helped eliminate both duplication and lack of clarity where they existed.

Meant more as a primer than as an exhaustive treatment of the subject, the product, we hope, will indicate both problems and their potential solutions, while at the same time highlighting the inherent complexity of the area and the need for adequate resource allocation.

The contents of this brief monograph reflect the experience, observations, and opinions of the authors and should not be construed as a statement by the Council on Health Care Technology as a whole, or by its Methods Panel. Although we have been helped by many comments and suggestions from council members, there has been no formal review or endorsement by the council, nor would we have considered that appropriate.

A responsive and well-developed system of technology assessment can provide a strong impetus to rapid application of essential technologies and prevent the wide diffusion of marginally useful methods. In both of these ways, it can increase quality and decrease the cost of health care. The requisite investment of funds, intellect, creativity, and supporting personnel is a small price to pay for the potential good that may be achieved.

Herbert L. Abrams

REFERENCES

Abrams, H.L. The "overutilization" of x-rays. New England Journal of Medicine 300:1213-1216, 1979.

Abrams, H.L., and Hessel, S.J. Health technology assessment: Problems and challenges. American Journal of Roentgenology 149:1127-1132, 1987.

Abrams, H.L., and McNeil, B.J.
Computed tomography: Cost and efficacy implications. American Journal of Roentgenology 131:81-87, 1978a.

Abrams, H.L., and McNeil, B.J. Medical implications of computed tomography (CAT scanning). New England Journal of Medicine 298:253-261, 1978b.

Abrams, H.L., and McNeil, B.J. Medical implications of computed tomography (CAT scanning). II. New England Journal of Medicine 298:310-318, 1978c.

Abrams, H.L., Siegelman, S.S., Adams, D.F., et al. Computed tomography versus ultrasound of the adrenal gland: A prospective study. Radiology 143:121-128, 1982.

Adelstein, S.J. Pitfalls and biases in evaluating diagnostic technologies. In McNeil, B.J., and Cravalho, E.G., eds., Critical Issues in Medical Technology, pp. 67-79. Boston, Auburn House Publishing Company, 1982.

Alderson, P.O., Adams, D.F., McNeil, B.J., et al. Computed tomography, ultrasound, and scintigraphy of the liver in patients with colon or breast carcinoma: A prospective comparison. Radiology 149:225-230, 1983.

Califano, J. Quoted in David, M. U.S. health care faulted in senate. New York Times, January 13, 1987, p. 10.

Cooper, L.S., Chalmers, T.C., and McCally, M. Magnetic resonance imaging (MRI) (NMR): Poor quality of published evaluations of diagnostic precision. Abstract. Clinical Research 33:597A, 1985.

Doubilet, P.M., and Abrams, H.L. The cost of underutilization: Percutaneous transluminal angioplasty. New England Journal of Medicine 310:95-100, 1984.

Doubilet, P., McNeil, B.J., Van Houten, F.X., et al. Excretory urography in current practice: Evidence against overutilization. Radiology 154:607-611, 1985.

Eddy, D.M. Pitfalls and biases in evaluating screening technologies. In McNeil, B.J., and Cravalho, E.G., eds., Critical Issues in Medical Technology, pp. 53-65. Boston, Auburn House Publishing Company, 1982.

Gibson, R.M., and Mueller, M.S. National health expenditure, fiscal year 1976. Social Security Bulletin 40:3-22, 1977.

Greer, A.L. Medical technology: Assessment, adoption, and utilization. Journal of Medical Systems 5:129-145, 1981.

Hessel, S.J., Siegelman, S.S., McNeil, B.J., et al. A prospective evaluation of computed tomography and ultrasound of the pancreas. Radiology 143:129-133, 1982.
Hillman, B., Abrams, H.L., Hessel, S.J., et al. Simplifying radiological examinations: The urogram as a model. Lancet 1:1069-1071, May 19, 1979.

Institute of Medicine. A Consortium for Assessing Medical Technology. Washington, D.C., National Academy Press, 1983.

McNeil, B.J. Pitfalls in and requirements for evaluations in diagnostic technologies. In Wagner, J., ed., Proceedings of the Conference on Medical Technologies. DHEW Publication (PHS) 79-3254, 1979:33-39.

McNeil, B.J., Sanders, R., Alderson, P.O., et al. A prospective study of computed tomography, ultrasound, and gallium imaging in patients with fever. Radiology 139:647-653, 1981.

Office of Technology Assessment. Development of Medical Technology: Opportunities for Assessment. Washington, D.C., U.S. Government Printing Office (OTA-H-34), 1976.

Perry, S. The brief life of the National Center for Health Care Technology. New England Journal of Medicine 307:1095-1100, 1982.

Perry, S. Technology assessment: Continuing uncertainty. New England Journal of Medicine 314:240-243, 1986.

Perry, S., and Eliastam, M. The National Center for Health Care Technology. Journal of the American Medical Association 245:2510-2511, 1981.

Petitti, D.B. Competing technologies: Implications for the costs and complexity of medical care. New England Journal of Medicine 315:1480-1483, 1986.

Waldman, S. The effect of changing technology on hospital costs. Research and statistics note. DHEW Publication 72-11701, 1972:1-6.

Yeaton, W.H., and Wortman, P.M. Evaluation issues in medical research synthesis. In Yeaton, W.H., and Wortman, P.M., eds., New Directions for Program Evaluation, vol. 24, Issues in Data Synthesis, pp. 43-56. San Francisco, Jossey-Bass, December 1984.