THE EVALUATION OF EQUIPMENT-EMBODIED TECHNOLOGY

Providers of health care adopt and use new equipment-embodied technology only if they judge it useful in achieving their goals. Chapter 5 explored alternative approaches to ensuring that the goals of decision makers are consistent with social objectives. This chapter addresses the issue of whether the evaluative information available to support adoption and use decisions is adequate, and, if it is inadequate, what measures should be taken to improve the process by which such evaluative information is generated and disseminated to decision makers.

WHAT SHOULD POTENTIAL USERS OF EQUIPMENT-EMBODIED TECHNOLOGY KNOW ABOUT TECHNOLOGY?

The value of a procedure, product, or system can be measured by different criteria. Relevant evaluative criteria vary depending on characteristics of the user and of the technology, particularly the stage that the technology has reached in the process of technological change. Five general evaluative criteria, each more complex than its predecessor and subsuming the previous criteria within its purview, are possible.

Technical Validity

Technical validity refers to the extent to which a product, procedure, or system does what it purports to do and does it safely. If a fetal monitor is to measure fetal heartbeat, then it must do so with reasonable accuracy and precision and with a reasonable degree of safety to mother, fetus, and operator. It
might also be expected to behave reliably over some lifetime whose length would be an important indicator of the technical capability of the equipment.

The judgment of technical validity requires knowledge of the dimensions of performance and safety that are important to the use of the technology. Professional societies often develop standards for equipment using criteria against which the performance of a particular manufacturer's equipment can be assessed. The particular dimensions of performance that are selected often have a major impact on the design and long-run usefulness of the equipment. If the standards neglect important dimensions of performance, the equipment of different makers may vary widely along those dimensions. Or, if the standards are set unreasonably high (for example, demanding a level of precision in measurement that is not needed in clinical decision making), then the cost of the technology is made unnecessarily high.

Protection of individuals, even in experiments, requires the demonstration of reasonable safety prior to clinical use. Aspects of technical validity affecting safety must therefore be ascertained quite early in the development process. However, good performance standards cannot be developed until a technology has been in actual use long enough to determine which dimensions of performance are truly critical.

Effectiveness or Efficacy

Effectiveness refers to the extent to which a product, procedure, or system makes a difference for the objectives of medical care: improving the health status of the community. These improvements are often expressed as changes in patient outcomes, measured by indicators such as mortality, morbidity, or patient satisfaction.
Because of the difficulty in measuring such changes, effectiveness is usually measured by intermediate results such as changes in therapy or improvements in diagnostic accuracy.30 The effectiveness of a technology may vary widely with the organizational setting in which it is applied or with the level of training or competence of its operators. Consequently, the effectiveness of a technology is often differentiated from its "efficacy," a term sometimes used to denote effectiveness when measured under optimal clinical conditions.

Cost-Effectiveness

Cost-effectiveness refers to the extent to which a procedure, product, or system achieves a specified objective at a cost below
other methods of achieving the same objective. Alternatively, the most cost-effective option may be the one that achieves the highest level of effectiveness, as measured by selected indicators, for a given level of program or system expenditure. A cost-effective technology is one that is superior to all other alternatives for the specific conditions evaluated. For example, when a diagnostic technology is found to be cost-effective, that finding must usually be qualified by the specific set of presenting symptoms and the testing sequence employed in the study.

Net Social Benefit

When the introduction of a new technology produces both increased expenditures for health care and improvements in patient outcomes, then the difference between the value of the improved outcomes and the additional costs is the net social benefit. By reducing all measures to a commensurate scale, usually dollars, the net social value (a benefit if positive, a cost if negative) is calculated. Unfortunately, calculating net social benefit is fraught with methodological and ethical difficulties,97 including the inability to measure the dollar value of life, of changes in pain or worry, and of the relative value of benefits accruing in different future time periods.68 Although significant amounts of research have been devoted to these and other methodological questions, the state of the art in measuring benefits remains limited. When measurement problems cannot be overcome, the physical benefits accruing from a technology (improved patient outcomes) can be arrayed against the additional program or health care expenditures necessary to achieve them. Whether those benefits are worth the additional costs then reduces to a political decision.

Societal Impact

The introduction of new technology may affect the social and economic structure of communities or nations in addition to generating patient benefits.
The environment, institutions, social structure, culture, values, and the law may all be affected.5 For example, automated medical record keeping could threaten basic privacy rights in the absence of safeguards.140 The evaluation of a new or developing technology might therefore include a prospective look at potential societal impacts in addition to the narrower set of patient benefits included in the previous criterion. In essence, this criterion is an extension of the net social benefit criterion, in which nothing is assumed constant and all effects are assumed to interact.
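The arithmetic behind the cost-effectiveness and net social benefit criteria above can be made concrete with a small sketch. The figures, option names, and the $300 value per correct diagnosis below are purely hypothetical illustrations, not data from this report:

```python
# Illustrative sketch (all figures hypothetical): comparing two technologies
# that pursue the same diagnostic objective.

def net_social_benefit(monetized_benefits: float, added_costs: float) -> float:
    """Net social value: positive means net benefit, negative means net cost."""
    return monetized_benefits - added_costs

def cost_effectiveness_ratio(added_costs: float, effect_units: float) -> float:
    """Cost per unit of effectiveness (here, per correct diagnosis)."""
    return added_costs / effect_units

# Two hypothetical alternatives achieving the same objective.
option_a = {"cost": 120_000.0, "correct_diagnoses": 400}
option_b = {"cost": 150_000.0, "correct_diagnoses": 600}

for name, opt in [("A", option_a), ("B", option_b)]:
    ratio = cost_effectiveness_ratio(opt["cost"], opt["correct_diagnoses"])
    print(f"Option {name}: ${ratio:.0f} per correct diagnosis")

# If each correct diagnosis could defensibly be valued at, say, $300, the
# net social benefit of option B relative to doing nothing would be:
print(net_social_benefit(600 * 300.0, 150_000.0))  # prints 30000.0
```

The comparison illustrates the chapter's point about commensurate scales: the cost-effectiveness ratio never requires a dollar value on health outcomes, whereas net social benefit does, which is exactly where the methodological and ethical difficulties enter.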
Recent debates over the implications of genetic research highlight concern about societal impacts. This debate is occurring at an early stage in the process of technological change with respect to genetics. The major concern at present is with the safety of that research. The potential implications of technology that might emerge from such research have been studied in only a few instances. (See, for example, a recent National Research Council report.86)

Information bearing on these five evaluative criteria always exists, although its quality and the evidence on which it is based vary widely. At one extreme lies pure opinion, based on casual observation of the technology or, indeed, on no evidence at all. At the other extreme are the results of formal studies in which the technology has been scientifically assessed against one or more of the evaluative criteria. Decisions based upon opinion gleaned from informal observation are not always inferior to those based on formal studies; however, it is reasonable to assume that the more valid the information, the more possible good decisions become.

Ideally, then, one would expect new technology to be evaluated at all of the levels described above using strict methodologic standards. Only those procedures, products, or systems that are truly worth their cost would be developed and diffused, and diffusion would be limited to those uses for which the technology has been found valid. In reality, of course, this rarely occurs, because of barriers to the generation and use of evaluative information. These barriers include those inherent in the evaluative task and those arising externally, particularly from the economic environment. These two kinds of barriers to evaluation, natural and economic, are discussed below.
Natural Barriers to the Generation and Use of Evaluative Information

Natural barriers include problems in conducting evaluative studies arising from technical, ethical, cost, and time constraints that limit the quality of the information achievable. These natural barriers argue for a trade-off between the quality of information produced and the costs of obtaining it. Two examples will illustrate how they may lead to modification of study approaches: the use of randomized clinical trials to measure efficacy and the use of technology assessment to measure societal impact.
Randomized Clinical Trials

It has been claimed that randomized clinical trials represent the only truly valid technique for measuring the effectiveness of any clinical intervention. A randomized clinical trial is an experiment whose design assures that the true effectiveness of the technology can be isolated from other factors that might affect measured outcomes. Patients are assigned by chance, according to a fixed probability distribution, to alternative kinds of treatment, thereby minimizing the chance of bias in the selection of patients for one treatment mode or another. The design of the experiment is usually further refined to control for other possibly confounding effects.

Although such experiments, properly conducted, produce the highest achievable quality of information on efficacy, there are some fundamental problems in their implementation.

First, and perhaps most important, randomized clinical trials are costly. The National Institutes of Health estimates that in FY 1975 it supported 465 randomized clinical trials at a total expenditure of approximately $72.8 million.64 These trials differ widely in scope, duration, and cost. For example, a 10-year study at the Heart Institute on the interactive effects of risk factors on the incidence of heart disease is funded at $12.4 million per year. Another, a 4-year study at the Institute of Allergy and Infectious Diseases on the treatment of lethal bacteria, has been funded at a yearly rate of $69,000. Other parameters influencing the cost of randomized clinical trials are the number of subjects involved and their degree of hospital insurance coverage.

Second, there are often significant ethical problems in conducting a trial. When a medical technology is new, its novelty and potential safety hazards often require that patients be selected on a nonrandom basis from special populations.
But when early evidence shows promise for the technology, the physician faces the moral imperative not to deny his patients a preferred treatment, regardless of whether its superiority has been demonstrated definitively.72 McDermott72 has observed that "for a physician to submit his patients to random decisions regarding their therapy, he must be genuinely undecided on the value of the therapy."

Third, it is often necessary to conduct trials over long periods of time to obtain enough subjects for adequate statistical accuracy or, in some cases, to measure long-term consequences of a technology. This delay, in the face of accumulating informal evidence about the value of the technology, often undermines the continuance of the trial.
Fourth, the proper design of a trial requires enough knowledge of disease processes to identify important differences in stages and subsets of the disease under study. If patients are aggregated in the study, the effectiveness of a technology for a particular subset of patients may be obscured.72

Fifth, there is a severe technological obsolescence problem in the conduct of efficacy studies. If the technology is changing rapidly, or if user competence improves dramatically with experience over long periods, early results may lose their applicability before they are published.

For these reasons, clinical investigators of efficacy often resort to cheaper, faster, more feasible methods for assessing efficacy. These compromises are not necessarily detrimental. Judgment is needed to weigh the loss of information content against the gains in technical and economic feasibility.

Technology Assessment

Although formal methods to evaluate the societal impacts of new or emerging technology have not been fully developed or validated, the "technology assessment" (TA) method has been promulgated as a logical approach to the identification of such impacts. The method of technology assessment, whose purpose is to "assess holistically the potential short-term impacts and longer-term consequences of emerging technologies on society,"5 sets forth a step-by-step process of identification and analysis of impacts. The method is formal, usually employing estimates by experts of the expected consequences of a development. However, because TA focuses on long-run, structural impacts, it is difficult to validate the technique. It is debatable whether anyone is able to foresee major societal shifts resulting from a new technology early enough in its development to influence the outcome.
A study completed for the National Commission for the Protection of Human Subjects demonstrates that even experts have difficulty predicting what the major technological developments will be within a reasonably short (20-year) period.130 Thus, in the case of the TA method, as in the case of clinical trials, the costs of the method must be weighed against the quality of the information to be obtained. For those technologies with major cumulative effects on society, it is prudent to conduct periodic technology assessments, but these technologies need to be selected cautiously and the results considered in light of the limitations of the methodology at this time.
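The random assignment at the heart of the clinical trials discussed earlier, allocation of each patient by chance according to a fixed probability distribution, can be sketched in a few lines. The arm names, allocation probability, and seed below are illustrative assumptions, not details from any trial cited in this chapter:

```python
# Minimal sketch of randomized assignment: each patient is allocated by
# chance with a fixed probability, so patient characteristics cannot
# systematically bias membership in either treatment arm.
import random

def assign(patients, p_treatment=0.5, seed=42):
    """Assign each patient to 'treatment' or 'control' with fixed probability."""
    rng = random.Random(seed)  # seeded only to make the sketch reproducible
    arms = {"treatment": [], "control": []}
    for patient in patients:
        arm = "treatment" if rng.random() < p_treatment else "control"
        arms[arm].append(patient)
    return arms

arms = assign(range(1000))
print(len(arms["treatment"]), len(arms["control"]))  # roughly 500 and 500
```

Stratifying such an assignment by disease stage and subset is precisely the design refinement the chapter's fourth problem calls for: without it, aggregation can obscure a technology's effectiveness for particular patient subgroups.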
Economic Barriers to the Generation and Use of Evaluative Information

The primary economic barrier to the generation of evaluative information at any criterion level is the lack of a market for such information. When, for example, decisions to adopt and use new technology are in the hands of individuals or institutions whose objectives differ from those of society, one would expect them to ignore evaluations that are irrelevant to those objectives. As the previous chapter contends, hospitals bear little or no risk for poor adoption decisions. Even though they may be motivated to adopt the most effective technology, they have an inadequate stake in ascertaining the effectiveness of such technology prior to the adoption decision. They are also clearly unmotivated to determine the cost-effectiveness of new technology. Thus, evaluative studies find no ready market for their findings. Were the financial incentives facing hospitals altered, or were regulatory processes over adoption instituted, a market for such information might be created.

A second major economic barrier to the development of high-quality evaluative information is the existence of economies of scale in the production of information. A single patient, physician, hospital, or even third-party payer may lack the economic resources to conduct independent studies of the technical validity, effectiveness, cost-effectiveness, and so on, of all new procedures, equipment, and systems. Collective efforts to produce such information are warranted. Yet collective evaluation groups, such as the independent product-testing laboratories that have developed in other industries, have not developed to a large extent in medical instrumentation, probably because of the lack of a market described above.
The participants in a symposium on procurement practices in health care, sponsored by the Experimental Technology Incentives Program (ETIP) of the National Bureau of Standards in 1975, recognized the waste inherent in uncoordinated information generation when they reported that:

... a number of government agencies, including state and local, are testing and evaluating medical devices in varying degrees and at various stages in their life cycle. When added to testing and evaluation by manufacturers and associations, there is a tremendous amount of useful information being developed regarding the relative merits of medical devices, much if not most of which goes no farther than the boundaries of the organization in which the effort takes place.
The advent of the medical devices law substantially alters the situation, at least with respect to the technical validity criterion. Manufacturers will now be responsible for proving that their devices either meet established performance standards (Category II devices) or are "safe and effective" (Category III devices). However, the medical devices program will not require studies of the effectiveness of new technology in improving patient outcomes, nor will it consider cost-effectiveness or net social benefit. Furthermore, the data developed under the program are proprietary.

A third economic barrier to the development of information is related to the second: the conduct of evaluative studies is subject to external effects. That is, studies may benefit those who do not pay for them, and there may be no way for one organization to appropriate payment for the information it provides to another. Here the solution is collective sponsorship of such studies and open publication of the results to all parties represented by the collective body.

Conclusion

The natural obstacles to the production of high-quality evaluative information argue for the exercise of organized judgment in selecting the technologies to be evaluated, the evaluative criteria to be employed, the methodologies to be used, and the stages in the process of technological change at which to perform such studies. This judgment must reflect the trade-offs between the cost of obtaining information and the quality and usefulness of the information for decisions. The economic barriers to conducting studies argue for collective funding and coordination of information generation and dissemination.

TO WHAT EXTENT ARE EXISTING PROCEDURES FOR GENERATING AND USING INFORMATION ON EQUIPMENT-EMBODIED TECHNOLOGY INADEQUATE?

How is information on new equipment-embodied technology generated and transmitted to users at present? Each evaluative criterion faces a different environment.
At present, there is no systematic approach to the initiation and conduct of studies to evaluate new equipment-embodied technology except with respect to its technical validity, and that single systematic approach resides in the regulation of medical devices. As noted above, the law requires
manufacturers to collect data documenting their products' safe and effective performance according to their claims.

This does not imply that studies of the effectiveness, cost-effectiveness, or benefits and costs of new medical technologies do not occur. In fact, the efficacy and effectiveness of new medical technology are studied and reported extensively in the clinical research literature. However, the quality of the evaluative information presented in that literature has been questioned by a number of observers. In a recent study of innovations in surgery, Barnes10 found that "the most critical and central defect in [the] cited studies of innovative surgical therapy is the lack of control experience." There is some evidence that clinical investigators in the United States do not make adequate use of scientific opportunities to conduct controlled clinical studies. In a review of the international literature on gastroenterologic therapy, Juhl et al.65 found that less than 1 percent of studies reporting on nondrug therapies followed a preestablished controlled research design, and that the United States lagged behind Britain in the absolute number of such studies performed.

In the absence of information from valid research designs, knowledge of the effectiveness of medical procedures and technologies builds up through informal information channels during the diffusion process. The process of collecting and digesting information on the effectiveness of medical procedures has been characterized as a large, poorly designed clinical trial. That is, procedures and technologies are incorporated into medical practice, experience with the technique is obtained, informal analyses of the experience are conducted, and informal channels of communication are used to disseminate the results.
Fineberg's study of gastric freezing (Appendix D) demonstrates how a new technique was used in nonexperimental, direct patient care to generate information on its effectiveness, risks, and side effects. The medical devices law was not in effect at the time that gastric freezing was developed. If it had been, it is possible that the technique would not have been permitted to diffuse quickly, because of its implications for patient safety. However, if the technique had not presented obvious risks to patients, its effectiveness in improving patient outcomes would not have had to be proven prior to diffusion under the medical devices law. It is interesting that while the protection of human subjects in medical experimentation evokes great concern,* this

*Witness the establishment of a National Commission for the Protection of Human Subjects and the promulgation of regulations governing the use of humans in experiments.
nonexperimental approach to the collection of information can be most harmful to the human subjects who are participating in an experiment under the guise of direct patient care. The cost of this method of collecting effectiveness information is part of what is normally referred to as the cost of "unnecessary" utilization. Third-party payers and consumers bear the costs of these inefficient experiments by paying for new procedures as part of patient care.

Until recently, little attention has been given to measuring the cost-effectiveness or benefits and costs of new or existing clinical and ancillary equipment-embodied technology.* Why have these studies not been forthcoming? Part of the answer lies in the methodological problems of studies of this kind. These include the difficulty of identifying valid measures of patient outcome, of determining the costs unique to the application of a technology, and, in the case of benefit-cost analysis, of placing dollar values on benefits. Such studies have been further hampered by the lack of valid data from clinical studies. The cost-effectiveness of a diagnostic test, for example, cannot be determined without information on its sensitivity and specificity in particular populations and its impact on the speed of diagnosis and on changes in therapy. When this kind of information is not available from clinical studies, analysis of cost-effectiveness is impossible.

However, the fundamental obstacle to the conduct of cost-effectiveness and benefit-cost analysis has been the lack of a market, in either the private or the public sector, for the results of such studies. The irrelevance of these results to hospitals has been noted above. Moreover, even regulatory programs expressly intended to control the adoption or use of clinical technology have been singularly uninterested in economic evaluations. The National Health Planning Act of 1974 (P.L.
93-641), which mandates the universal establishment of state certificate-of-need laws, requires agencies reviewing proposals for the adoption of expensive equipment to consider "the need . . . for such services [and] . . . the availability of alternative, less costly, or more effective methods of providing such services." However,

*Several studies have been directed at coordinative technologies such as mobile coronary care,2 automated hospital information systems,11 and telemedicine.81,91 These have generally been funded as part of demonstration projects sponsored by the federal government. The National Center for Health Services Research has played a major role in seeing that these studies were undertaken, but funding cuts in recent years have reduced both the demonstration and the evaluation activity dramatically.
in actual operation these agencies generally consider no more than medical criteria of need for expensive clinical equipment. The professional standards review program, which has established a network of agencies to monitor and control health services utilization, focuses on "medical necessity" as the criterion of interest. A test or procedure is considered necessary if it makes any difference at all to the diagnostic or therapeutic process, not if it is the cheapest approach to the management of the patient. It is not clear that the public wishes such regulation of the use of clinical technology to be based on economic as well as medical criteria, and Congress clearly had medical criteria in mind when it drafted the statute creating this program.

This pessimistic forecast for the economic evaluation of clinical technology must be tempered by noting recent significant contributions both to methodology and to the awareness of the medical community. For example, a compendium of studies on the costs, risks, and benefits of surgery published by the Harvard Center for the Study of Health Practice14 has clearly linked the medical and economic disciplines in the production of useful case studies. McNeil and her colleagues75,76 have made major contributions to the measurement of the cost-effectiveness of diagnostic and screening technologies, and for a number of years investigators at the Kaiser health plans have been using such analyses to assist in the selection and design of their preventive programs (Appendixes B and C).

Not surprisingly, technology assessments intended to identify the societal impacts of emerging technology have been conducted only on a sporadic or demonstration basis and have virtually always been federally funded.

Another area of inadequate information is the development of product standards.
While standards have been established for years by voluntary industrial organizations such as the American Association of Clinical Chemists and the American Hospital Association, and while the medical devices law mandates the development or adoption of performance standards for much equipment-embodied technology, these standards have been concerned primarily with safety and reliability and have not addressed significant information needs of the health care market and of developers.

In a study of voluntary industrywide standards in a variety of industries, Hemenway56 has described the benefits of product uniformity standards: they simplify product comparisons, assure interchangeability, allow scale economies, encourage price competition, and assure future availability. He concludes that while such standards are least likely to develop in a market where both buyers and sellers are disaggregated, such a market is the most likely to benefit from them. The health care
system is such a market. Indeed, perceived equipment requirements may vary from specialty to specialty, from one patient population to another, and from one setting of care to another, making a frustrating and costly environment for developers of new technology, as described by Gross47:

. . . it is not surprising that private industry has often found it frustrating to design equipment for medical use. One consequence of this procrastination in setting standards has been the continued failure, after some five years and diverse research projects, to develop a computer terminal that was or is acceptable as a man-machine interface in the hospital ward.

In light of the significant benefits possible through standardization and the difficulty faced by voluntary standardization efforts in a market with many buyers and many sellers, a national collective effort to encourage standards development is warranted.

In summary, the performance of evaluative studies of equipment-embodied clinical technology has been uncoordinated, undirected, and, particularly for economic evaluations, underfunded. Opportunities for obtaining improved information are not seized, whether because of inadequate funds or for lack of a perceived market for the information. Certainly, the lack of interest by regulatory agencies in economic evaluation constitutes a serious problem, as do the barriers to the development of product standards. The market for the results of evaluation must be improved, as must the coordination of efforts to produce such information.

PROPOSED SOLUTIONS TO THE PROBLEM

The lack of, and need for, systematic approaches to the generation and dissemination of evaluative information on medical technologies have been widely recognized. A group of experts called together by the National Commission for the Protection of Human Subjects130 has suggested that a Board on the Evaluation of Therapeutics and Scientific Advances be established.
In the words of the panel's report:

The precise specifics of any such proposal would need very close examination. Some very different patterns are clearly available: these could range from a publicly sponsored agency for "medical consumers," by way of a clinical research agency empowered to issue nonmandatory certificates of efficacy, to a full-scale
regulatory agency similar to the FDA, with elaborate mandatory powers. At the very least, all authenticated information about the general efficacy, limitations, and/or side effects of medical, surgical, psychotherapeutic, and other health-related procedures should be readily available to "consumers" of health services, or to their organizations. Since the aim of this proposal would be to bridge the gap between "experts" and the lay public, this kind of evaluation could not be left to an expert panel alone. Rather, what is needed is a "consumer oriented" agency, having not only the power to assess "efficacy" and "social costs," but also the prestige to influence the direction of research on new types of therapy and treatment modalities.

In 1977, legislation was introduced in Congress to establish a Center for the Study of Medical Practice within the National Institutes of Health (NIH).117 This proposal recognized the serious deficiency in information on the efficacy of medical procedures and practices. The legislation focused not only on emerging medical practices, but also on existing questionable medical practices.

The National Institutes of Health127 has recently established a procedure for involving itself in disseminating information on biomedical advances with clinical usefulness to providers and practitioners. The procedure involves the establishment of advisory panels to assess the implications of advances in biomedical research for the practice of medicine. The intent of the NIH proposal is to seek a technical consensus on: . . .
the clinical significance of new findings; whether validation for efficacy and safety has been adequate, and if not, what more needs to be done; whether costs, ethical, or other social impacts need to be identified as points for caution when formal recommendations are made; whether the technical complexity of the new findings suggests the need for further demonstration of feasibility in local community settings; whether recommendations are phrased for ready understanding and acceptance by health practitioners and include all appropriate cautions.

These and other suggestions for establishing systems for evaluating new technology place this committee's concern about the present lack of such systems in the mainstream of current thought.
The committee believes that a collective approach to planning, funding, and coordinating evaluative studies of new equipment-embodied technology is needed. Most important is the coordination function, which is totally absent at present. No single body, either public or private, currently has the authority or responsibility to monitor the emergence of new technology; to determine whether, when, and what kinds of evaluative studies are needed; to encourage the performance of such studies through funding; and to act as an information clearinghouse for public and private users.

Numerous federal agencies are involved in funding, conducting, or requiring certain kinds of evaluative studies, but the interests of these agencies are narrow, generally as a result of limited legislative mandates. Certainly the Food and Drug Administration (FDA) will have access to information on the technical validity and, in some cases, the efficacy of new equipment-embodied technology. However, its legislative mandate is limited, and it cannot be expected to extend its concerns to other evaluative criteria or to technologies that fall outside the definition of a medical device. The National Institutes of Health fund clinical trials as part of their research agendas, and the commitment of funds for such activities relative to program size has been increasing in recent years. But these studies are selected fundamentally to support the research mission of the institutes, not to assist in the allocation of health care resources. The Veterans Administration (VA) also supports clinical trials, but at much lower funding levels. As a self-contained health care delivery system, the VA should be interested in funding studies at all criterion levels, but, with a small and special patient population, it cannot be expected to generate all the information needed by the larger civilian health care system. The military medical system is in a similar position.
The Medical Equipment Test and Evaluation Division of the Army's Medical Materiel Agency represents a useful source of specialized evaluative information. The evaluation programs funded by other federal agencies, such as the National Center for Health Services Research and the Center for Disease Control, are other specialized resources that a coordinating body could use.

In the opinion of this committee, a national coordinating body should be established. Its purposes would be to: (1) identify the need for evaluative information on equipment-embodied (and perhaps other) technology; (2) fund planning and evaluation studies where existing funding programs are not adequate;
(3) collect and disseminate available information regarding new and existing technologies to users; (4) encourage and foster national and international efforts to standardize equipment-embodied technology to achieve economy of equipment design, safety, and comparability of data; (5) conduct and sponsor research into methodologies for evaluating medical technology; and (6) coordinate evaluative programs of federal agencies.

The proposed coordinating body need not be governmental. Alternatives include a nonprofit organization, such as a council on technology supported by a consortium of public and private third-party payers. However, many evaluative studies are currently sponsored or conducted by federal agencies such as the NIH, FDA, VA, and others. Major users of the information would be the Medicare and Medicaid programs, health systems agencies, and direct government delivery systems such as the VA, the military medical system, and the Indian Health Service. Therefore, the placement of such authority in an existing federal agency appears to be a reasonable alternative. The best location within the existing federal bureaucracy for such a function is a question that needs more consideration than this committee was able to devote to it. A thorough analysis of the legislative and administrative mandates, interests, and competencies of various federal offices and their place within the organizational hierarchy of the federal bureaucracy is required.

Wherever the coordinating function is placed, it is important to ensure that funds are not merely shifted from existing federal programs to a new agency, but are actually increased. If federal agencies with existing programs take the opportunity to transfer responsibility for evaluation to the organization in charge, their budgets should be reduced accordingly.
Although this committee calls for an increase in funding for evaluation studies, this does not necessarily imply a net increase in health care expenditures. At present, third-party payers reimburse for new procedures before their effectiveness has been definitively established. Because this is often an inefficient way to assess new technology, third-party payers even now bear a high cost of information generation and dissemination. If third-party payers were required to reimburse for procedures conducted on their beneficiaries as part of an evaluative study approved by the national coordinating body, then a major cost of such studies would be covered. Third-party payers could refuse to pay for procedures performed on patients not participating in such a study when the procedure's effectiveness is sufficiently in doubt. The administrative and analytical costs of evaluative studies should come from a collective funding source, which might include federal dollars or represent a consortium of payers.