7 Problems of Multi-Institutional Studies

In Chapter 6 we suggested a multi-institutional approach to conducting technology assessments. This strategy has several merits (Sox 1986). These include access to a larger patient population, which could reduce the time needed to obtain the required number of study subjects. Findings from the potentially more diverse population of a multicenter study might be more easily generalized to a wider patient population. In addition, the pooled resources of a number of centers, both in terms of expertise and facilities, will be greater than the resources of a single center (Meinert 1980). To obtain these advantages, however, those planning the assessment must pay careful attention to some requirements and potential problems in four general areas: (1) study structure and organization, (2) study design and protocol development, (3) patient recruitment, and (4) quality control and monitoring.

STUDY STRUCTURE AND ORGANIZATION

Most of the following structural requirements pertain to multi-institutional studies in general; they are not unique to studies of diagnostic technology. (This section draws extensively on two reviews of the organization of collaborative clinical trials, Ederer 1975 and Meinert 1981.) All multicenter studies must have a well-defined organizational structure if adequate communication and monitoring are to occur (Meinert 1980). Figure 7.1 depicts an arrangement suggested by a committee of the National Advisory Heart Council (as adapted by Ederer 1975).

FIGURE 7.1 Organization of a multi-institutional study. [The original diagram links the funding agency, the advisory committee, the study chairperson, an executive committee, the coordinating center, and the clinical centers.]

At the top of the organizational chart is the chairperson, who must be willing to invest a considerable portion of his or her time and to take full responsibility for coordinating the study. This person must be able to provide strong leadership and be sensitive to the politics of the study group.

Steering Committee

The steering committee would be composed of principal investigators from the major participating clinical centers and would be responsible for designing the protocol, approving protocol changes, and dealing with operational problems. Approval of the protocol must involve all investigators. Nevertheless, if the number of centers involved in the study is very large, a much smaller subset of the steering committee, an executive committee, may be needed to make timely decisions. The task of controlling performance would be shared by the study chairperson and the steering committee.

Coordinating Center

This important component of the study would serve a variety of functions, including preparing the manual of operations, developing and pretesting data collection forms, and randomizing the patients to the different arms of the trial. The center would develop the statistical design for the study and would also be responsible for data analysis; its staff would therefore include a full-time biostatistician. In addition, follow-up interviews could be done by telephone from the center to ensure uniformity.

Perhaps the most important function of the coordinating center is its monitoring function. With centralized data management, the data would be monitored for quality, and periodically edited and analyzed. The coordinating center would be able to detect major drops in the level of participation at any of the clinical units. The center should be in a separate location from the funding agency, which may have a stake in a particular outcome, and from any of the clinical centers, which may try to "dump" extra duties on a center conveniently located within their walls.
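The randomization function just described can be made concrete. The sketch below is our illustration only, not a procedure drawn from any study discussed in this chapter; the block size, arm labels, and center names are invented. It shows how a coordinating center might issue permuted-block assignments stratified by clinical center, so that the two test arms remain balanced within each center.

```python
# Hypothetical sketch of centralized, permuted-block randomization
# stratified by clinical center. Block size, arm labels, and center
# names are illustrative assumptions.
import random

ARMS = ["test_A", "test_B"]   # the two diagnostic technologies under study
BLOCK_SIZE = 4                # each block of 4 assignments contains 2 of each arm

class CenterRandomizer:
    """Issues balanced test assignments for one clinical center."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # fixed seed -> auditable sequence
        self.block = []

    def next_assignment(self):
        if not self.block:  # start a new permuted block when the old one is used up
            self.block = ARMS * (BLOCK_SIZE // len(ARMS))
            self.rng.shuffle(self.block)
        return self.block.pop()

# One randomizer per center keeps the arms balanced within each stratum.
randomizers = {name: CenterRandomizer(seed=i)
               for i, name in enumerate(["center_1", "center_2", "center_3"])}

for patient_id, center in [(101, "center_1"), (102, "center_1"), (103, "center_2")]:
    print(patient_id, center, randomizers[center].next_assignment())
```

Because the assignment sequence exists only at the coordinating center, clinicians cannot anticipate the next allocation; trials often also vary the block size to make the tail of each block harder to guess.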

Advisory Committee

A group of investigators who are not contributing data to the study would form an advisory committee to review the study and protocol design, recommend changes, adjudicate controversies, and make suggestions about adding or dropping centers. This committee would also advise the sponsoring agency on the design and progress of the trial. These individuals, having no responsibility for the care of patients in the study, would evaluate interim data from the coordinating center for trends indicating that the study should be terminated early. For example, it would be unethical to continue a study if it became clear that one of the tests was clearly better or had serious unexpected side effects.

Central Observers

Finally, central laboratories or observers may be needed to ensure consistent performance throughout the centers. A patient's entry into the study often depends on the value of a particular laboratory test (for example, the serum glucose) or a specific finding on a diagnostic test (such as a chest X ray). If such tests are not performed and interpreted in a standardized manner, two patients with the same true state may be evaluated at different centers but only one included in the study. In the University Group Diabetes Program (UGDP) study, admission was based in part on the results of a glucose tolerance test, which determined the level of whole blood glucose. Four of the clinics, however, substituted serum glucose levels for at least a portion of the study (Weinstein 1971). The serum glucose level that defines diabetes mellitus is 20 mg/dl higher than the blood glucose level used to define this disease. Thus, several centers enrolled fewer mild diabetics than did others. Similarly, evaluating the study endpoint may require tests with inherent variability. Eliminating interinstitutional variation is especially important for tests whose results will be used to decide between (for example) inclusion or exclusion and test success or failure (Kahn 1979).
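One way a coordinating center might verify that an entry test is being interpreted uniformly is to circulate the same set of films to readers at different centers and compute a chance-corrected agreement statistic. Cohen's kappa, sketched below with invented readings, is one conventional choice; the chapter itself does not prescribe any particular statistic, so this is offered only as an illustration.

```python
# Cohen's kappa for two readers classifying the same films as
# "include" or "exclude". The readings are invented for illustration.
from collections import Counter

def cohens_kappa(reader_a, reader_b):
    n = len(reader_a)
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Chance agreement from each reader's marginal category frequencies.
    count_a, count_b = Counter(reader_a), Counter(reader_b)
    expected = sum(count_a[c] * count_b[c]
                   for c in set(reader_a) | set(reader_b)) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.33: agreement well short of perfect
```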

This multitude of requirements indicates the complexity involved in organizing a cooperative study. Recruiting most of the necessary personnel should not be a major obstacle. As we discussed in Chapter 4, however, the requirement for central observers may pose a serious problem. Compensating highly trained subspecialists for devoting a large amount of time to the study could be quite costly. Individuals willing or able to give up time from other commitments may be difficult to find. The ongoing program of technology assessment we have proposed would employ its own staff of subspecialists.

STUDY DESIGN AND PROTOCOL DEVELOPMENT

With the framework of a cooperative study in mind, we can now examine the activities at each level of the organizational chart in more detail. We will begin with the most important tasks of the steering committee: study design and protocol development.

Focus and Compromise

The study design should begin with a carefully identified objective, and the protocol must be very detailed and precise. The active participation of many principal investigators with different areas of expertise can produce synergy, but it may produce antagonism as well. The input of the individual investigators, representing a variety of disciplines, may result in a study objective that is too open-ended or overly ambitious (Machin et al. 1979).

Trying to obtain as much data as possible in order to satisfy everyone's requirements may produce a trial that has too many hypotheses. These, in turn, pose a statistical problem: the larger the number of comparisons, the more likely it is that a difference will appear statistically significant when it is actually the result of random fluctuations in sampling. Also, the burden of gathering the additional data may be impractically heavy.
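This multiple-comparisons hazard can be quantified: with k independent comparisons each tested at the 0.05 level, the probability that at least one appears significant by chance alone is 1 - 0.95^k. The short calculation below (ours, for illustration) shows how quickly this family-wise error rate grows.

```python
# Family-wise error rate for k independent comparisons at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha) ** k.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> P(at least one spurious result) = {fwer:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```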

To illustrate the data-gathering burden, suppose a patient has agreed to participate and the study has enough funding to monitor his or her clinical condition. The investigators may be tempted to use a diagnostic imaging technique to answer any number of questions about the progress of the patient's disease, rather than, for example, focusing on the technique's ability to detect metastases to the liver. In studies of diagnostic technology, additional procedures may result in both inconvenience and increased costs for the patient. Often the result is poor adherence to the study protocol and patients who withdraw from the study.

Cooperation in a multicenter study will require compromise. When the steering committee makes a decision about the study objective or the protocol, they must reach unanimity because, as the statistical coordinators of the UGDP study put it, "a majority decision cannot be a substitute where professional ethics and scientific conviction are concerned" (Klimt and Meinert 1966, p. 343). Thus, in a multicenter study, the time needed to agree on a final protocol will be greater than in a single center. A principal investigator who perceives omissions or objectionable provisions in the protocol may refuse to participate from the beginning. Alternatively, he or she may try to find a way "around" the problem or may drop out once the study is in progress. For example, in the methods paper of the extracranial/intracranial (EC/IC) bypass study, the authors state that "some centers have joined the trial with a commitment to exclude patients with no symptoms since their initial carotid occlusions were demonstrated," although such patients were eligible for the study (EC/IC Study Group 1985, p. 399). Compromises aimed at ensuring the participation of particular centers are less problematic if they are made explicit, yet the potential for introducing bias should not be underestimated. In addition, achieving compromise may mean using methods or procedures that "represent a level of consensus which is less than the best scientific basis available" (Klimt and Meinert 1966, p. 343).

Reproducibility Versus Generalizability

In the effort to construct a reproducible protocol, the investigators may choose methods that are agreeable to all but are not widely used in the day-to-day practice of medicine. There may be a trade-off in the study design between the need for objectivity and precision and the acquisition of clinically used or relevant data. In the UGDP study, the standard clinical assessment of peripheral neuropathy, which examines touch, pain, and tibial perception of vibration, was replaced by a biothesiometric measurement that was presumably more objective but was not clinically practical. According to Feinstein's critique of the study, the only published results from the biothesiometric procedure were for assessments of vibration in the right index finger, and thus they had "an uncertain pertinence for the problem of peripheral neuropathy in the legs" (Feinstein 1971, p. 176).

Techniques providing highly objective, precise results may be more reproducible than traditional clinical methods, and they may be adopted to facilitate standardization between centers. Difficulty arises when clinicians wish to determine if the study results apply to their patients but do not have access to the method used in the study. The data on the characteristics of the study patients obtained with the "foreign" procedure cannot be easily translated into familiar clinical information. If the study does not provide data on these same characteristics obtained with a commonly used procedure, the physician will be unable to make the necessary comparison and will be uncertain about whether a particular patient is like the study population. Thus, there may be a loss of generalizability of the results of the study.

The need for consistency may result, therefore, in substituting a highly objective paraclinical method for a clinical method that is generally used. It may also lead to using a common procedure in an unusual fashion. For example, the dose of a hypoglycemic agent prescribed for a diabetic patient is usually flexible and is changed according to changes in the patient's status. The UGDP study protocol, however, specified "arbitrarily chosen fixed dosages that were maintained invariantly throughout the project unless the patient dropped out or developed major untoward events" (p. 170). Deviation from usual clinical practice may also be used to facilitate statistical analysis or to maintain blinding (Feinstein 1971). Although standardization among centers is an important goal, departures from ordinary practice may lead to a protocol that is difficult to follow and is frequently misinterpreted.

The protocol for studies of diagnostic technology must specify the procedure for interpreting a test and how the results are to be expressed. These methods must be the same in all centers if data from the different centers are to be combined. Thus, the most objective method for interpretation may appear to be the one of choice, although it may not be the one that is commonly used in clinical practice. Suppose the criterion for including patients in a study is based on the size of a pulmonary nodule on a chest X ray. Patients with nodules of a certain size are then randomized to either of two imaging technologies to determine which is most effective at detecting calcification of the nodule (which indicates a benign condition). According to the study protocol, a scanning densitometer is to be used to measure the nodule, because the customary practice of using a ruler is presumed to be too difficult to reproduce. Without access to a densitometer to scan the patient's radiograph, a clinician might find it difficult to determine which of the tests evaluated by the study her patient should receive. The result may be unnecessary adoption of densitometry by physicians who feel compelled to use it in order to apply the study findings to their patients.

Variation in the equipment used by the participating centers is a related problem (McNeil et al. 1981). Which type of equipment should the protocol specify, if any? The problem posed by such variations is that differing performance characteristics of, for example, two CT scanners, one used at center A and the other at center B, may obscure the difference between CT scanning and another imaging technique to which it is being compared.

PATIENT RECRUITMENT

Before recruiting patients, it is important to calculate how many will be needed to answer the study question (Freiman et al. 1978). For example, one might need to determine how large a patient population would be required to demonstrate a difference of a given magnitude between the ability of two tests to detect a lesion in the brain. Because the study population of a multicenter trial is likely to be more heterogeneous than the population from a single center, the sample variance within the intervention groups is likely to be greater. Thus, a larger number of patients may be required in a multicenter study to detect a specified difference between tests. If the condition to be detected by the test is sufficiently rare, it may be difficult to recruit a large enough sample.
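As a rough sketch of such a calculation, the standard normal-approximation formula for comparing two independent proportions can be applied to the sensitivities of two tests. The inputs below (sensitivities of 0.80 versus 0.90, two-sided alpha of 0.05, 80 percent power) are invented for illustration; the result counts diseased patients only, and the totals would still need to be inflated for disease prevalence and for the extra between-center variance just described.

```python
# Approximate number of diseased patients needed per arm to distinguish
# two test sensitivities, using the normal-approximation formula for
# comparing two independent proportions. Inputs are illustrative.
from math import sqrt, ceil

def n_per_arm(p1, p2):
    z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05; power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g., sensitivities of 0.80 vs. 0.90 for detecting a brain lesion:
print(n_per_arm(0.80, 0.90))  # about 200 diseased patients per arm
```

Any additional between-center variability widens the standard errors further, which is the arithmetic behind the caution that multicenter samples may need to be larger.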

The generalizability of the results depends on how study patients compare with all patients with the disease being examined. One advantage of a multi-institutional study is that its combined patient population may more closely approximate the "real world" than the patients from a single institution. As we pointed out earlier, however, a larger sample size may be needed to generate the statistical power required to answer the study question, and simply adding more patients to the sample may not be adequate. This may mean using a long list of exclusion criteria to try to produce greater homogeneity in the study sample, which in turn would require keeping a log of data on the excluded patients to verify the representativeness of the population. Imposing highly restrictive exclusion criteria may contribute to increased statistical precision, while detracting considerably from the generalizability of the study results. In addition, stringent exclusion criteria may make it difficult to obtain the required number of patients. Thus, efforts to increase power by reducing heterogeneity may be offset when the study fails to enroll the target number of patients.

Insufficiently explicit inclusion and exclusion criteria can present serious problems as well. If too much room is left for subjective judgment, application of the eligibility criteria may not be uniform at the different centers. The variation may be "random," with each clinic using slightly different interpretations of the criteria. The result may be a diagnostically nonuniform patient population. For example, in the UGDP study, "the clinic physicians used their judgement to screen all patients for absence of life-endangering diseases so as to obtain patients with a minimum life expectancy of five years" (Weinstein 1971, p. 171). There are no quantitative rules for making such prognostic judgments, and specific guidelines were not created for the study. As Feinstein points out, although such criteria would not have guaranteed correct predictions, the predictions at the various clinics would at least have been consistent (Feinstein 1971).

The result of absent or inconsistent application of admission criteria becomes obvious if the different clinics obtain markedly different results for the same arm of the study. Pooling the data and doing valid statistical analyses may be impossible. In the UGDP study, the widely disparate mortality results obtained with tolbutamide treatment at the different clinics clearly indicated that the tolbutamide treatment groups did not make up a homogeneous population. To use the combined data, a retrospective stratification into groups with similar risk of death was required. Because increased mortality was not an anticipated result of the study, however, baseline information on risk factors for death was never obtained. In some studies, appropriate corrections for differences between clinics may not be possible, and a true difference, for example, may fail to achieve statistical significance because of a wide variance.

Besides being uniformly applied, the criteria used to establish the patient's baseline state following enrollment must also be highly specific.

All clinics must use the same specifications for the diagnosis of particular conditions, especially for conditions with a broad clinical spectrum, such as angina pectoris, or if changes in the severity of the conditions are important in evaluating the outcome of the study. Nonuniformity, whether it occurs in the process of enrollment or during the initial workup, can profoundly affect generalizability as well as validity. When patients are excluded for ill-defined reasons, or when they are inadequately characterized, the composition of the study population is unknown, and the study results cannot be applied with confidence to any one patient.

QUALITY CONTROL AND MONITORING

The importance of monitoring in a multi-institutional study cannot be overestimated. Even the most carefully designed study may fail to produce valuable information if the performance of individual centers is not adequately monitored to ensure correctness and thoroughness in the administration of the protocol. Accurate and efficient data entry requires the individuals participating in data collection to cooperate with the staff of the coordinating center. Standardized data forms, designed specifically for the study, must be filled out and returned to the coordinating center without delay (Gaus 1979).

Adherence to the protocol may vary from center to center. With increasing delay between acquiring the data from the patient and completing the data forms, the likelihood of errors or omissions increases. Similarly, there may be substantial delay between the time the forms are completed and when they appear at the coordinating center (Marks et al. 1984). Thus, an error (for example, a patient admitted to the trial by mistake) may not be discovered by the monitoring committee until much later, after considerable time and energy have been invested. In the UGDP study, 69 patients who did not meet the diagnostic criteria for admission to the study were nevertheless included (Feinstein 1971).

Alternatively, some clinics may routinely fail to obtain all the necessary baseline information. In the UGDP study, data specified by the protocol were never obtained for some patients. For example, 31~ patients did not have retinal photographs. As Feinstein (1971) points out (p. 177), the absence of these data "adds to the subsequent problems of evaluating risk factors and transitions in groups whose denominator is substantially reduced by the omissions." Another reason for monitoring is to discover attempts to "tamper" with randomization. The test groups from the center with the altered scheme will suffer from selection bias. If such bias is not detected until late in the study, the biased groups may have to be dropped from the analysis, and the study may fall short of the required number of patients.

Some centers may fail to perform a test or give a treatment as specified by the protocol. In a study comparing the use of computed tomography (CT) with radionuclide (RN) studies in patients with intracranial disease, one institution used a mercury isotope and an unusual type of imaging instrument although the protocol had carefully specified sodium pertechnetate (McNeil 1979). The cases from this institution, 20 percent of the total, could not be used to analyze the relative lesion detection capacities of CT and RN. Another institution did not follow the provision calling for a minimum of eight cross-sectional views with the CT scanner. Follow-up may be inadequate or may not occur at all. For example, in a study designed to assess the efficacy of a diagnostic technology, some centers may not proceed with the gold-standard test to verify the absence of disease in patients with a negative index test. Inadequate follow-up was a substantial problem in the CT/RN study (McNeil 1979).

For cooperative studies of diagnostic technology, poor technical quality control can be especially disastrous. Large numbers of technically suboptimal exams on eligible patients will reduce the effective number of cases in the study. Hidden bias may enter the study at this point, that is, when patients are removed from the study because of "nondiagnostic or technically unsatisfactory" exams (McNeil 1979). It is important to guard against evaluating only patients on whom the technology "works" (Philbrick et al. 1982). For example, suppose the performance of a new imaging technique is being studied. If some of the "substandard" examinations are really false negatives, and these patients are withdrawn from the study, the sensitivity of the new technique may be artificially inflated (Begg et al. 1986). In a multi-institutional study, the technical problems that limit reproducibility of tests at each of the participating centers must also be considered. Skills of individuals may vary (Harris 1981), and in a multicenter technology assessment, comparable interpretative skills may be just as important as comparable equipment.
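A small worked example with invented round numbers shows the mechanism: if 10 of 100 diseased patients receive "technically unsatisfactory" examinations that in truth represent missed disease, excluding those examinations inflates the apparent sensitivity.

```python
# Invented round numbers illustrating the exclusion bias described by
# Begg et al. (1986): of 100 diseased patients, 80 exams are positive,
# 10 negative, and 10 "nondiagnostic" -- but the nondiagnostic exams
# are, in truth, missed disease.
true_pos, false_neg, nondiagnostic = 80, 10, 10

biased = true_pos / (true_pos + false_neg)                 # the 10 excluded
honest = true_pos / (true_pos + false_neg + nondiagnostic) # the 10 counted as misses

print(f"sensitivity excluding nondiagnostic exams: {biased:.2f}")  # 0.89
print(f"sensitivity counting them as failures:     {honest:.2f}")  # 0.80
```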

Each center's enrollment rate must also be monitored. "Minor" participants, that is, centers that contribute less than a specified number of cases, would have less experience with the protocol and/or the technology involved. Some evidence supports the contention that the quality of the participation of minor participants is lower than that of "major" participants (Sylvester et al. 1981). The cases from minor centers may make a greater contribution to the variance, offsetting any benefit derived from having increased the total number of cases (Gaus 1979).

Poor quality control and inadequate monitoring thus present a number of hazards. In the best case, deviations from the study protocol would be prevented, or at least caught as they occurred. Early detection would reduce the number of patients who were admitted and then dropped at a later date, and it would minimize the costs and resources attendant to such mistakes. Additional patients could be enrolled, and corrections could be made if biases were discovered. If errors are not discovered within a reasonable amount of time, the number of valid cases available for statistical analysis may be reduced, and generalizability may be compromised. In the worst case, poor quality control and inconsistencies would not be detected, and the study conclusion would be erroneous. If the result translates into a general policy that leads the medical community to adopt a less efficient technology or retire one that is still useful, patient care may suffer.

SUMMARY AND CONCLUSION

This chapter has presented some of the difficulties that may be encountered when a multi-institutional framework is used for the assessment of diagnostic technologies. The multi-institutional study requires a more clearly defined organizational structure than a single-center study. There must be strong leadership to coordinate the various units, maintain open communication between them, and ensure continuous monitoring. More time will be needed to plan a multicenter trial because the investigators must agree on a focused study question and then approve a protocol designed to answer that question.

Once a multi-institutional study is underway, the greatest challenge will be to obtain a uniform data set, that is, to remove sources of spurious variability between the centers. The validity of statistical analyses on pooled data requires that each center obtain similar results for similar patients. Thus, the protocol in a multi-institutional study must be very precise. It must be easy to follow and reproducible in each of the centers. "Reproducibility," however, must be used with caution as a criterion for protocol design. Data obtained using the most reproducible and objective methods will not be useful if such methods are not those of usual clinical practice. All centers must have access to comparable equipment and technical expertise.

Problems may also arise at the level of patient enrollment. The protocol should include explicit information about which patients are to be enrolled and which are to be excluded. Ideally, all patients who are to have the test under study at a given institution would be enrolled.

One of the chief advantages of the multi-institutional study is access to a wider spectrum of patients, yielding a study population that closely represents the spectrum of patients encountered in clinical practice. If, however, each center enrolls a qualitatively different population of patients, pooled analyses may be invalid or impossible.

Many of these problems can be avoided or corrected with adequate monitoring. In addition to patient enrollment, protocol adherence and data quality at each center must be monitored continuously. Monitoring is the key to ensuring a uniform data set when many centers are involved.

The multi-institutional model for assessment of diagnostic technology has many advantages (see Chapter 6). Even more than the single-center model, however, it requires careful organization, commitment of those involved, extensive planning, and monitoring if the study is to succeed. When these prerequisites are met, such studies may provide valuable clinical information that would otherwise be impossible to obtain.

REFERENCES

Begg, C.B., Greenes, R.A., and Iglewicz, B. The influence of uninterpretability on the assessment of diagnostic tests. Journal of Chronic Diseases 39:575-584, 1986.

The Coronary Drug Project Research Group. Practical aspects of decision making in clinical trials: The Coronary Drug Project as a case study. Controlled Clinical Trials 1:363-376, 1981.

The EC/IC Study Group. The international cooperative study of extracranial/intracranial arterial anastomosis (EC/IC bypass study): Methodology and entry characteristics. Stroke 16(3):397-406, 1985.

Ederer, F. Practical problems in collaborative clinical trials. American Journal of Epidemiology 102:111-118, 1975.

Feinstein, A.R. Clinical biostatistics—VIII. An analytic appraisal of the University Group Diabetes Program (UGDP) study. Clinical Pharmacology and Therapeutics 12(2):167-191, 1971.

Freiman, J.A., Chalmers, T.C., Smith, H., Jr., and Kuebler, R.R. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. New England Journal of Medicine 299:690-694, 1978.

Friedewald, W.T., and Levy, R.I. Planning and implementation of large clinical trials. Israel Journal of Medical Sciences 22:191-196, 1986.

Gaus, W. The experience of the EORTC-Gnotobiotic Project Group in planning, organizing, and evaluating cooperative clinical trials. Proceedings of an EORTC symposium, Brussels, Belgium, April 26-29, 1978. In Tagnon, H.J., and Staquet, M.J., eds., Controversies in Cancer: Design of Trials and Treatment. New York, Masson Publishers, 1979.

Harris, J.M. The hazards of bedside Bayes. Journal of the American Medical Association 246:2602-2605, 1981.

Kahn, H.A. Diagnostic standardization. In Roth, H.P., and Gordon, R.S., Jr., eds., Proceedings of the National Conference on Clinical Trials Methodology, October 1977. Clinical Pharmacology and Therapeutics 25:703-711, 1979.

Klimt, C.R., and Meinert, C.L. The design and methods of cooperative therapeutic trials with examples from a study on diabetes. Chapter 19 (pp. 341-373), in International Encyclopedia of Pharmacology and Therapeutics, Clinical Pharmacology, Vol. I. New York, Pergamon Press, 1966.

Machin, D., Staquet, M.J., and Sylvester, R.J. Advantages and defects of single-center and multicenter clinical trials. Proceedings of an EORTC symposium, Brussels, Belgium, April 26-29, 1978. In Tagnon, H.J., and Staquet, M.J., eds., Controversies in Cancer: Design of Trials and Treatment. New York, Masson Publishers, 1979.

Marks, J.W., Croke, G., Gochman, N., et al. Major issues in the organization and implementation of the National Cooperative Gallstone Study (NCGS). Controlled Clinical Trials 5:1-12, 1984.

McNeil, B.J. Pitfalls in and requirements for evaluations of diagnostic technologies. In Wagner, J., ed., Proceedings of a Conference on Medical Technologies. DHEW Pub. No. (PHS) 79-3254. Washington, D.C., U.S. Government Printing Office, 1979:33-39.

McNeil, B.J., Sanders, R., Alderson, P.O., et al. A prospective study of computed tomography, ultrasound, and gallium imaging in patients with fever. Radiology 139:647-653, 1981.

Meinert, C.L. Toward more definitive clinical trials. Controlled Clinical Trials 1:249-261, 1980.

Meinert, C.L. Organization of multicenter clinical trials. Controlled Clinical Trials 1:305-312, 1981.

Philbrick, J.T., Horwitz, R.I., Feinstein, A.R., et al. The limited spectrum of patients studied in exercise test research: Analyzing the tip of the iceberg. Journal of the American Medical Association 248:2467-2470, 1982.

Sox, H.C., Jr. Centers of excellence in technology assessment: A proposal for a national program for the study of health care technology. In A Forward Plan for Medicare Coverage and Technology Assessment. Report to the Assistant Secretary for Health and Human Services. Washington, D.C., Lewin & Associates, 1986.

Sylvester, R.J., Pinedo, H.M., De Pauw, M., et al. Quality of institutional participation in multicenter clinical trials. New England Journal of Medicine 305:852-855, 1981.

Weiss, D.G., Williford, W.O., Collins, J.F., and Bingham, S.F. Planning multicenter clinical trials: A biostatistician's perspective. Controlled Clinical Trials 4:53-64, 1983.
