2
The Evolving Evidence Base—Methodologic and Policy Challenges

OVERVIEW

An essential component of the learning healthcare system is the capacity to continually improve approaches to gathering and evaluating evidence, taking advantage of new tools and methods. As technology advances and our ability to accumulate large quantities of clinical data increases, new challenges and opportunities to develop evidence on the effectiveness of interventions will emerge. With these expansions comes the possibility of significant improvements in multiple facets of the information that underlies healthcare decision making, including the potential to develop additional insights on risk and effectiveness; an improved understanding of increasingly complex patterns of comorbidity; insights on the effect of genetic variation and heterogeneity on diagnosis and treatment outcomes; and evaluation of interventions in a rapid state of flux, such as devices and procedures. A significant challenge will be piecing together evidence from the full scope of this information to determine what is best for individual patients. This chapter offers an overview of some of the key methodologic and policy challenges that must be addressed as evidence evolves.

In the first paper in this chapter, Robert M. Califf presents an overview of the alternatives to large randomized controlled trials (RCTs), and Telba Irony and David Eddy present three methods that have been developed to augment and improve current approaches to generating evidence. Califf suggests that, while the RCT is a valuable tool, the sheer volume of clinical decisions requires that we understand the best alternative methods to use when RCTs are inapplicable, infeasible, or impractical. He outlines
the potential benefits and pitfalls of practical clinical trials (PCTs), cluster randomized trials, observational treatment comparisons, interrupted time series, and instrumental variables analysis, noting that while advances in methodology are important, increasing the evidence base will also require expanding our capacity to do clinical research—for example, through better organization, clinical trials embedded in a nodal network of health systems with electronic health records, and development of a critical mass of experts to guide us through study methodologies.

Evaluation of medical devices is further complicated by their rapid rate of turnover and improvement. Telba Irony discusses the work of the Food and Drug Administration (FDA) in this area through the agency's Critical Path Initiative and its Medical Device Innovation Initiative. The latter emphasizes the need for improved statistical approaches and techniques to learn about the safety and effectiveness of medical device interventions efficiently and in ways that can adapt to changes in technology during evaluation periods. Several examples of the use of Bayesian analysis to accelerate the approval of medical devices were discussed.

David M. Eddy presented his work with Archimedes to demonstrate that mathematical modeling is a promising approach for answering clinical questions, particularly for filling gaps in empirical evidence. Many current gaps in evidence relate to unresolved questions posed at the conclusion of clinical trials; however, most of these unanswered questions are not specifically addressed in subsequent trials, for reasons including cost, feasibility, and clinical interest. Eddy suggests that models can be particularly useful in drawing on existing clinical trial data to address issues such as head-to-head comparisons, combination therapy or dosing, extension of trial results to different settings, longer follow-up times, and heterogeneous populations. Recent work on diabetes prevention in high-risk patients illustrates how the mathematical modeling approach allowed investigators to extend trials in directions that were otherwise not feasible and provided much-needed evidence for truly informed decision making. Access to needed data will increase with the spread of electronic health records (EHRs), as long as person-specific data from existing trials are widely accessible.

As we accumulate increasing amounts of data and pioneer new ways to use information for patient benefit, we are also developing an improved understanding of increasingly complex patterns of comorbidity and insights into the effect of genetic variation and heterogeneity on diagnosis and treatment outcomes. Sheldon Greenfield outlines the many factors that lead to heterogeneity of treatment effects (HTE)—variations in results produced by the same treatment in different patients—including genetic and environmental factors, adherence, polypharmacy, and competing risks. To improve the specificity of
treatment recommendations, Greenfield suggests that prevailing approaches to study design and data analysis in clinical research must change. The authors propose two major strategies to decrease the impact of HTE in clinical research: (1) composite risk scores derived from multivariate models should be considered both in the design of a priori risk stratification groups and in the data analysis of clinical research studies; and (2) the full range of sources of HTE, many of which arise for members of the general population not eligible for trials, should be addressed by integrating the multiple existing phases of clinical research, both before and after an RCT.

In a related paper, David Goldstein gives several examples that illustrate the mounting challenges and opportunities posed by genomics in tailoring treatment appropriately. He highlights recent work on the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), which compared the effectiveness of atypical antipsychotics and one typical antipsychotic in the treatment of schizophrenia and Alzheimer's disease. While results indicated no difference between typical and atypical antipsychotics with respect to discontinuation of treatment, the medications were quite distinct in terms of adverse reactions, such as increased body weight or development of the irreversible condition known as tardive dyskinesia. Pharmacogenetics thus offers the potential to identify subpopulations of risk or benefit through the development of clinically useful diagnostics, but only if we begin to amass the data, methods, and resources needed to support pharmacogenetics research.
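
One simple way to make the composite risk score strategy concrete is to fit a multivariable risk model on baseline covariates, divide patients into risk strata, and then examine the treatment effect within each stratum rather than only on average. The sketch below does this on simulated data; the covariates, effect sizes, and strata are purely hypothetical assumptions used for illustration and do not reproduce the analyses Greenfield and colleagues propose.

```python
# Hypothetical sketch of risk-stratified analysis to explore heterogeneity of
# treatment effects (HTE). Requires numpy, pandas, and scikit-learn.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000

# Simulated baseline covariates (illustrative only).
age = rng.normal(65, 10, n)
comorbidity = rng.poisson(2, n)
treated = rng.integers(0, 2, n)            # randomized 1:1

# Simulated outcome: baseline risk rises with age and comorbidity;
# the assumed treatment benefit is larger in higher-risk patients.
baseline_logit = -6 + 0.05 * age + 0.4 * comorbidity
effect = -0.2 - 0.3 * (comorbidity > 3)    # heterogeneous effect (assumption)
p = 1 / (1 + np.exp(-(baseline_logit + effect * treated)))
outcome = rng.binomial(1, p)

df = pd.DataFrame({"age": age, "comorbidity": comorbidity,
                   "treated": treated, "outcome": outcome})

# 1. Fit a multivariable risk model on baseline covariates only,
#    blinded to treatment assignment.
risk_model = LogisticRegression().fit(df[["age", "comorbidity"]], df["outcome"])
df["risk_score"] = risk_model.predict_proba(df[["age", "comorbidity"]])[:, 1]

# 2. Form a priori risk strata (here, quartiles of the composite score).
df["risk_stratum"] = pd.qcut(df["risk_score"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# 3. Estimate the treatment effect (risk difference) within each stratum.
for stratum, grp in df.groupby("risk_stratum", observed=True):
    rates = grp.groupby("treated")["outcome"].mean()
    print(f"{stratum}: event rate untreated {rates.loc[0]:.3f}, "
          f"treated {rates.loc[1]:.3f}, risk difference {rates.loc[1] - rates.loc[0]:+.3f}")
```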

The final cluster of papers in this chapter engages some of the policy issues in expanding sources of evidence, such as those related to the interoperability of electronic health records, expanding post-market surveillance and the use of registries, and mediating an appropriate balance between patient privacy and access to clinical data. Weisman et al. comment on the rich opportunities presented by interoperable EHRs for post-marketing surveillance data and the development of additional insights on risk and effectiveness. Again, methodologies outside the RCT will be increasingly instrumental in filling gaps in evidence that arise from the use of data related to interventions in clinical practice, because the full value of an intervention cannot truly be appreciated without real-world usage. Expanded systems for post-marketing surveillance offer substantial opportunities to generate evidence, and in defining the approach we also have an opportunity to align the interests of many healthcare stakeholders. Consumers will have access to technologies as well as information on appropriate use; manufacturers and regulatory agencies might recognize significant benefit from streamlined or harmonized data collection requirements; and decision makers might acquire means to accumulate much-needed data for comparative effectiveness studies or recognition of safety signals. Steve Teutsch and Mark Berger comment on the obvious utility of clinical studies, particularly comparative effectiveness studies—which demonstrate which technology is more effective, safer, or more beneficial for particular subpopulations or clinical situations—for informing the decisions of patients, providers, and policy makers. However, they also note several inherent difficulties of our current approach to generating needed information, including a lack of consensus on evidence standards and how they might vary depending on circumstance, and the need to advance the utilization, improvement, and validation of study methodologies.

An underlying theme in many of the workshop papers is the effect of HIPAA (Health Insurance Portability and Accountability Act) regulation on current research and the possible implications for utilizing data collected at the point of care to generate evidence on the effectiveness of interventions. In light of the substantial gains in quality of care and advances in research made possible by linking health information systems and aggregating and sharing data, consideration must be given to how to provide access while maintaining appropriate levels of privacy and security for personal health information. Janlori Goldman and Beth Tossell give an overview of some of the issues that have emerged in response to privacy concerns about shared medical information. While linking medical information offers clear benefits for improving health care, public participation is necessary and will hinge on privacy and security being built in from the outset. The authors suggest a set of first principles regarding identifiers, access, data integrity, and participation that help move the discussion toward a workable solution. This issue has been central to many discussions of how to better streamline the healthcare system and facilitate the process of clinical research while maximizing the ability to provide privacy and security for patients. A recent Institute of Medicine (IOM) workshop, sponsored by the National Cancer Policy Forum, examined some of the issues surrounding HIPAA and its effect on research, and a formal IOM study on the topic is anticipated in the near future.

EVOLVING METHODS: ALTERNATIVES TO LARGE RANDOMIZED CONTROLLED TRIALS

Robert M. Califf, M.D.
Duke Translational Medicine Institute and the Duke University Medical Center

Researchers and policy makers have used observational analyses to support medical decision making since the beginning of organized medical practice. However, recent advances in information technology have allowed researchers access to huge amounts of tantalizing data in the form of administrative and clinical databases, fueling increased interest in the question of whether alternative analytical methods might offer sufficient validity to
elevate observational analysis in the hierarchy of medical knowledge. In fact, 25 years ago, my academic career was initiated with access to one of the first prospective clinical databases, an experience that led to several papers on the use of data from practice and the application of clinical experience to the evaluation and treatment of patients with coronary artery disease (Califf et al. 1983). However, this experience led me to conclude that no amount of statistical analysis can substitute for randomization in ensuring internal validity when comparing alternative approaches to diagnosis or treatment. Nevertheless, the sheer volume of clinical decisions made in the absence of support from randomized controlled trials requires that we understand the best alternative methods when classical RCTs are unavailable, impractical, or inapplicable. This discussion elaborates upon some of the alternatives to large RCTs, including practical clinical trials, cluster randomized trials, observational treatment comparisons, interrupted time series, and instrumental variables analysis, and reviews some of the potential benefits and pitfalls of each approach.

Practical Clinical Trials

The term "large clinical trial" or "megatrial" conjures an image of a gargantuan undertaking capable of addressing only a few critical questions. The term "practical clinical trial" is greatly preferred because the size of a PCT need be no larger than that required to answer the question posed in terms of health outcomes—whether patients live longer, feel better, or incur fewer medical costs. Such issues are the relevant outcomes that drive patients to use a medical intervention. Unfortunately, not enough RCTs employ the large knowledge base that was used in developing the principles relevant to conducting a PCT (Tunis et al. 2003).

A PCT must include the comparison or alternative therapy that is relevant to the choices that patients and providers will make; all too often, RCTs pick a "weak" comparator or placebo. The populations studied should be representative; that is, they should include patients who would be likely to receive the treatment, rather than low-risk or narrow populations selected in hopes of optimizing the efficacy or safety profile of the experimental therapy. The time period of the study should include the period relevant to the treatment decision, unlike short-term studies that require hypothetical extrapolation to justify continuous use. Also, the background therapy should be appropriate for the disease, an issue increasingly relevant in the setting of international trials that include populations from developing countries. Such populations may be composed of "treatment-naïve" patients, who will not offer the kind of therapeutic challenge presented by patients awaiting the new therapy in countries where
active treatments are already available. Moreover, patients in developing countries usually do not have access to the treatment after it is marketed. Well-designed PCTs offer a solution to the "outsourcing" of clinical trials to populations of questionable relevance to therapeutic questions better addressed in settings where the treatments are intended to be used. Of course, the growth of clinical trials remains important for therapies that will actually be used in developing countries, and appropriate trials in these countries should be encouraged (Califf 2006a).

Therefore, the first alternative to a "classical" RCT is a properly designed and executed PCT. Research questions should be framed by the clinicians who will use the resulting information, rather than by companies aiming to create an advantage for their products through clever design. Similarly, a common fundamental mistake occurs when scientific experts without current knowledge of clinical circumstances are allowed to design trials. Instead, we need to involve clinical decision makers in the design of trials to ensure that they are feasible and attractive to practice, as well as to make certain that they include elements critical to providing generalizable knowledge for decision making.

Another fundamental problem is the clinical research enterprise's lack of organization. In many ways, the venue for the conduct of clinical trials is hardly a system at all, but rather a series of singular experiences in which researchers must deal with hundreds of clinics, health systems, and companies (and their respective data systems). Infrastructure for performing trials should be supported by both the clinical care system and the National Institutes of Health (NIH), with continuous learning about the conduct of trials and constant improvements in their efficiency. However, the way trials are currently conducted is an engineering disaster. We hope that eventually trials will be embedded in a nodal network of health systems with electronic health records combined with specialty registries that cut across health systems (Califf et al. [in press]). Before this can happen, however, not only must EHRs be in place, but common data standards and nomenclature must be developed, and there must be coordination among numerous federal agencies (FDA, NIH, the Centers for Disease Control and Prevention [CDC], the Centers for Medicare and Medicaid Services [CMS]) and private industry to develop regulations that will not only allow, but encourage, use of interoperable data.

Alternatives to Randomized Comparisons

The fundamental need for randomization arises from the existence of treatment biases in practice. Recognizing that random assignment is essential to ensuring the internal validity of a study when the likely effects of an intervention are modest (and therefore subject to confounding by
indication), we cannot escape the fact that nonrandomized comparisons will have less internal validity. Nonrandomized analyses are nonetheless needed, because not every question can be answered by a classical RCT or a PCT, and a high-quality observational study is likely to be more informative than relying solely on clinical experience. For example, interventions come in many forms—drugs, devices, behavioral interventions, and organizational changes. All interventions carry a balance of potential benefit and potential risk; gathering important information on these interventions through an RCT or PCT might not always be feasible.

As an example of organizational changes requiring evaluation, consider the question: How many nurses, attendants, and doctors are needed for an inpatient unit in a hospital? Although standards for staffing have been developed for some environments relatively recently, in the era of computerized entry, EHRs, double-checking for medical errors, and bar coding, the proper allocation of personnel remains uncertain. Yet every day, executives make decisions based on data and trends, usually without a sophisticated understanding of their multivariable and time-oriented nature. In other words, there is a dissociation between the experts in analysis of observational clinical data and the decision makers.

There is also an increasing number of sources of data for decision making, with more and more healthcare systems and multispecialty practices developing data repositories. Instruments to extract data from such systems are also readily available. While these data are potentially useful, questionable data analyses and gluts of information (not all of it necessarily valid or useful) may create problems for decision makers.

Since PCTs are not feasible for answering the questions that underlie a good portion of the decisions made every day by administrators and clinicians, the question is not really whether we should look beyond the PCT. Instead, we should examine how best to integrate various modes of decision making, including both PCTs and other approaches to data analysis, in addition to opinion based on personal experience. We must ask ourselves: Is it better to combine evidence from PCTs with opinion, or is it better to use a layered approach using PCTs for critical questions and nonrandomized analyses to fill in gaps between clear evidence and opinion? For the latter approach, we must think carefully about the levels of decision making that we must inform every day, the speed required for this, how to adapt the methodology to the level of certainty needed, and ways to organize growing data repositories and the researchers who will analyze them to better develop evidence to support these decisions. Much of the work in this arena is being conducted by the Centers for Education and Research on Therapeutics (CERTs) (Califf 2006b). The Agency for Healthcare Research and Quality (AHRQ) is a primary source of funding for these efforts, although significant increases in support will be needed
to permit adequate progress in overcoming methodological and logistical hurdles.

Cluster Randomized Trials

If a PCT is not practical, the second alternative to large RCTs is the cluster randomized trial. There is growing interest in this approach among trialists, because health systems increasingly provide venues in which practices vary and large numbers of patients are seen in environments that have good data collection capabilities. A cluster randomized trial performs randomization at the level of a practice rather than the individual patient. For example, certain sites are assigned to intervention A, others use intervention B, and a third group serves as a control. In large regional quality improvement projects, factorial designs can be used to test more than one intervention. This type of approach can yield clear and pragmatic answers, but as with any method, there are limitations that must be considered. Although methods have been developed to adjust for the nonindependence of observations within a practice, these methods are poorly understood and difficult to explain to clinical audiences. Another persistent problem is contamination that occurs when practices are aware of the experiment and alter their practices regardless of the randomized assignment. A further practical issue is obtaining informed consent from patients entering a health system where the practice has been randomized, recognizing that individual patient choice for interventions often enters the equation.

There are many examples of well-conducted cluster randomized trials. The Society of Thoracic Surgeons (STS), one of the premier learning organizations in the United States, has a single database containing data on more than 80 percent of all operations performed (Welke et al. 2004). Ferguson and colleagues (Ferguson et al. 2002) performed randomization at the level of surgical practices to test a behavioral intervention to improve use of postoperative beta blockers and the use of the internal thoracic artery as the main conduit for myocardial revascularization. Embedding this study into the ongoing STS registry proved advantageous, because investigators could examine what happened before and what happened after the experiment. They were able to show that both interventions work, that the use of this practice improved surgical outcomes, and that national practice improved after the study was completed. Variations of this methodologic approach have also been quite successful, such as the amalgamation of different methods described in a recent study (Schneeweiss et al. 2004). This study used both cluster randomization and time sequencing embedded in a single trial to examine nebulized respiratory therapy in adults and the effects of a policy change. Both approaches were found to yield similar results with regard to healthcare utilization, cost, and outcomes.
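
The adjustment for nonindependence mentioned above is commonly summarized by the intraclass correlation coefficient (ICC) and the resulting design effect, which inflates the sample size a cluster randomized trial needs relative to an individually randomized trial. The following is a minimal sketch of that calculation; the ICC, cluster size, and per-arm sample size are assumed values, not figures from any trial discussed here.

```python
# Hypothetical sketch: how clustering inflates the required sample size
# of a cluster randomized trial via the design effect.
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Design effect for equal cluster sizes: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clusters_needed(n_individual: int, cluster_size: int, icc: float) -> int:
    """Clusters per arm needed to match an individually randomized trial
    that required n_individual patients per arm."""
    n_cluster_trial = n_individual * design_effect(cluster_size, icc)
    return math.ceil(n_cluster_trial / cluster_size)

# Assumed inputs (illustrative only): an individually randomized trial would
# need 500 patients per arm; practices contribute ~50 patients each; ICC 0.05.
n_per_arm, m, icc = 500, 50, 0.05
deff = design_effect(m, icc)
print(f"Design effect: {deff:.2f}")
print(f"Inflated sample size per arm: {n_per_arm * deff:.0f} patients")
print(f"Practices per arm: {clusters_needed(n_per_arm, m, icc)}")
```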

Observational Treatment Comparisons

A third alternative to RCTs is the observational treatment comparison. This is a potentially powerful technique requiring extensive experience with multiple methodological issues. Unfortunately, the somewhat delicate art of observational treatment comparison is mostly in the hands of naïve practitioners, administrators, and academic investigators who obtain access to databases without the skills to analyze them properly. The underlying assumption of the observational treatment comparison is that if the record includes information on which patients received which treatment, and outcomes have been measured, a simple analysis can evaluate which treatment is better. However, in using observational treatment comparisons, one must always consider not only the possibility of confounding by indication and inception time bias, but also the possibility of missing baseline data with which to adjust for differences, missing follow-up data, and poor characterization of outcomes due to a lack of prespecification. In order to deal with confounding, observational treatment comparisons must include adjustment for known prognostic factors, adjustment for propensity (including consideration of inverse probability weighted estimators for chronic treatments), and employment of time-adjusted covariates when inception time is variable. Resolving some of these issues with definitions of outcomes and missing data will be greatly aided by development of interoperable clinical research networks that work together over time with support from government agencies. One example is the National Electronic Clinical Trials and Research (NECTAR) network—a planned NIH network that will link practices in the United States to academic medical centers by means of interoperable data systems. Unfortunately, NECTAR remains years away from actual implementation.
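
To make the adjustments described above concrete, the following is a minimal, hypothetical sketch of a propensity score analysis with inverse probability weighting on simulated data in which treatment is confounded by severity of illness. The variable names and effect sizes are illustrative assumptions only; a real observational comparison would require far more careful confounder selection, diagnostics, and sensitivity analyses.

```python
# Hypothetical sketch of propensity-score estimation and inverse probability
# weighting (IPW) for an observational treatment comparison.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Simulated confounder: sicker patients are more likely to receive treatment
# and more likely to have the outcome (confounding by indication).
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-0.5 + 1.0 * severity)))
treated = rng.binomial(1, p_treat)

# Assumed truth: treatment has no effect; outcome depends only on severity.
p_outcome = 1 / (1 + np.exp(-(-2.0 + 1.2 * severity)))
outcome = rng.binomial(1, p_outcome)

# Naive comparison is confounded: treated patients look worse.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# 1. Fit a propensity model (probability of treatment given confounders).
ps_model = LogisticRegression().fit(severity.reshape(-1, 1), treated)
e = ps_model.predict_proba(severity.reshape(-1, 1))[:, 1]

# 2. Inverse probability weights: 1/e for treated, 1/(1-e) for untreated.
w = np.where(treated == 1, 1 / e, 1 / (1 - e))

# 3. Weighted risk difference (should be near zero under the assumed truth).
risk_treated = np.average(outcome[treated == 1], weights=w[treated == 1])
risk_untreated = np.average(outcome[treated == 0], weights=w[treated == 0])

print(f"Naive risk difference:   {naive:+.3f}")
print(f"IPW-adjusted difference: {risk_treated - risk_untreated:+.3f}")
```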

Despite the promise of observational studies, there are limitations that cannot be overcome even by the most experienced of researchers. For example, SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment) (Connors et al. 1996; Cowley and Hager 1996) examined use of a right heart catheter (RHC) using prospectively collected data, so there were almost no missing data. After adjusting for all known prognostic factors and using a carefully developed propensity score, this study found an association between use of RHC in critically ill patients and an increased risk of death. Thirty other observational studies came to the same conclusion, even when looking within patient subgroups to ensure that comparisons were being made between comparable groups. None of the credible observational studies showed a benefit associated with RHC, yet more than a billion dollars' worth of RHCs were being inserted in the United States every year.

Eventually, five years after publication of the SUPPORT RHC study, the NIH funded a pair of RCTs. One focused on heart disease and the other on medical intensive care. The heart disease study (Binanay et al. 2005; Shah et al. 2005) was a very simple trial in which patients were selected on the basis of admission to a hospital with acute decompensated heart failure. These patients were randomly assigned to receive either an RHC or standard care without an RHC. This trial found no evidence of harm or of benefit attributable to RHC. Moreover, other trials were being conducted around the world; when all the randomized data were in, the point estimate comparing the two treatments was 1.003: as close to "no effect" as we are likely ever to see. In this instance, even with some of the most skillful and experienced researchers in the world working to address the question of whether RHC is a harmful intervention, the observational data clearly pointed to harm, whereas RCTs indicated no particular harm or benefit.

Another example is drawn from the question of the association between hemoglobin and renal dysfunction. It is known that as renal function declines, there is a corresponding decrease in hemoglobin levels; therefore, worse renal function is associated with anemia. Patients with renal dysfunction and anemia have a significantly higher risk of dying, compared to patients with the same degree of renal dysfunction but without anemia. Dozens of different databases all showed the same relationship: the greater the decrease in hemoglobin level, the worse the outcome. Based on these findings, many clinicians and policy makers assumed that by giving a drug to manage the anemia and improve hematocrit levels, outcomes would also be improved. Thus, erythropoietin treatment was developed and, on the basis of observational studies and very short-term RCTs, has become a national practice standard. There are performance indicators that identify aggressive hemoglobin correction as a best practice; CMS pays for it; and nephrologists have responded by giving billions of dollars' worth of erythropoietin to individuals with renal failure, with resulting measurable increases in average hemoglobin.

To investigate effects on outcome, the Duke Clinical Research Institute (DCRI) coordinated a PCT in patients who had renal dysfunction but did not require dialysis (Singh et al. 2006). Subjects were randomly assigned to one of two different target levels of hematocrit, normal or below normal. We could not use placebo, because most nephrologists were absolutely convinced of the benefit of erythropoietin therapy. However, when an independent data monitoring committee stopped the study for futility, a trend toward worse outcomes (death, stroke, heart attack, or heart failure)
was seen in patients randomized to the more "normal" hematocrit target; when the final data were tallied, patients randomized to the more aggressive target had a significant increase in the composite of death, heart attack, stroke, and heart failure. Thus, the conclusions drawn from observational comparisons were simply incorrect.

These examples of highly touted observational studies that were ultimately seen to have provided incorrect answers (both positive and negative for different interventions) highlight the need to improve methods aimed at mitigating these methodological pitfalls. We must also consider how best to develop a critical mass of experts to guide us through these study methodologies, and what criteria should be applied to different types of decisions to ensure that the appropriate methods have been used.

Interrupted Time Series and Instrumental Variables

A fourth alternative to large RCTs is the interrupted time series. This study design requires significant expertise because it includes all the potential difficulties of observational treatment comparisons, plus uncertainties about temporal trends. One example is an analysis of administrative data used to assess retrospective drug utilization review and its effects on the rate of prescribing errors and on clinical outcomes (Hennessy et al. 2003). This study concluded that, although retrospective drug utilization review is required of all state Medicaid programs, the authors were unable to identify an effect on the rate of exceptions or on clinical outcomes.

The final alternative to RCTs is the use of instrumental variables, which are variables unrelated to biology that produce a contrast in treatment that can be characterized. A national quality improvement registry of patients with acute coronary syndromes evaluated the outcomes of early versus delayed cardiac catheterization using instrumental variable analysis (Ryan et al. 2005). The instrumental variable in this case was whether the patient was admitted to the hospital on the weekend (when catheterization delays were longer) or on a weekday (when time to catheterization was shorter). Results indicated a trend toward greater benefit of early invasive intervention in this high-risk condition. One benefit of this approach is that variables can be embedded in an ongoing registry (e.g., population characteristics in a particular zip code can be used to create an approximation of the socioeconomic status of a group of patients). However, results often are not definitive, and it is common for this type of study design to raise many more questions than it answers.
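
Both designs just described can be sketched in a few lines on simulated data. The first part below fits a simple segmented regression for an interrupted time series with a level change at a policy date; the second computes an instrumental variable (Wald) estimate using a binary instrument analogous to weekend versus weekday admission. All variable names, effect sizes, and results are hypothetical illustrations, not reanalyses of the studies cited above.

```python
# Hypothetical sketches of an interrupted time series (segmented regression)
# and an instrumental variables (Wald) estimate. Requires numpy.
import numpy as np

rng = np.random.default_rng(2)

# --- Interrupted time series: monthly prescribing-error rate, policy at month 24.
months = np.arange(48)
post = (months >= 24).astype(float)
true_rate = 5.0 - 0.02 * months - 0.8 * post          # assumed level drop of 0.8
errors = true_rate + rng.normal(0, 0.3, months.size)

# Segmented regression terms: intercept, underlying trend, level change after
# the policy, and change in slope after the policy.
X = np.column_stack([np.ones_like(months, dtype=float), months, post,
                     post * (months - 24)])
coef, *_ = np.linalg.lstsq(X, errors, rcond=None)
print(f"Estimated level change at the policy date: {coef[2]:+.2f}")

# --- Instrumental variable: weekend admission shifts the chance of early
# catheterization but is assumed unrelated to underlying patient risk.
n = 50_000
weekend = rng.binomial(1, 0.3, n)                      # instrument Z
severity = rng.normal(0, 1, n)                         # unmeasured confounder
p_early = 0.7 - 0.3 * weekend + 0.1 * severity         # sicker patients treated earlier
early_cath = rng.binomial(1, np.clip(p_early, 0, 1))   # treatment D
p_death = 0.10 + 0.05 * severity - 0.02 * early_cath   # assumed true benefit: 2 points
death = rng.binomial(1, np.clip(p_death, 0, 1))        # outcome Y

naive = death[early_cath == 1].mean() - death[early_cath == 0].mean()

# Wald estimator: (difference in outcome by instrument) /
#                 (difference in treatment by instrument).
num = death[weekend == 1].mean() - death[weekend == 0].mean()
den = early_cath[weekend == 1].mean() - early_cath[weekend == 0].mean()
print(f"Naive risk difference: {naive:+.3f} (biased by confounding by indication)")
print(f"IV (Wald) estimate:    {num / den:+.3f} (aims to recover the true -0.020)")
```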

NOTE: Excerpted, as background to the following paper on privacy, from a recent IOM workshop on the Health Insurance Portability and Accountability Act (HIPAA) of 1996 (Institute of Medicine 2006). This workshop, which brought together participants from a variety of public, private, and scientific sectors, including researchers, research funders, and those who had participated in preparation of the Privacy Rule, identified a number of issues to be addressed when clinical data are used to generate evidence, and cast light on the lack of data about the quantitative and qualitative effects of HIPAA on the conduct of clinical research. A formal IOM study of the issue is anticipated.

PROTECTING PRIVACY WHILE LINKING PATIENT RECORDS

Janlori Goldman, J.D., and Beth Tossell
Health Privacy Project

NOTE: Text reprinted from iHealthBeat, February 2004, with permission from the California HealthCare Foundation, 2007.

Critical medical information is often nearly impossible to access both in emergencies and during routine medical encounters, leading to lost time, increased expenses, adverse outcomes, and medical errors. Imagine the following scenarios: You are rushed to the emergency room, unable to give the paramedics your medical history. Your young child gets sick on a school field trip, and you are not there to tell the doctor that your child has a life-threatening allergy to penicillin. As you are being wheeled into major surgery, your surgeon realizes she must first look at an MRI taken two weeks earlier at another hospital. If health information were easily available electronically, many of the nightmare scenarios above could be prevented.

But, to many, the potential benefits of a linked health information system are matched in significance by the potential drawbacks. The ability to enhance medical care coexists with the possibility of undermining the
privacy and security of people's most sensitive information. In fact, privacy fears have been a substantial barrier to the development of a national health information network. A 1999 survey by the California HealthCare Foundation showed that even when people understood the huge health advantages that could result from linking their health records, a majority believed that the risks—of lost privacy and discrimination—outweighed the benefits. The issue does not split along partisan lines; prominent politicians from both parties have taken positions both for and against electronically linking medical records. During speeches to Congress and the public in 1993, former President Bill Clinton touted a prototype "health security card" that would allow Americans to carry summaries of their medical records in their wallets. In response, former Senate Minority Leader Bob Dole decried the health plan as "a compromise of privacy none of us can accept." And yet, in his State of the Union address last month, President Bush advocated "computerizing health records [in order to] avoid dangerous medical mistakes, reduce costs, and improve care."

History of Medical Record Linkage

But since the HIPAA privacy rule went into effect last April, the issue of unique health identifiers has resurfaced in the political debate. In November, the Institute of Medicine issued a report urging legislators to revisit the question of how to link patient data across organizations. "Being able to link a patient's health care data from one department location or site to another unambiguously is important for maintaining the integrity of patient data and delivering safe care," the report concluded. In fact, the Markle Foundation's Information Technologies for Better Health program recently announced that the second phase of its Connecting for Health initiative will be aimed at recommending policy and technical options for accurately and securely linking patient records. Decision makers in the health arena are once again grappling with the questions of whether and how to develop a national system of linking health information.

Is It Linkage? Or Is It a Unique Health Identifier?

The initial debate over linking medical records foundered over concern that any identifier created for health care purposes would become as ubiquitous and vulnerable as the Social Security number. At a hearing of the National Committee on Vital and Health Statistics in 1998, one speaker argued that "any identifier issued for use in health care will become a single national identifier … used for every purpose under the sun including driver's licenses, voter registration, welfare, employment and tax." Using a health care identifier for non-health purposes would make
people's information more vulnerable to abuse and misuse because the identifier would act as a key that could unlock many databases of sensitive information. To break this impasse, a more expansive approach is needed, focusing on the overarching goal of securely and reliably linking medical information. An identifier is one way to facilitate linkage, but not necessarily the only one. A 1998 NCVHS (National Committee on Vital and Health Statistics) white paper identified a number of possible approaches to linkage, some of which did not involve unique identifiers. At this stage, we should consider as many options as possible. It is simplistic to suggest that creating linkages is impossible simply because some initial proposals were faulty.

Linkage Will Improve Health Care

A reliable, confidential, and secure means of linking medical records is necessary to provide the highest quality health care. In this era of health care fragmentation, most people see many different providers, in many different locations, throughout their lives. To get a full picture of each patient, a provider must request medical records from other providers or the patient, a burdensome process that rarely produces a thorough and accurate patient history, and sometimes produces disastrous errors. According to the Institute of Medicine, more than 500,000 people annually are injured due to avoidable adverse drug events in the United States. Linking medical records is, literally, a matter of life and death. The question, then, is not whether we need to link medical records but what method of linking records will best facilitate health care while also protecting privacy and ensuring security. The time is long overdue for politicians, technical specialists, and members of the health care industry to find a workable solution.

Privacy Must Be Built in from the Outset

If privacy and security are not built in at the outset, linkage will make medical information more vulnerable to misuse, both within health care and for purposes unrelated to care. Even when most records are kept in paper files in individual doctors' offices, privacy violations occur. People have lost jobs and suffered stigma and embarrassment when details about their medical treatment were made public. Putting health information in electronic form, and creating the technical capacity to merge it with the push of a button, only magnifies the risk. Recently, computers containing the medical records of more than 500,000 retired and current military personnel were stolen from a Department of Defense contractor. If those computers had been linked to an external network, the thieves might have been able to
break into the records without even entering the office. We must therefore make sure that any system we implement is as secure as possible.

Similar Obstacles Have Been Overcome in Other Areas

The fields of law enforcement and banking have succeeded in linking personal information across sectors, companies, and locations. Like health care, these fields are decentralized, with many points of entry for data and many organizations with proprietary and jurisdictional differences. Yet the urgent need to link information has motivated them to implement feasible and relatively secure systems. Law enforcement, for example, uses the Interstate Identification Index, which includes names and personal identification information for most people who have been arrested or indicted for a serious criminal offense anywhere in the country. In the banking industry, automated teller machines use a common operating platform that allows information to pass between multiple banks, giving people instant access to their money, anytime, almost anywhere in the world with an ATM card and a PIN. Although the health care field is particularly diverse, complex, and disjointed, these examples show that, with dedication and creativity, it is possible to surmount both technical and privacy barriers to linking large quantities of sensitive information. A caveat: no information system, regardless of the safeguards built in, can be 100 percent secure. But appropriate levels of protection coupled with tough remedies and enforcement measures for breaches can strike a fair balance.

First Principles

In resolving the conjoined dilemmas of linking personal health information and maintaining confidentiality, the Health Privacy Project urges adherence to the following first principles:

- Any system of linkage or identification must be secure, limiting disclosures from within and preventing unauthorized outside access.
- An effective system of remedies and penalties must be implemented and enforced. Misuse of the identifier, as well as misuse of the information to which it links, must be penalized.
- Any system of linkage or identifiers must be unique to health care.
- Patients must have electronic access to their own records. A mechanism for correcting—or supplementing—the record must be in place.
- Patients must have the ability to opt out of the system.
- Consideration should be given to making only core encounter data (e.g., blood type and drug allergies) accessible in emergencies and developing the capacity for a more complete record to be available with patient consent in other circumstances, such as to another provider.

With these privacy protections built in at the outset, a system of linking medical records may ultimately gain the public's approval.

REFERENCES

Abraham, E, P-F Laterre, R Garg, H Levy, D Talwar, B Trzaskoma, B Francois, J Guy, M Bruckmann, A Rea-Neto, R Rossaint, D Perrotin, A Sablotzki, N Arkins, B Utterback, W Macias, and the Administration of Drotrecogin Alfa in Early Stage Severe Sepsis Study Group. 2005. Drotrecogin alfa (activated) for adults with severe sepsis and a low risk of death. New England Journal of Medicine 353(13):1332-1341.
Bhatt, D, M Roe, E Peterson, Y Li, A Chen, R Harrington, A Greenbaum, P Berger, C Cannon, D Cohen, C Gibson, J Saucedo, N Kleiman, J Hochman, W Boden, R Brindis, W Peacock, S Smith Jr., C Pollack Jr., W Gibler, and E Ohman. 2004. Utilization of early invasive management strategies for high-risk patients with non-ST-segment elevation acute coronary syndromes: results from the CRUSADE Quality Improvement Initiative. Journal of the American Medical Association 292(17):2096-2104.
Bhatt, D, K Fox, W Hacke, P Berger, H Black, W Boden, P Cacoub, E Cohen, M Creager, J Easton, M Flather, S Haffner, C Hamm, G Hankey, S Johnston, K-H Mak, J-L Mas, G Montalescot, T Pearson, P Steg, S Steinhubl, M Weber, D Brennan, L Fabry-Ribaudo, J Booth, and E Topol, for the CHARISMA Investigators. 2006. Clopidogrel and aspirin versus aspirin alone for the prevention of atherothrombotic events. New England Journal of Medicine 354(16):1706-1717.
Binanay, C, R Califf, V Hasselblad, C O'Connor, M Shah, G Sopko, L Stevenson, G Francis, C Leier, and L Miller. 2005. Evaluation study of congestive heart failure and pulmonary artery catheterization effectiveness: the ESCAPE trial. Journal of the American Medical Association 294(13):1625-1633.
Califf, R. 2006a. Fondaparinux in ST-segment elevation myocardial infarction: the drug, the strategy, the environment, or all of the above? Journal of the American Medical Association 295(13):1579-1580.
———. 2006b. Benefit assessment of therapeutic products: the Centers for Education and Research on Therapeutics. Pharmacoepidemiology and Drug Safety.
Califf, R, Y Tomabechi, K Lee, H Phillips, D Pryor, F Harrell Jr., P Harris, R Peter, V Behar, Y Kong, and R Rosati. 1983. Outcome in one-vessel coronary artery disease. Circulation 67(2):283-290.
Califf, R, R Harrington, L Madre, E Peterson, D Roth, and K Schulman. In press. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Affairs.
CDC Diabetes Cost-Effectiveness Group. 2002. Cost-effectiveness of intensive glycemic control, intensified hypertension control, and serum cholesterol level reduction for type 2 diabetes. Journal of the American Medical Association 287(19):2542-2551.
Chiasson, J, R Gomis, M Hanefeld, R Josse, A Karasik, and M Laakso. 1998. The STOP-NIDDM Trial: an international study on the efficacy of an alpha-glucosidase inhibitor to prevent type 2 diabetes in a population with impaired glucose tolerance: rationale, design, and preliminary screening data. Study to Prevent Non-Insulin-Dependent Diabetes Mellitus. Diabetes Care 21(10):1720-1725.
Colhoun, H, D Betteridge, P Durrington, G Hitman, H Neil, S Livingstone, M Thomason, M Mackness, V Charlton-Menys, and J Fuller. 2004. Primary prevention of cardiovascular disease with atorvastatin in type 2 diabetes in the Collaborative Atorvastatin Diabetes Study (CARDS): multicentre randomised placebo-controlled trial. Lancet 364(9435):685-696.
Connors, A, Jr., T Speroff, N Dawson, C Thomas, F Harrell Jr., D Wagner, N Desbiens, L Goldman, A Wu, R Califf, W Fulkerson Jr., H Vidaillet, S Broste, P Bellamy, J Lynn, and W Knaus. 1996. The effectiveness of right heart catheterization in the initial care of critically ill patients. SUPPORT Investigators. Journal of the American Medical Association 276(11):889-897.
Cowley, G, and M Hager. 1996 (September 30). Are catheters safe? Newsweek: 71.
Djulbegovic, B, A Frohlich, and C Bennett. 2005. Acting on imperfect evidence: how much regret are we ready to accept? Journal of Clinical Oncology 23(28):6822-6825.
Eddy, D, and L Schlessinger. 2003a. Archimedes: a trial-validated model of diabetes. Diabetes Care 26(11):3093-3101.
———. 2003b. Validation of the Archimedes diabetes model. Diabetes Care 26(11):3102-3110.
Eddy, D, L Schlessinger, and R Kahn. 2005. Clinical outcomes and cost-effectiveness of strategies for managing people at high risk for diabetes. Annals of Internal Medicine 143(4):251-264.
FDA (Food and Drug Administration). 1999. Summary of Safety and Effectiveness: INTER FIX Intervertebral Body Fusion Device. Available from www.fda.gov/cdrh/pdf/p970015b.pdf. (accessed April 4, 2007).
———. 2002. Summary of Safety and Effectiveness Data: InFUSE Bone Graft / LT-CAGE Lumbar Tapered Fusion Device by Medtronic. Available from www.fda.gov/cdrh/pdf/p000058b.pdf. (accessed April 4, 2007).
———. 2004 (March). Challenge and Opportunity on the Critical Path to New Medical Products.
———. 2006. Public Meeting for the Use of Bayesian Statistics in Medical Device Clinical Trials. Available from http://www.fda.gov/cdrh/meetings/072706-bayesian.html. (accessed April 4, 2007).
Ferguson, T, Jr., L Coombs, and E Peterson. 2002. Preoperative beta-blocker use and mortality and morbidity following CABG surgery in North America. Journal of the American Medical Association 287(17):2221-2227.
Fiaccadori, E, U Maggiore, M Lombardi, S Leonardi, C Rotelli, and A Borghetti. 2000. Predicting patient outcome from acute renal failure comparing three general severity of illness scoring systems. Kidney International 58(1):283-292.
Flaker, G, J Warnica, F Sacks, L Moye, B Davis, J Rouleau, R Webel, M Pfeffer, and E Braunwald. 1999. Pravastatin prevents clinical events in revascularized patients with average cholesterol concentrations. Cholesterol and Recurrent Events CARE Investigators. Journal of the American College of Cardiology 34(1):106-112.
Gerstein, H, S Yusuf, J Bosch, J Pogue, P Sheridan, N Dinccag, M Hanefeld, B Hoogwerf, M Laakso, V Mohan, J Shaw, B Zinman, and R Holman. 2006. Effect of rosiglitazone on the frequency of diabetes in patients with impaired glucose tolerance or impaired fasting glucose: a randomised controlled trial. Lancet 368(9541):1096-1105.
Greenfield, S, R Kravitz, N Duan, and S Kaplan. Unpublished. Heterogeneity of treatment effects: implications for guidelines, payment and quality assessment.
Harrington, R, and R Califf. 2006. Late ischemic events after clopidogrel cessation following drug-eluting stenting: should we be worried? Journal of the American College of Cardiology 48(12):2584-2591.
Hayward, R, D Kent, S Vijan, and T Hofer. 2005. Reporting clinical trial results to inform providers, payers, and consumers. Health Affairs 24(6):1571-1581.
———. 2006. Multivariable risk prediction can greatly enhance the statistical power of clinical trial subgroup analysis. BMC Medical Research Methodology 6:18.
Hennessy, S, W Bilker, L Zhou, A Weber, C Brensinger, Y Wang, and B Strom. 2003. Retrospective drug utilization review, prescribing errors, and clinical outcomes. Journal of the American Medical Association 290(11):1494-1499.
IOM (Institute of Medicine). 2000. Interpreting the Volume-Outcome Relationship in the Context of Health Care Quality: Workshop Summary. Washington, DC: National Academy Press.
———. 2006. Effect of the HIPAA Privacy Rule on Health Research: Proceedings of a Workshop Presented to the National Cancer Policy Forum. Washington, DC: The National Academies Press.
Kaplan, S, and S Normand. 2006 (December). Conceptual and analytic issues in creating composite measure of ambulatory care performance. In Final Report to NQF.
Kent, D. 2007. In press. Analyzing the results of clinical trials to expose individual patients' risks might help doctors make better treatment decisions. American Scientist 95(1).
Kent, D, R Hayward, J Griffith, S Vijan, J Beshansky, R Califf, and H Selker. 2002. An independently derived and validated predictive model for selecting patients with myocardial infarction who are likely to benefit from tissue plasminogen activator compared with streptokinase. American Journal of Medicine 113(2):104-111.
Knowler, W, E Barrett-Connor, S Fowler, R Hamman, J Lachin, E Walker, and D Nathan. 2002. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. New England Journal of Medicine 346(6):393-403.
Kravitz, R, N Duan, and J Braslow. 2004. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly 82(4):661-687.
Lagakos, S. 2006. The challenge of subgroup analyses—reporting without distorting. New England Journal of Medicine 354(16):1667-1669.
LaRosa, J, S Grundy, D Waters, C Shear, P Barter, J Fruchart, A Gotto, H Greten, J Kastelein, J Shepherd, and N Wenger. 2005. Intensive lipid lowering with atorvastatin in patients with stable coronary disease. New England Journal of Medicine 352(14):1425-1435.
Lipscomb, B, G Ma, and D Berry. 2005. Bayesian predictions of final outcomes: regulatory approval of a spinal implant. Clinical Trials 2(4):325-333; discussion 334-339, 364-378.
Litwin, M, S Greenfield, E Elkin, D Lubeck, J Broering, and S Kaplan. In press. Total Illness Burden Index Predicts Mortality.
Maciosek, M, N Edwards, A Coffield, T Flottemesch, W Nelson, M Goodman, and L Solberg. 2006. Priorities among effective clinical preventive services: methods. American Journal of Preventive Medicine 31(1):90-96.
Mamdani, M, K Sykora, P Li, S Normand, D Streiner, P Austin, P Rochon, and G Anderson. 2005. Reader's guide to critical appraisal of cohort studies: 2. Assessing potential for confounding. British Medical Journal 330(7497):960-962.
March, J, C Kratochvil, G Clarke, W Beardslee, A Derivan, G Emslie, E Green, J Heiligenstein, S Hinshaw, K Hoagwood, P Jensen, P Lavori, H Leonard, J McNulty, M Michaels, A Mossholder, T Osher, T Petti, E Prentice, B Vitiello, and K Wells. 2004. AACAP 2002 research forum: placebo and alternatives to placebo in randomized controlled trials in pediatric psychopharmacology. Journal of the American Academy of Child and Adolescent Psychiatry 43(8):1046-1056.
McGlynn, E, S Asch, J Adams, J Keesey, J Hicks, A DeCristofaro, and E Kerr. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635-2645.
Mehta, R, C Montoye, M Gallogly, P Baker, A Blount, J Faul, C Roychoudhury, S Borzak, S Fox, M Franklin, M Freundl, E Kline-Rogers, T LaLonde, M Orza, R Parrish, M Satwicz, M Smith, P Sobotka, S Winston, A Riba, and K Eagle. 2002. Improving quality of care for acute myocardial infarction: the Guidelines Applied in Practice (GAP) Initiative. Journal of the American Medical Association 287(10):1269-1276.
Muthen, B, and K Shedden. 1999. Finite mixture modeling with mixture outcomes using the EM algorithm. Biometrics 55(2):463-469.
National Health and Nutrition Examination Survey, 1998-2002. Available from http://www.cdc.gov/nchs/hnanes.htm. (accessed April 4, 2007).
Normand, S, K Sykora, P Li, M Mamdani, P Rochon, and G Anderson. 2005. Reader's guide to critical appraisal of cohort studies: 3. Analytical strategies to reduce confounding. British Medical Journal 330(7498):1021-1023.
Pedersen, T, O Faergeman, J Kastelein, A Olsson, M Tikkanen, I Holme, M Larsen, F Bendiksen, C Lindahl, M Szarek, and J Tsai. 2005. High-dose atorvastatin vs usual-dose simvastatin for secondary prevention after myocardial infarction: the IDEAL study: A randomized controlled trial. Journal of the American Medical Association 294(19):2437-2445.
Pfisterer, M, H Brunner-La Rocca, P Buser, P Rickenbacher, P Hunziker, C Mueller, R Jeger, F Bader, S Osswald, and C Kaiser, for the BASKET-LATE Investigators. In press. Late clinical events after clopidogrel discontinuation may limit the benefit of drug-eluting stents: an observational study of drug-eluting versus bare-metal stents. Journal of the American College of Cardiology.
Pocock, S, V McCormack, F Gueyffier, F Boutitie, R Fagard, and J Boissel. 2001. A score for predicting risk of death from cardiovascular disease in adults with raised blood pressure, based on individual patient data from randomised controlled trials. British Medical Journal 323(7304):75-81.
Prevention of cardiovascular events and death with pravastatin in patients with coronary heart disease and a broad range of initial cholesterol levels. The Long-Term Intervention with Pravastatin in Ischaemic Disease (LIPID) Study Group. 1998. New England Journal of Medicine 339(19):1349-1357.
Randomised trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S). 1994. Lancet 344(8934):1383-1389.
Rawls, J. 1971. A Theory of Justice. Boston, MA: Harvard University Press.
Rochon, P, J Gurwitz, K Sykora, M Mamdani, D Streiner, S Garfinkel, S Normand, and GM Anderson. 2005. Reader's guide to critical appraisal of cohort studies: 1. Role and design. British Medical Journal 330(7496):895-897.
Rothwell, P, and C Warlow. 1999. Prediction of benefit from carotid endarterectomy in individual patients: A risk-modelling study. European Carotid Surgery Trialists' Collaborative Group. Lancet 353(9170):2105-2110.
Ryan, J, E Peterson, A Chen, M Roe, E Ohman, C Cannon, P Berger, J Saucedo, E DeLong, S Normand, C Pollack Jr., and D Cohen. 2005. Optimal timing of intervention in non-ST-segment elevation acute coronary syndromes: insights from the CRUSADE (Can rapid risk stratification of unstable angina patients suppress adverse outcomes with early implementation of the ACC/AHA guidelines) Registry. Circulation 112(20):3049-3057.
Schlessinger, L, and D Eddy. 2002. Archimedes: a new model for simulating health care systems—the mathematical formulation. Journal of Biomedical Informatics 35(1):37-50.
Schneeweiss, S, M Maclure, B Carleton, R Glynn, and J Avorn. 2004. Clinical and economic consequences of a reimbursement restriction of nebulised respiratory therapy in adults: direct comparison of randomised and observational evaluations. British Medical Journal 328(7439):560.
Selker, H, J Griffith, J Beshansky, C Schmid, R Califf, R D'Agostino, M Laks, K Lee, C Maynard, R Selvester, G Wagner, and W Weaver. 1997. Patient-specific predictions of outcomes in myocardial infarction for real-time emergency use: a thrombolytic predictive instrument. Annals of Internal Medicine 127(7):538-556.
Shah, M, V Hasselblad, L Stevenson, C Binanay, C O'Connor, G Sopko, and R Califf. 2005. Impact of the pulmonary artery catheter in critically ill patients: meta-analysis of randomized clinical trials. Journal of the American Medical Association 294(13):1664-1670.
Shepherd, J, S Cobbe, I Ford, C Isles, A Lorimer, P MacFarlane, J McKillop, and C Packard. 1995. Prevention of coronary heart disease with pravastatin in men with hypercholesterolemia. West of Scotland Coronary Prevention Study Group. New England Journal of Medicine 333(20):1301-1307.
Shepherd, J, G Blauw, M Murphy, E Bollen, B Buckley, S Cobbe, I Ford, A Gaw, M Hyland, J Jukema, A Kamper, P Macfarlane, A Meinders, J Norrie, C Packard, I Perry, D Stott, B Sweeney, C Twomey, and R Westendorp. 2002. Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial. Lancet 360(9346):1623-1630.
Singh, A, L Szczech, K Tang, H Barnhart, S Sapp, M Wolfson, and D Reddan. 2006. Correction of anemia with epoetin alfa in chronic kidney disease. New England Journal of Medicine 355(20):2085-2098.
Slotman, G. 2000. Prospectively validated prediction of organ failure and hypotension in patients with septic shock: the Systemic Mediator Associated Response Test (SMART). Shock 14(2):101-106.
Snitker, S, R Watanabe, I Ani, A Xiang, A Marroquin, C Ochoa, J Goico, A Shuldiner, and T Buchanan. 2004. Changes in insulin sensitivity in response to troglitazone do not differ between subjects with and without the common, functional Pro12Ala peroxisome proliferator-activated receptor-gamma2 gene variant: Results from the Troglitazone in Prevention of Diabetes (TRIPOD) study. Diabetes Care 27(6):1365-1368.
Stier, D, S Greenfield, D Lubeck, K Dukes, S Flanders, J Henning, J Weir, and S Kaplan. 1999. Quantifying comorbidity in a disease-specific cohort: adaptation of the total illness burden index to prostate cancer. Urology 54(3):424-429.
Teno, J, F Harrell Jr., W Knaus, R Phillips, A Wu, A Connors Jr., N Wenger, D Wagner, A Galanos, N Desbiens, and J Lynn. 2000. Prediction of survival for older hospitalized patients: the HELP survival model. Hospitalized Elderly Longitudinal Project. Journal of the American Geriatrics Society 48(5 Suppl.):S16-S24.
Teutsch, S, and M Berger. 2005. Evidence synthesis and evidence-based decision making: related but distinct processes. Medical Decision Making 25(5):487-489.
Teutsch, S, M Berger, and M Weinstein. 2005. Comparative effectiveness: asking the right question. Choosing the right method. Health Affairs 24:128-132.
Tunis, S, D Stryer, and C Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.
Tuomilehto, J, J Lindstrom, J Eriksson, T Valle, H Hamalainen, P Ilanne-Parikka, S Keinanen-Kiukaanniemi, M Laakso, A Louheranta, M Rastas, V Salminen, and M Uusitupa. 2001. Prevention of type 2 diabetes mellitus by changes in lifestyle among subjects with impaired glucose tolerance. New England Journal of Medicine 344(18):1343-1350.
Vijan, S, T Hofer, and R Hayward. 1997. Estimated benefits of glycemic control in microvascular complications in type 2 diabetes. Annals of Internal Medicine 127(9):788-795.
Vincent, J, D Angus, A Artigas, A Kalil, B Basson, H Jamal, G Johnson 3rd, and G Bernard. 2003. Effects of drotrecogin alfa (activated) on organ dysfunction in the PROWESS trial. Critical Care Medicine 31(3):834-840.
Welke, K, T Ferguson Jr., L Coombs, R Dokholyan, C Murray, M Schrader, and E Peterson. 2004. Validity of the Society of Thoracic Surgeons National Adult Cardiac Surgery Database. Annals of Thoracic Surgery 77(4):1137-1139.
Zimmerman, J, E Draper, L Wright, C Alzola, and W Knaus. 1998. Evaluation of acute physiology and chronic health evaluation III predictions of hospital mortality in an independent database. Critical Care Medicine 26(8):1317-1326.