6

Generating and Applying Knowledge in Real Time

In 2008, Ann Morrison received two all-metal hip replacements at the age of 50. Soon after the procedure, she experienced intense rashes, pain, and inflammation at the sites of her surgery. The injurious devices were replaced in 2010, just 2 years after she received her initial hip replacements; hip replacements typically last 15 years or more. Today, as a result of extensive tissue damage caused by metal debris shed from the original replacements, Ann requires a brace to walk, and she still has not been able to return to her work as a physical therapist. With the proper digital infrastructure—electronic health records, the use of clinical data to compare the effectiveness and efficiency of different interventions, and registries to track side effects and safety—Ann’s experience could have been avoided. Instead, the U.S. health care system currently lacks the data, monitoring, and analysis capabilities necessary to effectively evaluate, disseminate, and implement the ever-increasing amount of health information and technologies (Meier and Roberts, 2011).

Although an unprecedented amount of information is available in journals, guidelines, and other sources, patients and clinicians often lack access to information they can feel confident is relevant, timely, and useful for the circumstances at hand. Moreover, the current system for disseminating knowledge is strained by the quantity of information now available, which means that new evidence often is not applied to care. After explaining the need for a new approach to generating clinical and biomedical knowledge, this chapter describes emerging capacities, methods, and approaches that hold promise for helping to meet this need.



It then examines what is necessary to create the data utility that will be essential to a continuously learning and improving health care system. Next, the critical issue of building a learning bridge from knowledge to practice is explored. This is followed by a discussion of the crucial role of people, patients, and consumers as active stakeholders in the learning enterprise. The chapter concludes with recommendations for achieving the vision of a health care system that generates and applies knowledge in real time.

NEED FOR A NEW APPROACH TO KNOWLEDGE GENERATION

The current approach to generating new medical knowledge falls short in delivering the evidence needed to support the delivery of quality care. The evidence base is inadequate, and methods for generating medical knowledge have notable limitations.

Inadequacy of the Evidence Base

Clinical and biomedical research emerges at a remarkable rate, with almost 2,100 scientific publications, 75 clinical trials, and 11 systematic reviews being produced every day (Bastian et al., 2010).1 Although clinicians need not review every study to provide high-quality care, the ever-increasing volume of evidence makes it difficult to maintain a working knowledge of new clinical information. Even so, the availability of such high-quality evidence is not keeping pace with the ever-increasing demand for clinical information that can help guide decisions on different diagnostics, interventions and therapies, and care delivery approaches (see Box 6-1 for an example of this information paradox). Rather, the gap between the evidence possible and the evidence produced continues to grow, and studies indicate that the number of guideline statements backed by evidence falls short of what should be expected. In some cases, 40 to 50 percent of the recommendations made in guidelines are based on expert opinion, case studies, or standards of care rather than on multiple clinical trials or meta-analyses (Chauhan et al., 2006; IOM, 2008, 2011b; Tricoci et al., 2009). A study of the strength of the current recommendations of the Infectious Diseases Society of America, for example, found that only 14 percent were based on more than one randomized controlled trial, and more than half were based on expert opinion alone (Lee and Vielemeyer, 2011).

1 The number of journal publications was determined from searches on PubMed for 2010 (National Library of Medicine: http://www.ncbi.nlm.nih.gov/pubmed/) using the methodology described in Chapter 2.

Another study, examining the joint cardiovascular clinical practice guidelines of the American College of Cardiology and the American Heart Association, found that the current guidelines were based largely on lower levels of evidence or expert opinion (Tricoci et al., 2009).

The inadequacy of the evidence base for clinical guidelines has consequences for the evidence base for care delivered. Estimates vary on the proportion of clinical decisions in the United States that are adequately informed by formal evidence gained from clinical research, with some studies suggesting a figure of just 10-20 percent (Darst et al., 2010; IOM, 1985). These results suggest substantial opportunities for improvement in ensuring that the knowledge generated by the clinical research enterprise meets the demands of evidence-based care.

BOX 6-1
The Information Paradox

The treatment of breast cancer is one example of the information paradox in clinical medicine. Relative to years past, a vast array of information about breast cancer is available. Five decades ago, breast cancer was detected from a physical exam, no biopsy was performed, and mastectomy was the recommended treatment for all detected breast cancers (Harrison, 1962). Today, multiple imaging technologies exist for the detection and diagnosis of the disease, including standard x-ray mammography, computed tomography (CT), ultrasound, positron emission tomography (PET), and magnetic resonance imaging (MRI) (IOM, 2001b, 2005). Similarly, traditional biopsies required surgical excision of the area of interest, whereas new methods, such as fine needle aspiration biopsy and core needle biopsy, allow for a less invasive evaluation and may be performed under imaging guidance (Bevers et al., 2009). Once diagnosed, the cancer can be further characterized by genetic characteristics (such as BRCA1, BRCA2, HER-2, and now multigene tests), in addition to its estrogen and progesterone receptor status. Treatments have developed at a similarly fast pace, with a number of surgical, radiological, chemotherapy, and endocrine therapies now available, along with targeted therapies such as monoclonal antibodies (Kasper and Harrison, 2005; National Comprehensive Cancer Network, 2012).

While progress in breast cancer diagnosis and treatment has been swift, the comparative efficacy and safety of these diagnostic technologies and treatments have not been evaluated; these innovations are administered without an adequate evidence base. Likewise, the efficacy of many treatments and the accuracy of many diagnostic technologies are unknown for a given patient with a given condition (IOM, 2008). The results include widespread variation in patient care, confusion among patients and providers about the best methods for treating a specific disease or condition, and waste due to the delivery of services that are ineffective or even harmful for the patient.

Even after identifying relevant information for a given condition, clinicians still must ensure that the information is of high quality—that the risk of contradiction by later studies is minimal, that the information is uncolored by bias or conflicts of interest, and that it applies to a particular patient’s clinical circumstances. Several recent publications have observed that the rate of medical reversals is significant, with one recent paper finding that 13 percent of articles about medical practice in a high-profile journal contradicted the evidence for existing practices (Ioannidis, 2005b; Prasad et al., 2011). Another concern is managing conflicts of interest, which can occur in the research, education, and practice domains. As noted in the 2009 Institute of Medicine (IOM) report Conflict of Interest in Medical Research, Education, and Practice, patients can benefit when clinicians and researchers collaborate with the life science industry to develop new products, yet there are concerns that financial ties could unduly influence professional judgments. These tensions must be balanced to ensure that conflicts of interest do not compromise the integrity of the scientific research process, the objectivity of health professionals’ training and education, or the public’s trust in health care. There are approaches to managing conflicts of interest, especially financial relationships, without stifling important collaborations and innovations (IOM, 2009b).

Concerns exist as well about whether the current evidence base applies to the circumstances of particular patients. A study of clinical practice guidelines for nine of the most common chronic conditions, for example, found that fewer than half included guidance for the treatment of older patients with multiple comorbid conditions (Boyd et al., 2005). For patients and their health care providers, this lack of knowledge limits the ability to choose the most effective treatment for a condition. Furthermore, health care payers may not have the evidence they need to make coverage decisions for the patients enrolled in their plans. One analysis of Medicare payment policies for cardiovascular devices, for example, found that participants in the trials that provided evidence for coverage decisions differed from the Medicare population: trial participants often were younger and healthier and had a different prevalence of comorbid conditions (Dhruva and Redberg, 2008).

Further, without greater capacity, the challenges to evidence production will only continue to grow. This is particularly true given the projected proliferation of new medical technologies; the increased complexity of managing chronic diseases; and the growing use of genomics, proteomics, and other biological factors to personalize treatments and diagnostics (Califf, 2004). As noted in Chapter 2, in one 3-year period, genome-wide scans were able to identify more than 100 genetic variants associated with nearly 40 diseases and traits; this growth in genetic understanding led to the availability in 2008 of more than 1,200 genetic tests for clinical conditions (Genetics and Public Policy Center, 2008; Manolio, 2010; Pearson and Manolio, 2008).

Even as clinical research strains to keep pace with the rapid evolution of medical interventions and care delivery methods, improving and increasing the supply of knowledge with which to answer health care questions is a core aim of a learning health care system. The current research knowledge base provides limited support for answering important types of clinical questions, including those related to comparative effectiveness and long-term patient outcomes (British Medical Journal, 2011; Gill et al., 1996; IOM, 1985; Lee et al., 2005a; Tunis et al., 2003). This lack of knowledge is demonstrated by the fact that many technologies are not adequately evaluated before they see widespread clinical use. For example, cardiac computed tomography angiography (CTA) has been adopted widely throughout the medical community despite limited data on its effectiveness compared with alternative interventions, the risks of its use, and its substantial cost (Redberg, 2007). New opportunities in technology and research design can mitigate these limitations and offer a dynamic view of evidence and outcomes; leveraging these opportunities can bridge the gap between research and practice and accelerate the use of research in routine care.

Limitations of Current Methods

At present, support for clinical research often focuses on the randomized controlled trial as the gold standard for testing the effectiveness of diagnostics and therapeutics. The randomized controlled trial has gained this status because of its ability to control for many confounding factors and to provide direct evidence on the efficacy of different treatments, interventions, and care delivery methods (Hennekens et al., 1987). Yet, while the randomized controlled trial has a highly successful track record in generating new clinical knowledge, it shares the limitations of most research methods available today: such trials are not practical or feasible in all situations, are expensive and time-consuming, address only the questions they were designed to answer, and cannot answer every type of research question.

A study of head-to-head randomized controlled trials conducted for comparative effectiveness research found that their costs ranged from $400,000 to $125 million, with costs for larger studies averaging $15-$20 million (Holve and Pittman, 2009, 2011). Randomized controlled trials also are slow to address the research questions they set out to answer: half of all trials are delayed, 80 to 90 percent of these because of a shortage of willing trial participants (Grove, 2011). As currently designed and operated, moreover, randomized controlled trials do not address all clinically relevant populations, which may limit a trial’s generalizability to regular clinical practice and many patient populations (Frangakis, 2009; Greenhouse et al., 2008; Stewart et al., 2007; Weisberg et al., 2009).

At a time when many patients have multiple chronic conditions (Alecxih et al., 2010; Tinetti and Studenski, 2011), for example, patients with comorbidities are routinely excluded from most randomized controlled trials (Dhruva and Redberg, 2008; Van Spall et al., 2007). In addition, many current trials collect data only for a limited period of time, which means they may not capture long-term effects or low-probability side effects and may not reflect the practice conditions of many health care providers.

Other research methods have limitations as well. The strength of observational studies, for instance, is that they capture health practices in real-world situations, which aids in generalizing their results to more medical practices. This research design can provide data throughout a product’s life cycle and allows for natural experiments created by variations in care. However, observational studies are challenged to minimize bias and to ensure that their results are due to the intervention under consideration. For this reason, as demonstrated by the use of hormone replacement therapy (see Box 6-2) and vitamin E for the treatment of coronary disease, the results of observational studies do not always accord with those of randomized controlled trials (Lee et al., 2005b; Rossouw et al., 2002), although some studies have shown concordance between the results derived from the two methods (Concato et al., 2000).

BOX 6-2
Considerations for Producing Evidence: The Story of Hormone Replacement Therapy Trials

Research on the impact of hormone replacement therapy on coronary heart disease provides a cautionary note for less traditional research methods (Manson, 2010). Initial observational studies of women taking hormone replacement therapy suggested a reduction in the risk of heart disease in the range of 30 to 50 percent (Grady et al., 1992; Grodstein et al., 2000). However, later randomized trials, especially the Women’s Health Initiative, found no effect or even an elevated risk (Ioannidis, 2005a; Manson et al., 2003). Several factors may have led to these divergent results, including traditional confounding elements, the limited ability of these studies to assess short-term or acute outcomes, and the predominance of follow-up data among long-term hormone therapy users. This example demonstrates that observational studies need to be careful to capture both short- and long-term outcomes (Grodstein et al., 2003). In addition, these types of studies need to consider differential effects on clinically relevant subgroups; in this case, hormone therapy may have different impacts depending on whether it is started before or after the onset of menopause (Grodstein et al., 2006; IOM, 2008). The experience of hormone replacement therapy research highlights several areas for improvement in observational research design.
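The divergence described in Box 6-2 is easy to reproduce in miniature. In the simulation below (all effect sizes invented for illustration), a therapy with no true effect looks protective in a naive observational comparison because healthier patients are more likely to receive it; randomization severs that link and recovers the null.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: baseline health drives both treatment choice and
# outcome, loosely mimicking the early hormone replacement therapy cohorts.
health = rng.normal(size=n)

# The therapy has NO true effect: events depend on baseline health alone
# (sicker patients have more events).
events = rng.random(n) < 1 / (1 + np.exp(1.5 + health))

# Observational setting: healthier patients are more likely to be treated.
treated_obs = rng.random(n) < 1 / (1 + np.exp(-2 * health))
obs_diff = events[treated_obs].mean() - events[~treated_obs].mean()

# Randomized setting: a coin flip is independent of baseline health.
treated_rct = rng.random(n) < 0.5
rct_diff = events[treated_rct].mean() - events[~treated_rct].mean()

print(f"observational risk difference: {obs_diff:+.3f}")  # spurious 'benefit'
print(f"randomized risk difference:    {rct_diff:+.3f}")  # correctly near zero
```

Adjustment for measured confounders narrows this gap but cannot guarantee closing it, which is why the chapter treats observational and randomized methods as answering different questions rather than competing for the same one.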

The challenge, therefore, is not determining which research method is best for a particular condition but rather which provides the information most appropriate to a particular clinical need. Table 6-1 summarizes different research designs and the questions most appropriately addressed by each. In examining biomedical treatments and diagnostic technologies, different types of studies will be more appropriate at different stages of a product’s life cycle. Early studies will need to focus on safety and efficacy, which will require randomized controlled trials, while later studies will need to focus on comparative effectiveness and surveillance for unexpected effects, requiring a mix of observational studies and randomized controlled trials. (See Figure 6-1 for a depiction of the change in appropriate research methods over time.) As this report was being written, the methodology committee of the Patient-Centered Outcomes Research Institute (PCORI) had developed a translation table to aid in determining the research methods most appropriate for addressing certain comparative clinical effectiveness research questions (PCORI, 2012). Each study must be tailored to provide useful, practical, and reliable results for the condition at hand.

TABLE 6-1 Examples of Research Methods and Questions Addressed by Each

Research Design | Questions Addressed
Traditional randomized controlled trial | Efficacy, therapeutic efficacy
Active comparator randomized controlled trials, matched-pair studies | Comparative effectiveness
Surveillance studies | Safety, side effects, indications
Cohort studies, retrospective audit studies, prospective case series | Effectiveness (generalizability to regular clinical practice and larger patient populations)

SOURCE: Data derived from Walach et al., 2006.

Conclusion 6-1: Despite the accelerating pace of scientific discovery, the current clinical research enterprise does not sufficiently address pressing clinical questions. The result is decisions by both patients and clinicians that are inadequately informed by evidence.

Related findings:

• Clinical and biomedical research studies are being produced at an increasing rate. As noted in the findings supporting Conclusion 2-1, on average approximately 75 clinical trials and a dozen systematic reviews are published daily (see Chapter 2).
• The evidence base for clinical guidelines and recommendations needs to be strengthened. In some cases, 40 to 50 percent of the recommendations made in guidelines are based on expert opinion, case studies, or standards of care rather than on multiple clinical trials or meta-analyses.
• Even at the current pace of production, the knowledge base provides limited support for answering many of the most important types of clinical questions. A study of clinical practice guidelines for nine of the most common chronic conditions found that fewer than half included guidance for the treatment of patients with multiple comorbid conditions.
• New methods are needed to address current limitations in clinical research. The cost of current methods for clinical research averages $15-$20 million for larger studies—and much more for some—yet the studies do not reflect the practice conditions of many health care providers.

[Figure 6-1: a plot of data produced per year across the timeline of medical product research, with curves for randomized controlled trials (efficacy), systematic reviews, randomized controlled trials (effectiveness), and surveillance and observational studies, and market entry marked on the timeline.]

FIGURE 6-1 Different types of research are needed at different stages of a medical product’s life cycle. Early trials will need to focus on therapeutic efficacy, while later research will need to focus on comparative effectiveness and surveillance.
SOURCE: Adapted from IOM, 2010a.

EMERGING CAPACITIES, METHODS, AND APPROACHES

As discussed above, there is a clear need for new approaches to knowledge generation, management, and application to guide clinical care, quality improvement, and delivery system organization. The current clinical research enterprise requires substantial resources and takes significant time to address individual research questions. Moreover, the results provided by these studies do not always generate the information needed by patients and their clinicians and may not always be generalizable to a larger population. New research methods are needed that address these serious limitations. Developments in information technology and research infrastructure have the potential to expand the ability of the research system to meet this need. For example, the anticipated growth in the adoption of digital records presents an unprecedented opportunity to expand the supply of data available for learning, generating insights from the regular delivery of care (see the discussion of the data utility in the next section for further detail on these opportunities). These new developments can increase the output derived from the substantial clinical research investments of agencies and foundations, including the Agency for Healthcare Research and Quality (AHRQ), the National Institutes of Health (NIH), and PCORI.

New tools are extending research methods and overcoming many of the limitations highlighted in the previous section (IOM, 2010a). The scientific community has recognized the need for change. High-profile efforts—including NIH’s Clinical and Translational Science Awards and the U.S. Food and Drug Administration’s Clinical Trials Transformation Initiative—have been undertaken to improve the quality, efficiency, and applicability of clinical trials, and new translational research paradigms have been developed (Lauer and Skarlatos, 2010; Luce et al., 2009; Woolf, 2008; Zerhouni, 2005). Building on these efforts and the work of academic research leaders, new forms of experimental designs have been developed, including pragmatic clinical trials, delayed design trials, and cluster randomized controlled trials2 (Campbell et al., 2007; Eldridge et al., 2008; Tunis et al., 2003, 2010). Other new methods have been devised to develop knowledge from data produced during the regular course of care, and initial results derived with these new methods have shown promise (see Box 6-3 for a description of one such method). Advanced statistical methods, including Bayesian analysis, allow for adaptive research designs that can learn as a study advances, making studies more flexible (Chow and Chang, 2008).

2 In pragmatic clinical trials, the questions faced by decision makers dictate the study design (Tunis et al., 2003). In delayed design trials, participants are randomized to either receive the intervention or have it withheld for a period of time, with both groups receiving the intervention by the end of the study (Tunis et al., 2010). In cluster randomized controlled trials, groups of subjects, rather than individual subjects, are randomized (Campbell et al., 2007).
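To make "designs that can learn as a study advances" concrete, the sketch below implements one simple Bayesian adaptive scheme: Thompson sampling over beta-binomial posteriors, in which allocation drifts toward the better-performing arm as evidence accumulates. It is an illustration only; the two arms, their true response rates, and the sample size are invented, and real adaptive designs involve far more machinery (prespecified interim analyses, control of error rates).

```python
import random

random.seed(1)

# Hypothetical true response rates, unknown to the 'trial'.
TRUE_RATE = {"A": 0.30, "B": 0.45}

# Beta(1, 1) priors per arm, stored as [successes + 1, failures + 1].
posterior = {"A": [1, 1], "B": [1, 1]}

def draw_rate(arm):
    """Sample one plausible response rate from the arm's Beta posterior."""
    a, b = posterior[arm]
    return random.betavariate(a, b)

for _ in range(400):
    # Thompson sampling: assign the next patient to whichever arm wins a
    # single posterior draw, so better arms are tried more often over time.
    arm = max(posterior, key=draw_rate)
    response = random.random() < TRUE_RATE[arm]
    posterior[arm][0 if response else 1] += 1

for arm, (a, b) in posterior.items():
    print(f"arm {arm}: {a + b - 2} patients, posterior mean {a / (a + b):.2f}")
```

After a few hundred patients, most enrollment has shifted to the arm with the higher response rate, which is precisely the sense in which such a design "learns" during the study.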

BOX 6-3
New Methods for Randomized Clinical Trials: The Point-of-Care Clinical Trial

One new method for conducting experimental research is the point-of-care clinical trial. Such trials currently are being conducted at the Boston Veterans Affairs Health Care System, with similar trials being proposed or conducted at other locations (Vickers and Scardino, 2009). The method entails using an electronic health record system to conduct randomized controlled trials by automatically flagging patients who have a choice between competing treatments. If patients do not express a preference, they are asked whether they would be willing to participate in a trial and, if so, are randomly assigned to a treatment protocol. The electronic health record system records outcome data and automatically calculates the effectiveness of the treatment protocols. Disadvantages of such trials are that they do not allow for a control group and can be used only for treatments that are already approved for standard care. This type of trial has begun to be applied to competing methods of insulin administration (a sliding scale versus a weight-based regimen) for blood sugar control (Fiore et al., 2011).

Taken together, these new methods are designed to reduce the expense and effort of conducting research, improve the applicability of results to clinical decisions, improve the ability to identify smaller effects, and be applicable when traditional methods cannot be used.

In addition to new research methods, advances in statistical analysis, simulation, and modeling have supplemented traditional methods for conducting trials. Given that even the most tightly controlled trials show a distribution in patient responses to a given treatment or intervention, new statistical techniques can help segment results for different populations. Further, new Bayesian techniques for data analysis can separate out the effects of different clinical interventions on overall population health (Berry et al., 2006). With the growth in computational power, new models have been developed that can replicate physiological pathways and disease states (Eddy and Schlessinger, 2003; Stern et al., 2008). These models can then be used to simulate clinical trials and to individualize clinical guidelines to a patient’s particular situation and biology; this approach thus holds promise for improving health status while reducing costs (Eddy et al., 2011). As computational power grows, the potential applications of these simulation and modeling tools will continue to increase.
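The decision flow described in Box 6-3 can be sketched in a few lines. This is a schematic of the general point-of-care pattern rather than the Veterans Affairs implementation, and the patient fields and eligibility check are hypothetical placeholders for what a real electronic health record integration would supply.

```python
import random

def point_of_care_enrollment(patient, arms=("sliding_scale", "weight_based")):
    """Schematic point-of-care trial flow; all patient fields are invented."""
    # 1. The records system flags patients with a genuine choice between
    #    competing, already-approved treatments.
    if not patient.get("eligible_for_both", False):
        return None  # not a trial candidate; usual care proceeds
    # 2. A stated preference, or declining to participate, exits the trial;
    #    the patient simply receives the preferred or usual treatment.
    if patient.get("preference") or not patient.get("consents", False):
        return patient.get("preference")
    # 3. Otherwise randomize at the point of care; the record system then
    #    captures outcomes for an ongoing effectiveness comparison.
    return random.choice(arms)

print(point_of_care_enrollment(
    {"eligible_for_both": True, "preference": None, "consents": True}))
```

The arm names echo the insulin administration comparison cited in the box (Fiore et al., 2011); everything else is generic.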

Despite the opportunities afforded by new research methods, several challenges must be addressed as these methods are improved. One such challenge for the clinical research enterprise is keeping pace with the introduction of new procedures, treatments, diagnostic technologies, and care delivery models. As currently structured, clinical trials often are not comparable, so a new trial must be conducted to compare the effectiveness of new treatments, diagnostics, or care delivery models with that of existing ones. One solution to this problem is to create standard comparators for a given disease or clinical condition, which would allow new innovations to be compared easily using existing data on current treatments or diagnostic technologies. Additionally, as the research enterprise expands, additional emphasis may be required in fields that are underserved by the current clinical research paradigm, such as pediatrics (Cohen et al., 2007; IOM, 2009c; Simpson et al., 2010). One exception to this observation is pediatric cancer care: virtually all of the treatment provided in pediatric oncology is recorded and applied to registries or active clinical trials, which then inform future care for children undergoing treatment (IOM, 2010b; Pawlson, 2010).

CREATION OF THE DATA UTILITY

In considering how to take advantage of opportunities to create a more nimble, timely, and targeted clinical research enterprise, three basic questions should be considered: (1) What does the system need to know? (2) How will the information be captured and used? and (3) How will the resulting knowledge be organized and shared? These questions have important ramifications for the design and operation of the overall data system.

With respect to the first question, stakeholders in the health care system are interested in comparing the effectiveness of different treatments and interventions, monitoring the safety of medical products through surveillance, undertaking quality improvement activities, and understanding the quality and performance of different providers and health care organizations. Achieving these goals will require capturing data on the care that is delivered to patients, such as the processes and structures of care delivery, and on the results of that care, such as longitudinal health outcomes and other outcomes important to patients. With respect to how these data will be used to generate new health care knowledge, uses will include comparing the effects of different treatments, interventions, or care protocols; establishing guidelines and best practices; and searching for unexpected effects of treatments or interventions. Finally, the new knowledge generated will have little impact if it is not shared broadly with everyone involved in delivering care for a given patient or, in the case of research, with everyone involved in the research. Each of these three questions is explored in further detail below.
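As a concrete, minimal illustration of the first two questions (what the system needs to capture, and how the data can be used), the sketch below models a care episode with longitudinal outcomes and runs a toy comparative query over a registry. Every field name and value is invented for illustration; nothing here is a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

@dataclass
class CareEpisode:
    """One delivered intervention plus the longitudinal outcomes that follow it."""
    patient_id: str
    condition: str
    intervention: str          # process of care: what was actually delivered
    start: date
    outcomes: dict = field(default_factory=dict)  # e.g., follow-up scores by name

def compare(registry, condition, outcome):
    """Toy version of 'compare the effects of different treatments'."""
    by_arm = {}
    for ep in registry:
        if ep.condition == condition and outcome in ep.outcomes:
            by_arm.setdefault(ep.intervention, []).append(ep.outcomes[outcome])
    return {arm: round(mean(vals), 2) for arm, vals in by_arm.items()}

# Hypothetical records, echoing the chapter-opening example of hip implants.
registry = [
    CareEpisode("p1", "hip_oa", "metal_on_metal", date(2008, 5, 1), {"pain_12mo": 7.0}),
    CareEpisode("p2", "hip_oa", "ceramic", date(2008, 6, 1), {"pain_12mo": 2.0}),
    CareEpisode("p3", "hip_oa", "ceramic", date(2009, 1, 15), {"pain_12mo": 3.0}),
]
print(compare(registry, "hip_oa", "pain_12mo"))  # {'metal_on_metal': 7.0, 'ceramic': 2.5}
```

Such a structure also hints at the third question: because the record is shared and queryable, the same data that document care can feed surveillance and guideline development.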

[…] taken to disseminate clinical knowledge broadly and ensure its widespread application.

Recommendation 3: Clinical Decision Support

Accelerate integration of the best clinical knowledge into care decisions. Decision support tools and knowledge management systems should be routine features of health care delivery to ensure that decisions made by clinicians and patients are informed by current best evidence.

Strategies for progress toward this goal:

• Clinicians and health care organizations should adopt tools that deliver reliable, current clinical knowledge to the point of care, and organizations should adopt incentives that encourage the use of these tools.
• Research organizations, advocacy organizations, professional specialty societies, and care delivery organizations should facilitate the development, accessibility, and use of evidence-based and harmonized clinical practice guidelines.
• Public and private payers should promote the adoption of decision support tools, knowledge management systems, and evidence-based clinical practice guidelines by structuring payment and contracting policies to reward effective, evidence-based care that improves patient health.
• Health professional education programs should teach new methods for accessing, managing, and applying evidence; engaging in lifelong learning; understanding human behavior and social science; and delivering safe care in an interdisciplinary environment.
• Research funding agencies and organizations should promote research into the barriers and systematic challenges to the dissemination and use of evidence at the point of care, and should support research to develop strategies and methods that can improve the usefulness and accessibility of patient outcome data and scientific evidence for clinicians and patients.

Collectively, implementation of the above recommendations would increase the supply of clinical data, reduce legal and regulatory barriers to the creation of new knowledge, and improve the integration of new knowledge into regular clinical practice. Addressing the issues targeted by these recommendations can increase the knowledge available to answer relevant clinical questions while promoting the use of new clinical information in regular patient care.
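As a minimal illustration of the kind of point-of-care tool this recommendation envisions, the sketch below checks a proposed order against two invented rules. The drug names, threshold, and record fields are hypothetical placeholders, not content from any actual guideline or product.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    """Hypothetical slice of an EHR record available at the point of care."""
    age: int
    egfr: float                 # kidney function, mL/min/1.73 m^2
    active_meds: list
    proposed_order: str

def decision_support(enc: Encounter) -> list:
    """Evaluate illustrative rules against the record and return alerts."""
    alerts = []
    # Invented renal-dosing rule for a hypothetical drug.
    if enc.proposed_order == "drug_x" and enc.egfr < 30:
        alerts.append("drug_x: reduce dose or avoid when eGFR < 30")
    # Invented drug-drug interaction rule.
    if enc.proposed_order == "drug_x" and "drug_y" in enc.active_meds:
        alerts.append("drug_x + drug_y: interaction, consider an alternative")
    return alerts

print(decision_support(Encounter(81, 24.0, ["drug_y"], "drug_x")))
```

Real systems add the hard parts the strategies above point to: keeping the rule base current with the evidence, and delivering alerts without overwhelming clinicians.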

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2010. Registries for evaluating patient outcomes: A user’s guide. 2nd ed. Rockville, MD: AHRQ.
Akhter, N., S. Milford-Beland, M. T. Roe, R. N. Piana, J. Kao, and A. Shroff. 2009. Gender differences among patients with acute coronary syndromes undergoing percutaneous coronary intervention in the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR). American Heart Journal 157(1):141-148.
Alecxih, L., S. Shen, I. Chan, D. Taylor, and J. Drabek. 2010. Individuals living in the community with chronic conditions and functional limitations: A closer look. http://aspe.hhs.gov/daltcp/reports/2010/closerlook.pdf (accessed June 23, 2011).
Alston, C., and L. Paget. 2012. Communicating evidence in health care: Engaging patients for improved health care decisions. http://iom.edu/~/media/Files/Activity%20Files/Quality/VSRT/IC%20Meeting%20Docs/ECIC%2006-07-12/Lyn%20Paget%20and%20Chuck%20Alston.pdf (accessed August 31, 2012).
Association of Academic Health Centers. 2008. HIPAA creating barriers to research and discovery: HIPAA problems widespread and unresolved since 2003. http://www.aahcdc.org/policy/reddot/AAHC_HIPAA_Creating_Barriers.pdf (accessed June 9, 2011).
Avorn, J., and M. Fischer. 2010. “Bench to behavior”: Translating comparative effectiveness research into improved clinical practice. Health Affairs 29(10):1891-1900.
Bastian, H., P. Glasziou, and I. Chalmers. 2010. Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS Medicine 7(9):e1000326.
Bate, P., G. Robert, and H. Bevan. 2004. The next phase of healthcare improvement: What can we learn from social movements? Quality & Safety in Health Care 13(1):62-66.
Beckjord, E. B., R. Rechis, S. Nutt, L. Shulman, and B. W. Hesse. 2011. What do people affected by cancer think about electronic health information exchange? Results from the 2010 Livestrong Electronic Health Information Exchange Survey and the 2008 Health Information National Trends Survey. Journal of Oncology Practice 7(4):237-241.
Behrman, R. E., J. S. Benner, J. S. Brown, M. McClellan, J. Woodcock, and R. Platt. 2011. Developing the Sentinel System—a national resource for evidence development. New England Journal of Medicine 364(6):498-499.
Berry, D. A., L. Inoue, Y. Shen, J. Venier, D. Cohen, M. Bondy, R. Theriault, and M. F. Munsell. 2006. Modeling the impact of treatment and screening on U.S. breast cancer mortality: A Bayesian approach. Journal of the National Cancer Institute Monographs (36):30-36.
Berwick, D. M. 2003. Disseminating innovations in health care. Journal of the American Medical Association 289(15):1969-1975.
Bevers, T. B., B. O. Anderson, E. Bonaccio, S. Buys, M. B. Daly, P. J. Dempsey, W. B. Farrar, I. Fleming, J. E. Garber, R. E. Harris, A. S. Heerdt, M. Helvie, J. G. Huff, N. Khakpour, S. A. Khan, H. Krontiras, G. Lyman, E. Rafferty, S. Shaw, M. L. Smith, T. N. Tsangaris, C. Williams, T. Yankeelov, and National Comprehensive Cancer Network. 2009. NCCN clinical practice guidelines in oncology: Breast cancer screening and diagnosis. Journal of the National Comprehensive Cancer Network 7(10):1060-1096.
Blue Cross Blue Shield of Massachusetts Foundation. 2007. How doctors think—interview with Dr. Jerome Groopman. http://bluecrossmafoundation.org/~/media/Files/Newsroom/Press%20Releases/090626PodcastPR.pdf (accessed August 31, 2012).
Blumenthal, D. 2006. Data withholding in genetics and the other life sciences: Prevalences and predictors. Academic Medicine 81(2):137-145.
Bocchino, C. A. 2011. Public-private partnerships: Health plans. In Learning what works: Infrastructure required for comparative effectiveness research. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 293-300.

Boyd, C. M., J. Darer, C. Boult, L. P. Fried, L. Boult, and A. W. Wu. 2005. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: Implications for pay for performance. Journal of the American Medical Association 294(6):716-724.
British Medical Journal. 2011. Clinical evidence. http://clinicalevidence.bmj.com/ceweb/about/knowledge.jsp (accessed October 7, 2011).
Brown, J. S., J. H. Holmes, K. Shah, K. Hall, R. Lazarus, and R. Platt. 2010. Distributed health data networks: A practical and preferred approach to multi-institutional evaluations of comparative effectiveness, safety, and quality of care. Medical Care 48(Suppl. 6):S45-S51.
Brownstein, J. S., S. N. Murphy, A. B. Goldfine, R. W. Grant, M. Sordo, V. Gainer, J. A. Colecchi, A. Dubey, D. M. Nathan, J. P. Glaser, and I. S. Kohane. 2010. Rapid identification of myocardial infarction risk associated with diabetes medications using electronic medical records. Diabetes Care 33(3):526-531.
Cabana, M. D., C. S. Rand, N. R. Powe, A. W. Wu, M. H. Wilson, P. A. Abboud, and H. R. Rubin. 1999. Why don’t physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association 282(15):1458-1465.
Cain, M., and R. Mittman. 2002. Diffusion of innovation in health care. Oakland: California HealthCare Foundation.
Califf, R. M. 2004. Defining the balance of risk and benefit in the era of genomics and proteomics. Health Affairs (Millwood) 23(1):77-87.
Campbell, M. J., A. Donner, and N. Klar. 2007. Developments in cluster randomized trials and statistics in medicine. Statistics in Medicine 26(1):2-19.
Carnegie Foundation for the Advancement of Teaching. 2010. Getting ideas into action: Building networked improvement communities in education. http://www.carnegiefoundation.org/spotlight/webinar-bryk-gomez-building-networked-improvement-communities-in-education (accessed June 17, 2011).
Cebul, R. D., T. E. Love, A. K. Jain, and C. J. Hebert. 2011. Electronic health records and quality of diabetes care. New England Journal of Medicine 365(9):825-833.
Cella, D., S. Yount, N. Rothrock, R. Gershon, K. Cook, B. Reeve, D. Ader, J. F. Fries, B. Bruce, M. Rose, and PROMIS Cooperative Group. 2007. The Patient-Reported Outcomes Measurement Information System (PROMIS): Progress of an NIH roadmap cooperative group during its first two years. Medical Care 45(5, Suppl. 1):S3-S11.
Cella, D., W. Riley, A. Stone, N. Rothrock, B. Reeve, S. Yount, D. Amtmann, R. Bode, D. Buysse, S. Choi, K. Cook, R. Devellis, D. DeWalt, J. F. Fries, R. Gershon, E. A. Hahn, J. S. Lai, P. Pilkonis, D. Revicki, M. Rose, K. Weinfurt, R. Hays, and PROMIS Cooperative Group. 2010. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005-2008. Journal of Clinical Epidemiology 63(11):1179-1194.
Chauhan, S. P., V. Berghella, M. Sanderson, E. F. Magann, and J. C. Morrison. 2006. American College of Obstetricians and Gynecologists practice bulletins: An overview. American Journal of Obstetrics and Gynecology 194(6):1564-1572.
Choudhry, N. K., R. Levin, and J. Avorn. 2008. The economic consequences of non-evidence-based clopidogrel use. American Heart Journal 155(5):904-909.
Chow, S. C., and M. Chang. 2008. Adaptive design methods in clinical trials—a review. Orphanet Journal of Rare Diseases 3:11.
Cohen, E., E. Uleryk, M. Jasuja, and P. C. Parkin. 2007. An absence of pediatric randomized controlled trials in general medical journals, 1985-2004. Journal of Clinical Epidemiology 60(2):118-123.

Committee on Developments in the Science of Learning, Committee on Learning Research and Educational Practice, and National Research Council. 2000. How people learn: Brain, mind, experience, and school (expanded edition). Washington, DC: National Academy Press.
Concato, J., N. Shah, and R. I. Horwitz. 2000. Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine 342(25):1887-1892.
Cutler, D. M., M. B. McClellan, and National Bureau of Economic Research. 1996. The determinants of technological change in heart attack treatment. NBER working paper 5751. http://www.nber.org/papers/w5751.pdf?new_window=1 (accessed August 31, 2012).
Darst, J. R., J. W. Newburger, S. Resch, R. H. Rathod, and J. E. Lock. 2010. Deciding without data. Congenital Heart Disease 5(4):339-342.
Davidoff, F. 2009. Heterogeneity is not always noise: Lessons from improvement. Journal of the American Medical Association 302(23):2580-2586.
Davis, D. A., and A. Taylor-Vaisey. 1997. Translating guidelines into practice—a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. Canadian Medical Association Journal 157(4):408-416.
Davis, R. L., D. Eastman, H. McPhillips, M. A. Raebel, S. E. Andrade, D. Smith, M. U. Yood, S. Dublin, and R. Platt. 2011. Risks of congenital malformations and perinatal events among infants exposed to calcium channel and beta-blockers during pregnancy. Pharmacoepidemiology and Drug Safety 20(2):138-145.
Decker, S. L., E. W. Jamoom, and J. E. Sisk. 2012. Physicians in nonprimary care and small practices and those age 55 and older lag in adopting electronic health record systems. Health Affairs (Millwood) 31(5):1108-1114.
Della Penna, R., H. Martel, E. B. Neuwirth, J. Rice, M. I. Filipski, J. Green, and J. Bellows. 2009. Rapid spread of complex change: A case study in inpatient palliative care. BMC Health Services Research 9:245.
DesRoches, C. M., C. Worzala, M. S. Joshi, P. D. Kralovec, and A. K. Jha. 2012. Small, nonteaching, and rural hospitals continue to be slow in adopting electronic health record systems. Health Affairs (Millwood) 31(5):1092-1099.
Detmer, D. E. 2003. Building the national health information infrastructure for personal health, health care services, public health, and research. BMC Medical Informatics and Decision Making 3:1.
Dhruva, S. S., and R. F. Redberg. 2008. Variations between clinical trial participants and Medicare beneficiaries in evidence used for Medicare national coverage decisions. Archives of Internal Medicine 168(2):136-140.
Dolan, P. L. 2010. 86% of physicians use Internet to access health information. American Medical News, January 11.
Dopson, S., L. FitzGerald, E. Ferlie, J. Gabbay, and L. Locock. 2002. No magic targets! Changing clinical practice to become more evidence based. Health Care Management Review 27(3):35-47.
Eddy, D. M., and L. Schlessinger. 2003. Archimedes: A trial-validated model of diabetes. Diabetes Care 26(11):3093-3101.
Eddy, D. M., J. Adler, B. Patterson, D. Lucas, K. A. Smith, and M. Morris. 2011. Individualized guidelines: The potential for increasing quality and reducing costs. Annals of Internal Medicine 154(9):627-634.
Eldridge, S., D. Ashby, C. Bennett, M. Wakelin, and G. Feder. 2008. Internal and external validity of cluster randomised trials: Systematic review of recent trials. British Medical Journal 336(7649):876-880.

Ferlie, E. B., and S. M. Shortell. 2001. Improving the quality of health care in the United Kingdom and the United States: A framework for change. Milbank Quarterly 79(2):281-315.
Fiore, L. D., M. Brophy, R. E. Ferguson, L. D’Avolio, J. A. Hermos, R. A. Lew, G. Doros, C. H. Conrad, J. A. O’Neil, T. P. Sabin, J. Kaufman, S. L. Swartz, E. Lawler, M. H. Liang, J. M. Gaziano, and P. W. Lavori. 2011. A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clinical Trials 8(2):183-195.
Flodgren, G., E. Parmelli, G. Doumit, M. Gattellari, M. A. O’Brien, J. Grimshaw, and M. P. Eccles. 2011. Local opinion leaders: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (8):CD000125.
Frangakis, C. 2009. The calibration of treatment effects from clinical trials to target populations. Clinical Trials 6(2):136-140.
Fridsma, D. 2011. Join query health in developing national standards for population queries. In HealthITBuzz, September 23.
Genetics and Public Policy Center. 2008. FDA regulation of genetic tests. http://www.dnapolicy.org/policy.issue.php?action=detail&issuebrief_id=11&print=1 (accessed January 4, 2010).
Gill, P., A. C. Dowell, R. D. Neal, N. Smith, P. Heywood, and A. E. Wilson. 1996. Evidence based general practice: A retrospective study of interventions in one training practice. British Medical Journal 312(7034):819-821.
Go, A. S., D. J. Magid, B. Wells, S. H. Sung, A. E. Cassidy-Bushrow, R. T. Greenlee, R. D. Langer, T. A. Lieu, K. L. Margolis, F. A. Masoudi, C. J. McNeal, G. H. Murata, K. M. Newton, R. Novotny, K. Reynolds, D. W. Roblin, D. H. Smith, S. Vupputuri, R. E. White, J. Olson, J. S. Rumsfeld, and J. H. Gurwitz. 2008. The Cardiovascular Research Network: A new paradigm for cardiovascular quality and outcomes research. Circulation: Cardiovascular Quality and Outcomes 1(2):138-147.
Goss, E., M. P. Link, S. S. Bruinooge, T. S. Lawrence, J. E. Tepper, C. D. Runowicz, and R. L. Schilsky. 2009. The impact of the privacy rule on cancer research: Variations in attitudes and application of regulatory standards. Journal of Clinical Oncology 27(24):4014-4020.
Grady, D., S. M. Rubin, D. B. Petitti, C. S. Fox, D. Black, B. Ettinger, V. L. Ernster, and S. R. Cummings. 1992. Hormone therapy to prevent disease and prolong life in postmenopausal women. Annals of Internal Medicine 117(12):1016-1037.
Green, P. L., and P. E. Plsek. 2002. Coaching and leadership for the diffusion of innovation in health care: A different type of multi-organization improvement collaborative. Joint Commission Journal on Quality Improvement 28(2):55-71.
Greene, S. M., A. M. Geiger, E. L. Harris, A. Altschuler, L. Nekhlyudov, M. B. Barton, S. J. Rolnick, J. G. Elmore, and S. Fletcher. 2006. Impact of IRB requirements on a multicenter survey of prophylactic mastectomy outcomes. Annals of Epidemiology 16(4):275-278.
Greenhalgh, T., G. Robert, F. Macfarlane, P. Bate, and O. Kyriakidou. 2004. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly 82(4):581-629.
Greenhouse, J. B., E. E. Kaizar, K. Kelleher, H. Seltman, and W. Gardner. 2008. Generalizing from clinical trial data: A case study. The risk of suicidality among pediatric antidepressant users. Statistics in Medicine 27(11):1801-1813.
Grodstein, F., J. E. Manson, G. A. Colditz, W. C. Willett, F. E. Speizer, and M. J. Stampfer. 2000. A prospective, observational study of postmenopausal hormone therapy and primary prevention of cardiovascular disease. Annals of Internal Medicine 133(12):933-941.
Grodstein, F., T. B. Clarkson, and J. E. Manson. 2003. Understanding the divergent data on postmenopausal hormone therapy. New England Journal of Medicine 348(7):645-650.

Grodstein, F., J. E. Manson, and M. J. Stampfer. 2006. Hormone therapy and coronary heart disease: The role of time since menopause and age at hormone initiation. Journal of Women’s Health 15(1):35-44.
Grove, A. 2011. Rethinking clinical trials. Science 333(6050):1679.
Grover, F. L., A. L. Shroyer, K. Hammermeister, F. H. Edwards, T. B. Ferguson, S. W. Dziuban, J. C. Cleveland, R. E. Clark, and G. McDonald. 2001. A decade’s experience with quality improvement in cardiac surgery using the Veterans Affairs and Society of Thoracic Surgeons national databases. Annals of Surgery 234(4):464-472.
Harrison, T. R. 1962. Principles of internal medicine. 4th ed. New York: Blakiston Division, McGraw-Hill.
Hennekens, C. H., J. E. Buring, and S. L. Mayrent. 1987. Epidemiology in medicine. 1st ed. Boston, MA: Little, Brown.
Hlatky, M. A., K. L. Lee, F. E. Harrell, R. M. Califf, D. B. Pryor, D. B. Mark, and R. A. Rosati. 1984. Tying clinical research to patient-care by use of an observational database. Statistics in Medicine 3(4):375-384.
Holmes, B., and S. Karp. 2005. National consumer health privacy survey 2005. http://www.chcf.org/~/media/MEDIA%20LIBRARY%20Files/PDF/C/PDF%20ConsumersHealthInfoTechnologyNationalSurvey.pdf (accessed August 21, 2012).
Holve, E., and P. Pittman. 2009. A first look at the volume and cost of comparative effectiveness research in the United States. Washington, DC: AcademyHealth.
Holve, E., and P. Pittman. 2011. The cost and volume of comparative effectiveness research. In Learning what works: Infrastructure required for comparative effectiveness research: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 89-96.
Hornbrook, M. C., G. Hart, J. L. Ellis, D. J. Bachman, G. Ansell, S. M. Greene, E. H. Wagner, R. Pardee, M. M. Schmidt, A. Geiger, A. L. Butani, T. Field, H. Fouayzi, I. Miroshnik, L. Liu, R. Diseker, K. Wells, R. Krajenta, L. Lamerato, and C. N. Dudas. 2005. Building a virtual cancer research organization. JNCI Monographs 2005(35):12-25.
Hsiao, C.-J., E. Hing, T. C. Socey, and B. Cai. 2011. Electronic health record systems and intent to apply for meaningful use incentives among office-based physician practices: United States, 2001-2011. Hyattsville, MD: National Center for Health Statistics.
Ioannidis, J. P. A. 2005a. Contradicted and initially stronger effects in highly cited clinical research. Journal of the American Medical Association 294(2):218-228.
Ioannidis, J. P. A. 2005b. Why most published research findings are false. PLoS Medicine 2(8):696-701.
IOM (Institute of Medicine). 1985. Assessing medical technologies. Washington, DC: National Academy Press.
IOM. 2001a. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
IOM. 2001b. Mammography and beyond: Developing technologies for the early detection of breast cancer. Washington, DC: National Academy Press.
IOM. 2005. Saving women’s lives: Strategies for improving breast cancer detection and diagnosis: A Breast Cancer Research Foundation and Institute of Medicine symposium. Washington, DC: The National Academies Press.
IOM. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.
IOM. 2009a. Beyond the HIPAA privacy rule: Enhancing privacy, improving health through research. Washington, DC: The National Academies Press.
IOM. 2009b. Conflict of interest in medical research, education, and practice. Washington, DC: The National Academies Press.

IOM. 2009c. Initial national priorities for comparative effectiveness research. Washington, DC: The National Academies Press.
IOM. 2010a. Redesigning the clinical effectiveness research paradigm: Innovation and practice-based approaches: Workshop summary. Washington, DC: The National Academies Press.
IOM. 2010b. Transforming clinical research in the United States: Challenges and opportunities: Workshop summary. Washington, DC: The National Academies Press.
IOM. 2011a. Clinical data as the basic staple of health learning: Workshop summary. Washington, DC: The National Academies Press.
IOM. 2011b. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press.
IOM. 2011c. Digital infrastructure for the learning health system: The foundation for continuous improvement in health and health care: A workshop summary. Washington, DC: The National Academies Press.
IOM. 2011d. Public engagement and clinical trials: New models and disruptive technologies: Workshop summary. Washington, DC: The National Academies Press.
Kalorama Information. 2010. Handhelds in healthcare: The world market for PDAs, tablet PCs, handheld monitors & scanners. http://www.kaloramainformation.com/Handhelds-Healthcare-PDAs-2703662/ (accessed August 31, 2011).
Kasper, D. L., and T. R. Harrison. 2005. Harrison’s principles of internal medicine. 16th ed., 2 vols. New York: McGraw-Hill, Medical Publications Division.
Kuperman, G. J. 2011. Health-information exchange: Why are we doing it, and what are we doing? Journal of the American Medical Informatics Association 18(5):678-682.
Larson, E. B. 2007. The HMO research network as a test bed. In The learning healthcare system: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 223-232.
Larsson, S., P. Lawyer, G. Garellick, B. Lindahl, and M. Lundström. 2011. Use of 13 disease registries in 5 countries demonstrates the potential to use outcome data to improve health care’s value. Health Affairs (Millwood) 31(1):220-227.
Lauer, M. S., and S. Skarlatos. 2010. Translational research for cardiovascular diseases at the National Heart, Lung, and Blood Institute: Moving from bench to bedside and from bedside to community. Circulation 121(7):929-933.
Lazarus, R., K. Yih, and R. Platt. 2006. Distributed data processing for public health surveillance. BMC Public Health 6:235.
Lee, D. H., and O. Vielemeyer. 2011. Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Archives of Internal Medicine 171(1):18-22.
Lee, H. Y., H. S. Ahn, J. A. Jang, Y. M. Lee, H. J. Hann, M. S. Park, and D. S. Ahn. 2005a. Comparison of evidence-based therapeutic intervention between community- and hospital-based primary care clinics. International Journal of Clinical Practice 59(8):975-980.
Lee, I. M., N. R. Cook, J. M. Gaziano, D. Gordon, P. M. Ridker, J. E. Manson, C. H. Hennekens, and J. E. Buring. 2005b. Vitamin E in the primary prevention of cardiovascular disease and cancer—The Women’s Health Study: A randomized controlled trial. Journal of the American Medical Association 294(1):56-65.
Let data speak to data. 2005. Nature 438(7068):531.
Lohr, S. 2011. Big medical groups begin patient data-sharing project. The New York Times, April 6.
Luce, B. R., J. M. Kramer, S. N. Goodman, J. T. Connor, S. Tunis, D. Whicher, and J. S. Schwartz. 2009. Rethinking randomized clinical trials for comparative effectiveness research: The need for transformational change. Annals of Internal Medicine 151(3):206-209.

Mandel, K. E. 2010. Aligning rewards with large-scale improvement. Journal of the American Medical Association 303(7):663-664.
Manolio, T. A. 2010. Emerging genomic information. In Redesigning the clinical effectiveness research paradigm: Innovation and practice-based approaches: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 189-206.
Manson, J. E. 2010. Hormone replacement therapy. In Redesigning the clinical effectiveness research paradigm: Innovation and practice-based approaches: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 89-104.
Manson, J. E., J. Hsia, K. C. Johnson, J. E. Rossouw, A. R. Assaf, N. L. Lasser, M. Trevisan, H. R. Black, S. R. Heckbert, R. Detrano, O. L. Strickland, N. D. Wong, J. R. Crouse, E. Stein, M. Cushman, and Women’s Health Initiative Investigators. 2003. Estrogen plus progestin and the risk of coronary heart disease. New England Journal of Medicine 349(6):523-534.
McCannon, C. J., and R. J. Perla. 2009. Learning networks for sustainable, large-scale improvement. Joint Commission Journal on Quality and Patient Safety 35(5):286-291.
McCannon, C. J., M. W. Schall, D. R. Calkins, and A. G. Nazem. 2006. Saving 100,000 lives in US hospitals. British Medical Journal 332(7553):1328-1330.
McCullough, J. S. 2008. The adoption of hospital information systems. Health Economics 17(5):649-664.
Meadows, T. A., D. L. Bhatt, A. T. Hirsch, M. A. Creager, R. M. Califf, E. M. Ohman, C. P. Cannon, K. A. Eagle, M. J. Alberts, S. Goto, S. C. Smith, P. W. Wilson, K. E. Watson, P. G. Steg, and REACH Registry Investigators. 2009. Ethnic differences in the prevalence and treatment of cardiovascular risk factors in US outpatients with peripheral arterial disease: Insights from the Reduction of Atherothrombosis for Continued Health (REACH) registry. American Heart Journal 158(6):1038-1045.
Meier, B., and J. Roberts. 2011. Hip implant complaints surge, even as the dangers are studied. New York Times. http://www.nytimes.com/2011/08/23/business/complaints-soar-on-hip-implants-as-dangers-are-studied.html?pagewanted=all (accessed January 16, 2012).
Murphy, S. N., G. Weber, M. Mendis, V. Gainer, H. C. Chueh, S. Churchill, and I. Kohane. 2010. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). Journal of the American Medical Informatics Association 17(2):124-130.
National Comprehensive Cancer Network. 2012. NCCN clinical practice guidelines in oncology: Breast cancer. Fort Washington, PA: National Comprehensive Cancer Network.
National Quality Forum. 2010. Quality data model 2.1. http://www.qualityforum.org/Projects/h/QDS_Model/Quality_Data_Model.aspx#t=2&s=&p= (accessed April 27, 2011).
Ness, R. B. 2007. Influence of the HIPAA privacy rule on health research. Journal of the American Medical Association 298(18):2164-2170.
Nolan, K., M. W. Schall, F. Erb, and T. Nolan. 2005. Using a framework for spread: The case of patient access in the Veterans Health Administration. Joint Commission Journal on Quality and Patient Safety 31(6):339-347.
Norton, W. E., and B. S. Mittman. 2010. Scaling-up health promotion/disease prevention programs in community settings: Barriers, facilitators, and initial recommendations. West Hartford, CT: The Patrick and Catherine Weldon Donaghue Medical Research Foundation.
O’Connor, G. T., S. K. Plume, E. M. Olmstead, J. R. Morton, C. T. Maloney, W. C. Nugent, F. Hernandez, R. Clough, B. J. Leavitt, L. H. Coffin, C. A. Marrin, D. Wennberg, J. D. Birkmeyer, D. C. Charlesworth, D. J. Malenka, H. B. Quinton, and J. F. Kasper. 1996. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. The Northern New England Cardiovascular Disease Study Group. Journal of the American Medical Association 275(11):841-846.

Pawlson, G. 2010. Course-of-care data. In Redesigning the clinical effectiveness research paradigm: Innovation and practice-based approaches: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 325-331.
PCORI (Patient-Centered Outcomes Research Institute). 2012. Preliminary draft methodology report: Our questions, our decisions: Standards for patient-centered outcomes research. http://www.pcori.org/assets/PCORI-MC-Research_Methods_Framework-Review_vFinalv3.pdf (accessed July 6, 2012).
Pearson, T. A., and T. A. Manolio. 2008. How to interpret a genome-wide association study. Journal of the American Medical Association 299(11):1335-1344.
Physician Consortium for Performance Improvement. 2011. Advancing health care improvement through patient registries: Moving forward. http://www.ama-assn.org/resources/doc/cqi/registry-meeting-paper.pdf (accessed August 30, 2011).
Pisano, G. P., R. M. J. Bohmer, and A. C. Edmondson. 2001. Organizational differences in rates of learning: Evidence from the adoption of minimally invasive cardiac surgery. Management Science 47(6):752-768.
Piwowar, H. A., M. J. Becich, H. Bilofsky, R. S. Crowley, and on behalf of the caBIG Data Sharing and Intellectual Capital Workspace. 2008. Towards a data sharing culture: Recommendations for leadership from academic health centers. PLoS Medicine 5(9):1315-1319.
Platt, R. 2010. Distributed data networks. In Redesigning the clinical effectiveness research paradigm: Innovation and practice-based approaches: Workshop summary. Institute of Medicine. Washington, DC: The National Academies Press. Pp. 253-261.
Podolny, J., and K. Page. 1998. Network forms of organization. Annual Review of Sociology 24:57-76.
popHealth. 2012. An open source quality measure reference implementation. http://projectpophealth.org/about.html (accessed June 13, 2012).
Prasad, V., V. Gall, and A. Cifu. 2011. The frequency of medical reversal. Archives of Internal Medicine 171(18):1675-1676.
President’s Information Technology Advisory Committee. 2001. Transforming health care through information technology. http://www.itrd.gov/pubs/pitac/pitac-hc-9feb01.pdf (accessed August 31, 2012).
President’s Information Technology Advisory Committee. 2004. Revolutionizing health care through information technology. http://www.itrd.gov/pitac/meetings/2004/20040617/20040615_hit.pdf (accessed August 31, 2012).
Redberg, R. F. 2007. Evidence, appropriateness, and technology assessment in cardiology: A case study of computed tomography. Health Affairs 26(1):86-95.
Research!America. 2004. Taking our pulse: The Parade/Research!America health poll. http://www.researchamerica.org/uploads/poll2004parade.pdf (accessed August 21, 2012).
Robert Wood Johnson Foundation. 2010. How registries can help performance measurement improve care. In White paper (High-Value Health Care Project). http://www.healthqualityalliance.org/userfiles/Final%20Registries%20paper%20062110(1).pdf (accessed August 31, 2012).
Robinson, J. C., L. P. Casalino, R. R. Gillies, D. R. Rittenhouse, S. S. Shortell, and S. Fernandes-Taylor. 2009. Financial incentives, quality improvement programs, and the adoption of clinical information technology. Medical Care 47(4):411-417.
Rogers, E. M. 2003. Diffusion of innovations. 5th ed. New York: Free Press.
Rossouw, J. E., G. L. Anderson, R. L. Prentice, A. Z. LaCroix, C. Kooperberg, M. L. Stefanick, R. D. Jackson, S. A. A. Beresford, B. V. Howard, K. C. Johnson, M. Kotchen, and J. Ockene. 2002. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: Principal results from the Women’s Health Initiative randomized controlled trial. Journal of the American Medical Association 288(3):321-333.

Rothman, M., L. Burke, P. Erickson, N. K. Leidy, D. L. Patrick, and C. D. Petrie. 2009. Use of existing patient-reported outcome (PRO) instruments and their modification: The ISPOR good research practices for evaluating and documenting content validity for the use of existing instruments and their modification PRO task force report. Value in Health 12(8):1075-1083.
Savage, E. B., T. B. Ferguson, and V. J. DiSesa. 2003. Use of mitral valve repair: Analysis of contemporary United States experience reported to the Society of Thoracic Surgeons National Cardiac Database. Annals of Thoracic Surgery 75(3):820-825.
Schectman, J. M., W. S. Schroth, D. Verme, and J. D. Voss. 2003. Randomized controlled trial of education and feedback for implementation of guidelines for acute low back pain. Journal of General Internal Medicine 18(10):773-780.
Shih, C., and E. Berliner. 2008. Diffusion of new technology and payment policies: Coronary stents. Health Affairs 27(6):1566-1576.
Simon, S. R., K. A. Chan, S. B. Soumerai, A. K. Wagner, S. E. Andrade, A. C. Feldstein, J. E. Lafata, R. L. Davis, and J. H. Gurwitz. 2005. Potentially inappropriate medication use by elderly persons in U.S. health maintenance organizations, 2000-2001. Journal of the American Geriatrics Society 53(2):227-232.
Simpson, L. A., L. Peterson, C. M. Lannon, S. B. Murphy, C. Goodman, Z. Ren, and A. Zajicek. 2010. Special challenges in comparative effectiveness research on children’s and adolescents’ health. Health Affairs (Millwood) 29(10):1849-1856.
Sittig, D. F., A. Wright, J. A. Osheroff, B. Middleton, J. M. Teich, J. S. Ash, E. Campbell, and D. W. Bates. 2008. Grand challenges in clinical decision support. Journal of Biomedical Informatics 41(2):387-392.
Soumerai, S. B., T. J. McLaughlin, J. H. Gurwitz, E. Guadagnoli, P. J. Hauptman, C. Borbas, N. Morris, B. McLaughlin, X. Gao, D. J. Willison, R. Asinger, and F. Gobel. 1998. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: A randomized controlled trial. Journal of the American Medical Association 279(17):1358-1363.
Stern, M., K. Williams, D. Eddy, and R. Kahn. 2008. Validation of prediction of diabetes by the Archimedes model and comparison with other predicting models. Diabetes Care 31(8):1670-1671.
Stewart, W. F., N. R. Shah, M. J. Selna, R. A. Paulus, and J. M. Walker. 2007. Bridging the inferential gap: The electronic health record and clinical evidence. Health Affairs 26(2):w181-w191.
Tinetti, M. E., and S. A. Studenski. 2011. Comparative effectiveness research and patients with multiple chronic conditions. New England Journal of Medicine 364(26):2478-2481.
Tricoci, P., J. M. Allen, J. M. Kramer, R. M. Califf, and S. C. Smith. 2009. Scientific evidence underlying the ACC/AHA clinical practice guidelines. Journal of the American Medical Association 301(8):831-841.
Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.
Tunis, S. R., J. Benner, and M. McClellan. 2010. Comparative effectiveness research: Policy context, methods development and research infrastructure. Statistics in Medicine 29(19):1963-1976.
Undem, T. 2010. Consumers and health information technology: A national survey. http://www.chcf.org/~/media/MEDIA%20LIBRARY%20Files/PDF/C/PDF%20ConsumersHealthInfoTechnologyNationalSurvey.pdf (accessed August 21, 2012).
Van Spall, H. G. C., A. Toren, A. Kiss, and R. A. Fowler. 2007. Eligibility criteria of randomized controlled trials published in high-impact general medical journals. Journal of the American Medical Association 297(11):1233-1240.

Vickers, A. J., and P. T. Scardino. 2009. The clinically-integrated randomized trial: Proposed novel method for conducting large trials at low cost. Trials 10:14.
Vogt, T. M., J. Elston-Lafata, D. Tolsma, and S. M. Greene. 2004. The role of research in integrated healthcare systems: The HMO research network. American Journal of Managed Care 10(9):643-648.
Vos, L., M. L. Dückers, C. Wagner, and G. G. van Merode. 2010. Applying the quality improvement collaborative method to process redesign: A multiple case study. Implementation Science 5:19.
Wagner, E. H., S. M. Greene, G. Hart, T. S. Field, S. Fletcher, A. M. Geiger, L. J. Herrinton, M. C. Hornbrook, C. C. Johnson, J. Mouchawar, S. J. Rolnick, V. J. Stevens, S. H. Taplin, D. Tolsma, and T. M. Vogt. 2005. Building a research consortium of large health systems: The Cancer Research Network. JNCI Monographs 2005(35):3-11.
Walach, H., T. Falkenberg, V. Fønnebø, G. Lewith, and W. B. Jonas. 2006. Circular instead of hierarchical: Methodological principles for the evaluation of complex interventions. BMC Medical Research Methodology 6:29.
Washington, A. E., and S. H. Lipstein. 2011. The Patient-Centered Outcomes Research Institute: Promoting better information, decisions, and health. New England Journal of Medicine 365(15):e31.
Weber, G. M., S. N. Murphy, A. J. McMurry, D. Macfadden, D. J. Nigrin, S. Churchill, and I. S. Kohane. 2009. The Shared Health Research Information Network (SHRINE): A prototype federated query tool for clinical data repositories. Journal of the American Medical Informatics Association 16(5):624-630.
Wei, F., D. L. Miglioretti, M. T. Connelly, S. E. Andrade, K. M. Newton, C. L. Hartsfield, K. A. Chan, and D. S. Buist. 2005. Changes in women’s use of hormones after the Women’s Health Initiative estrogen and progestin trial by race, education, and income. Journal of the National Cancer Institute Monographs (35):106-112.
Weisberg, H. I., V. C. Hayden, and V. P. Pontes. 2009. Selection criteria and generalizability within the counterfactual framework: Explaining the paradox of antidepressant-induced suicidality? Clinical Trials 6(2):109-118.
Wicks, P., T. E. Vaughan, M. P. Massagli, and J. Heywood. 2011. Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm. Nature Biotechnology 29(5):411-414.
Woolf, S. H. 2008. The meaning of translational research and why it matters. Journal of the American Medical Association 299(2):211-213.
Woolley, M., and S. M. Propst. 2005. Public attitudes and perceptions about health-related research. Journal of the American Medical Association 294(11):1380-1384.
Wright, A., and D. F. Sittig. 2008. A four-phase model of the evolution of clinical decision support architectures. International Journal of Medical Informatics 77(10):641-649.
Yih, W. K., B. Caldwell, R. Harmon, K. Kleinman, R. Lazarus, A. Nelson, J. Nordin, B. Rehm, B. Richter, D. Ritzwoller, E. Sherwood, and R. Platt. 2004. National bioterrorism syndromic surveillance demonstration program. Morbidity and Mortality Weekly Report 53(Suppl.):43-49.
Zerhouni, E. A. 2005. Translational and clinical science: Time for a new vision. New England Journal of Medicine 353(15):1621-1623.