Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches – Workshop Summary

4 Organizing and Improving Data Utility

INTRODUCTION

An enormous untapped capacity for data analysis is emerging as the research community hones its ability to collect, store, and study data. We now generate and have access to vastly larger collections of data than have ever been available before, and the potential for mining these robust databases to expand the evidence base is growing commensurately. New and emerging design models and tools for data analysis have significant potential to inform clinical effectiveness research. However, further work is needed to fully harness the data and insights these large databases contain. As these methods are tested and developed, they are likely to become an even more valuable part of the overall research arsenal—helping to address inefficiencies in current research practices, providing meaningful complements to existing approaches, and offering means to productively process the increasingly complex information generated as part of the research enterprise today. This chapter aims to (1) characterize some key implications of these larger electronically accessible health records and databases for research, and (2) identify the most pressing opportunities to apply these data more effectively to clinical effectiveness research. The papers that follow were derived from the workshop session devoted to organizing and improving data utility. These papers identify technological and policy advances needed to better harness these emerging data sources for research relevant to providing the care most appropriate to each patient. From his perspective at the Geisinger Health System, Ronald A. Paulus describes successful applications of electronic health records (EHRs) and
point-of-care data to create delivery-based evidence and take further steps in transforming clinical practice. These data offer the opportunity to support studies that complement and fill gaps in randomized controlled trial (RCT) findings. In the next paper, Alexander M. Walker from Worldwide Health Information Science Consultants and the Harvard School of Public Health discusses approaches to the development, application, and shared distribution of information from large administrative databases in clinical effectiveness research. He describes augmented databases that include laboratory and consumer data and discusses approaches to creating an infrastructure for medical record review, implementing methods for automated and quasi-automated examination of masses of data, developing “rapid-cycle” analyses to circumvent the delays of claims processing and adjudication, and opening new initiatives for collaborative sharing of data that respect patients’ and institutions’ legitimate needs for privacy and confidentiality. In the context of the ongoing debate about the relative value of observational data (e.g., as provided by registries) versus RCTs, Alan J. Moskowitz from Columbia University argues that registries provide data that are important complements to randomized trials (including efficacy and so-called pragmatic randomized trials) and to analyses of large administrative datasets. In fact, Moskowitz asserts, registries can assess “real-world” health and economic outcomes to help guide decision making on policies for patient care. Complicated research questions increasingly require current information derived from a variety of sources. One promising source is distributed research models, which provide multi-user access to enormous stores of highly useful data. Several models are currently being developed. 
Speaking on that topic was Richard Platt, from Harvard Pilgrim Health Care and Harvard Medical School, who reports on several complex efforts to design and implement distributed research models that derive large stores of useful data from a variety of sources for multiple users.

THE ELECTRONIC HEALTH RECORD AND CARE REENGINEERING: PERFORMANCE IMPROVEMENT REDEFINED

Ronald A. Paulus, M.D., M.B.A.; Walter F. Stewart, Ph.D., M.P.H.; Albert Bothe, Jr., M.D.; Seth Frazier, M.B.A.; Nirav R. Shah, M.D., M.P.H.; and Mark J. Selna, M.D.; Geisinger

Introduction

The U.S. healthcare system has struggled with numerous, seemingly intractable problems, including fragmented, uncoordinated, and highly variable care that results in safety risks and waste; consumer dissatisfaction; and the absence of the productivity and efficiency gains common in other industries (The Commonwealth Fund Commission on a High Performance Health System, 2005). Multiple stakeholders—patients and families, physicians, payors, employers, and policy makers—have all called for order-of-magnitude improvements in healthcare quality and efficiency. While many industries have leveraged technology to deliver vastly superior value in highly competitive environments over the last several decades, healthcare performance has, on a comparative basis, stagnated. In the absence of the ability to transform performance, healthcare “competition” has too often focused on delivering more expensive services promoted by better marketing and geographic presence; true outcomes-based competition has been lacking (Porter and Olmsted-Teisberg, 2006). The implications of these failures have been profound for the care delivery system and for all Americans. Recently, one area of hope has emerged: the adoption of electronic health records. EHRs, if successfully deployed, have tremendous potential to transform care delivery. Although attention has focused primarily on benefits derived from practice standardization and decision support, diverse uses of EHR data, including enhanced quality improvement and research activities, may offer equal or even greater potential for fundamental care delivery transformation. The limits of guideline-based evidence have produced a growing recognition that observational data may be essential to fill gaps in randomized controlled trial evidence if this transformation potential is to be fulfilled. Despite serious challenges, EHR data may offer an invaluable look into interventions and outcomes in clinical practice and promise to serve as a complementary source of evidence directly relevant to everyday practice needs. 
EHR data also may provide an essential complement to clinical performance improvement initiatives. Healthcare performance improvement activities are defined here as an ongoing cycle of positive change in organization, care process, decision management, workflow, or other components of care, regardless of methodology (collectively PI) (Hartig and Allison, 2007). Despite the underlying logic and history of success in other business sectors, the impact of healthcare performance improvement activities is often negligible or unsustainable. As with the evidence gap, EHR data offer promise as a transformation resource for PI. The inability to achieve broad and systematic quality and operational improvements in our delivery system has left all stakeholders deeply frustrated. This paper explores a potentially powerful new approach to leverage the latent synergy between EHR-based PI efforts and research and presents a vision of how PI at the clinical enterprise level is being transformed by the EHR and associated data aggregation and analysis activities. In that context, we describe a revision to the classic Plan-Do-Study-Act (PDSA)
cycle that reflects this integration and the development of a Performance Improvement Architecture (PI Architecture): a set of reusable parts, components, and modules, along with a process methodology that focuses relentlessly on eliminating all unnecessary care steps, safely automating processes, delegating care to the lowest-cost competent caregiver, maximizing supply chain efficiencies, and activating patients in their own self-care. Early Geisinger Health System (Geisinger) experience suggests that use of such a PI Architecture in creating change is likely to provide guidance on what to improve; an enhanced ability to implement and track initiatives and to link discrete elements of change to meaningful outcomes; a simultaneous focus on quality and efficiency; improved utilization of scarce healthcare resources and personnel; dramatic acceleration of the pace of change; and the capacity to maintain and grow that change over time.

Delivery-Based Evidence—A New EHR Role

When doctors care for patients, the very essence of the interaction requires extrapolation from knowledge and experience to tailor care for the particular circumstances at hand (i.e., bridging the “inferential gap”) (Stewart et al., 2007). No two patients are alike. While a certain level of “experimentation” is part of good care, the knowledge base required for such experimentation is growing at a pace that far exceeds the ongoing learning capacity of primary care providers and even most specialists. Hence, the care provided is often dated or experimental, venturing beyond what is known or optimal. How do providers move beyond the limits of what they can learn, or “trials where n = 1”? 
Although the RCT serves as the “gold standard” design for making causal inferences from data, there are practical limits to the utility of RCT-based evidence (Brook and Lohr, 1985; Flum et al., 2001; Krumholz et al., 1998). Today, RCTs are largely guided by the Food and Drug Administration (FDA) and related regulatory needs, not necessarily by the most important clinical questions. They are frequently performed in specialized settings (e.g., academic medical centers or the Veterans Administration) that are not representative of the broader arena of care delivery. RCTs are used to test drugs and devices in highly selected populations (i.e., patients with relatively low co-morbid disease burdens), under artificial conditions (i.e., a simple, focused question) that are often unrelated to usual clinical care (i.e., managing complex needs of patients with multiple co-morbidities), and are focused on outcomes that may be incomplete (e.g., short-term outcomes leading to changes in a disease mediator). Efficacy equivalence with existing therapies rather than comparative effectiveness is the dominant focus of most trials, with little or no thought given to economic constraints or consequences. RCTs are not usually positioned to address fundamental
questions of need for subgroups with different co-morbidities, and results rarely translate into the clinical effectiveness hoped for under real-world practice conditions (Hayward et al., 1995). As the population continues to age and the prevalence of co-morbidities increases, the gap between what we know from RCTs and what we need to know to support objective clinical decisions is widening, despite the pace at which new knowledge is being generated. Furthermore, decisions based primarily on randomized trial data do not incorporate local values, knowledge, or patient preferences into care decisions. From a distance, EHR data offer promise as a complementary source of evidence to more directly address questions relevant to everyday practice needs. However, a closer look at EHR data reveals challenges. Compared with data collection standards established for research, EHR data suffer from many limitations in both quality and completeness. In research settings, specialized staff follow strict data collection protocols; in routine care, even simple measures such as blood pressure or smoking status are measured with many more sources of error. For example, the wording of a question may differ, and responses to even identical questions can be documented in different ways. In routine care, the completeness of data may vary significantly by patient, being directly related to the underlying disease burden and the need for care. Furthermore, physicians may select a particular medication within a class based on the perceived severity of a patient’s disease, resulting in a complex form of bias that is difficult to eliminate (i.e., confounding by indication) (de Koning et al., 2005). In the near term, these and other limitations will raise questions about the credibility of evidence derived from EHR data. 
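The confounding-by-indication problem described above can be made concrete with a small simulation. The sketch below is purely illustrative: the drug labels, severity scale, and effect sizes are invented, and neither simulated drug has any true effect on the outcome.

```python
import random

random.seed(0)

# Simulate confounding by indication: physicians preferentially give
# "drug b" to patients with higher disease severity. Neither drug has
# any true effect on the outcome, yet a naive comparison makes drug b
# look harmful. All names and effect sizes here are illustrative.
patients = []
for _ in range(10_000):
    severity = random.random()                # 0 = mild, 1 = severe
    drug = "b" if random.random() < severity else "a"
    # The outcome depends only on severity, not on the drug received.
    bad_outcome = random.random() < severity * 0.5
    patients.append((severity, drug, bad_outcome))

def rate(group):
    """Fraction of a patient group with the bad outcome."""
    events = sum(1 for _, d, y in group if y)
    return events / len(group)

a = [p for p in patients if p[1] == "a"]
b = [p for p in patients if p[1] == "b"]
print(f"naive bad-outcome rate, drug A: {rate(a):.3f}")
print(f"naive bad-outcome rate, drug B: {rate(b):.3f}")

# Stratifying on severity (here, a coarse split) removes most of the
# apparent difference, the classic signature of confounding by indication.
severe_a = [p for p in a if p[0] >= 0.5]
severe_b = [p for p in b if p[0] >= 0.5]
print(f"severe stratum, drug A: {rate(severe_a):.3f}")
print(f"severe stratum, drug B: {rate(severe_b):.3f}")
```

Because sicker patients preferentially receive drug B, the naive comparison makes drug B look worse; stratifying on severity shrinks the apparent difference. Real EHR analyses face the harder problem that severity is recorded imperfectly or not at all.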
However, weaknesses inherent to EHR data as a source of evidence (e.g., false-positive associations) and to the current practice of PI (e.g., initiatives confined to guideline-based knowledge) can be mitigated through replication studies using independent EHRs and by using PI to test and validate EHR-based hypotheses.

Healthcare Quality Improvement

Since the early observations of Shewhart, Juran, and Deming, quality improvement has become routine in most business sectors and has been formalized into a diverse set of methodologies and underlying philosophies such as Total Quality Management, Continuous Quality Improvement, Six Sigma, Lean, Reengineering, and Microsystems (Juran, 1995). While latecomers, healthcare organizations have increasingly adopted these practices in an attempt to optimize outcomes. Healthcare PI involves an ongoing cycle of change in organization, care process, decision management, workflow, or other components of care, evolving from a culture often previously
dominated by blame and fault finding (e.g., peer and utilization review) to devising evidence-based “systems” of care. In general, healthcare PI relies on “planning” or “experimentation” approaches to improve outcomes. These models employ a diversity of philosophies, including a commitment to identifying, meeting, and exceeding stakeholder needs; continuously improving in conjunction with escalating performance standards; applying structured problem-solving processes using statistical and related tools such as control charts, cause-and-effect diagrams, and benchmarking; and empowering all employees to drive quality improvements. Experimentation-based PI typically relies on the PDSA model (Shewhart, 1939), as recently refined by the Institute for Healthcare Improvement (IHI) for the healthcare community (see Box 4-1) (Institute for Healthcare Improvement). Most approaches involve analysis that begins with a “diagnosis” of cause(s), albeit with limited data, followed by new data collection (frequently manual) to validate that the new process improves outcomes. Deployment of these models is often labor-intensive (e.g., evidence gathering, workflow observation), and effectuating change may take months, in part because of a lack of dedicated support resources as well as a historical lack of focus on scalability. As a result, each successive iteration may be performed without the ability to reuse previously developed tools, datasets, or analytics.

BOX 4-1 IHI PDSA Cycle

Step 1: Plan—Plan the test or observation, including a plan for collecting data. State the objective of the test. Make predictions about what will happen and why. Develop a plan to test the change. (Who? What? When? Where? What data need to be collected?)

Step 2: Do—Try out the test on a small scale. Carry out the test. Document problems and unexpected observations. Begin analysis of the data.

Step 3: Study—Set aside time to analyze the data and study the results. Complete the analysis of the data. Compare the data to your predictions. Summarize and reflect on what was learned.

Step 4: Act—Refine the change, based on what was learned from the test. Determine what modifications should be made. Prepare a plan for the next test.

Limitations to Healthcare Performance Improvement

Despite the underlying logic and history of success in other business sectors, the impact of healthcare PI has too frequently been negligible or unsustainable (Blumenthal and Kilo, 1998). The gap between the potential for PI and results from actual practice has been substantial, as have the consequences of historical failures to improve outcomes. A number of factors explain this gap. First, PI initiatives are commonly motivated by guideline-based evidence and, as such, are subject to the same limitations as the RCT data discussed above. Second, the PI-focused outcome may be only distantly or indirectly related to meaningful change in patient health or to a concrete measure of return on investment (ROI), largely because of the limits of available data and how such initiatives are organizationally motivated and executed. For example, there may not be the organizational will to make change happen or to support change efforts through to sustainability. Even when PI is applied to an important problem (e.g., slowing progression of diabetes) in a manner that improves a chosen metric (e.g., ordering an HbA1c lab test), the effort may have only an incomplete or delayed effect on more relevant outcomes (e.g., fewer complications, reductions in hospital admissions, or improved quality of life). Third, outcomes are usually not evaluated in real time or at frequent intervals, limiting the timeliness, ease, and speed of innovation, as well as the dynamism of the process itself. When change and the associated process unfold in slow motion, participants’ (or their authorizing leaders’) commitment may not rise to or maintain the threshold required to institutionalize new standards of practice. 
Fourth, validation that a PI intervention actually works may be lacking altogether or lacking in scientific or analytic rigor, leaving inference to the realm of guesswork. Fifth, when human or labor-intensive processes are required to maintain change, performance typically regresses to baseline levels as vigilance wanes. Lastly, without a broad strategic framework, PI can be perceived as the “initiative of the month,” leading to temporary improvements that are quickly lost due to inadequate hardwiring, support systems, vigilance, or PI integration across an organization.

The Geisinger Health System Experience

At Geisinger, PI is evolving to become a continuous process involving data generation, performance measurement, and analysis to transform clinical practice, mediated by iterative changes to clinical workflows that eliminate, automate, or delegate activities to meet quality and efficiency goals (see Figure 4-1).

FIGURE 4-1 Transformation infrastructure.

By way of background, Geisinger is an integrated delivery system located in central and northeastern Pennsylvania. It comprises nearly 700 employed physicians across 55 clinical practice sites providing adult and pediatric primary and specialty care; 3 acute care hospitals (one closed, two open staff); several specialty hospitals; a 215,000-member health plan (accounting for approximately one-third of Geisinger Clinic patient care revenue); and numerous other clinical services and programs. Geisinger serves a population of 2.5 million people, poorer and sicker than national benchmarks, with markedly less in- and out-migration. Organizationally, Geisinger manages through clinical service lines, each co-led by a physician-administrator pair. Strategic functions such as quality and innovation are centralized, with matrixed linkage to operational leaders. A commercial EHR platform adopted in 1995 is fully utilized across the system (Epic Systems Corporation, 2008). An integrated database consisting of EHR, financial, operational, claims, and patient satisfaction data serves as the foundation of a Clinical Decision Intelligence System (CDIS). At Geisinger, data are increasingly viewed as a core asset. Heavy emphasis is placed on the collection, normalization, and application of clinical, financial, operational, claims, and other data to inform, guide, measure, refine, and document the results of PI efforts. These data are combined with other inputs (e.g., evidence-based guidelines, third-party benchmarks) and leveraged via decision support applications, as schematically illustrated below (see Figure 4-2).
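As a rough illustration of the kind of integration such a system performs, the sketch below joins records from hypothetical EHR, claims, and satisfaction sources on a shared patient key. All field names, values, and thresholds are invented for illustration and are not drawn from the CDIS itself.

```python
# A minimal sketch of integrating multiple source systems into one
# normalized analytic record, keyed by patient. The field names and
# values are invented for illustration.
ehr = {
    "p01": {"last_hba1c": 8.9, "problem_list": ["diabetes"]},
    "p02": {"last_hba1c": 6.1, "problem_list": []},
}
claims = {
    "p01": {"admissions_12mo": 2},
    "p02": {"admissions_12mo": 0},
}
satisfaction = {
    "p01": {"survey_score": 3},
}

def integrate(patient_id):
    """Assemble one normalized record from the source systems."""
    record = {"patient_id": patient_id}
    for source in (ehr, claims, satisfaction):
        record.update(source.get(patient_id, {}))
    return record

cohort = [integrate(pid) for pid in sorted(ehr)]

# Example downstream use: flag diabetics with poor control and a
# recent admission, a question no single source system can answer alone.
flagged = [r for r in cohort
           if "diabetes" in r.get("problem_list", [])
           and r.get("last_hba1c", 0) > 8
           and r.get("admissions_12mo", 0) > 0]
print([r["patient_id"] for r in flagged])  # → ['p01']
```

The design point is that the join, not any single source, enables the clinical question; a production system would perform the same normalization across far larger and messier feeds.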
FIGURE 4-2 Clinical decision intelligence system design.

Transforming Performance Improvement: From a Human Process to a Scalable Performance Improvement Architecture

Early Geisinger experience supports the view that a PI Architecture, including EHR data and associated data warehousing capabilities, can transform healthcare PI, as well as how an organization behaves.

Data, System, and Analytic Requirements

Most performance improvement efforts lack the rich data required to validate outcomes (i.e., test the initial hypothesis) or the integrated data infrastructure required for rapid feedback to refine or modify large-scale interventions. When available at all, data are often limited in scope and consist of simple administrative and/or manually collected elements that may not be generated as part of the routine course of care. By contrast, robust EHRs inherently provide extensive, longitudinal data (i.e., clinical test results, vital signs, reason for order or other explicit information regarding the intent of the provider, etc.). When used in conjunction with an integrated data warehouse and normalized, searchable electronic data,
EHRs can motivate a quantum shift in the PI paradigm. As a core asset, this new PI Architecture is used to ask questions, pose hypotheses, refine understanding, and ultimately develop improvement initiatives that are directly relevant to current practice, with a dual focus on quality and efficiency. Natural “experiments” are intrinsic to EHR data. Patients with essentially the same or similar disease profiles receive different care. For example, one 60-year-old diabetic patient may be prescribed drug A, while a similar diabetic patient may be prescribed drug B because of formulary or practice style differences. When repeated hundreds or thousands of times, routinely collected EHR data offer a unique data mining resource for important clinical and economic insights. When combined with health plan claims and other information, additional questions may be answered, such as: Is there a difference in drug fill/refill rates between drugs A and B identified above? In addition to the EHR itself, an integrated, normalized data asset simplifies the logistics and cycle time for exploration, development of an ROI argument (e.g., forecasting, simulating), planning and implementation, and performance analysis. While data aggregation, standardization, and normalization are often centralized activities, data access should be as decentralized, simple, and low cost as possible (i.e., no incremental barrier to review). Providing clinical and business end-users with direct, unrestricted access helps to motivate a cultural shift toward identifying opportunities for improving care quality and access and for reducing the cost of care. 
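The fill/refill question posed above can be sketched against a toy pharmacy-claims extract. The record layout, patient identifiers, and values below are hypothetical; the point is only that claims data reduce the question to a simple per-patient aggregation.

```python
from collections import defaultdict

# Hypothetical pharmacy claims: (patient_id, drug, fill_number).
claims = [
    ("p1", "A", 1), ("p1", "A", 2), ("p1", "A", 3),
    ("p2", "A", 1),
    ("p3", "B", 1), ("p3", "B", 2),
    ("p4", "B", 1),
    ("p5", "B", 1),
]

# Collect the set of fill numbers seen for each (patient, drug) pair.
fills = defaultdict(set)
for patient, drug, fill_no in claims:
    fills[(patient, drug)].add(fill_no)

def refill_rate(drug):
    """Share of patients on a drug with at least one refill (>1 fill)."""
    on_drug = [p for (p, d) in fills if d == drug]
    refilled = [p for (p, d), f in fills.items() if d == drug and len(f) > 1]
    return len(refilled) / len(on_drug)

print(f"drug A refill rate: {refill_rate('A'):.2f}")  # 1 of 2 patients
print(f"drug B refill rate: {refill_rate('B'):.2f}")  # 1 of 3 patients
```

A real analysis would add eligibility windows, days-supply gaps, and censoring, but the shape of the computation is the same.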
In this way, everyday clinical hunches (e.g., a patient who used drug X subsequently shows impaired renal function) can be formulated into questions (e.g., “has this phenomenon been observed in the last X hundred patients that we cared for here?”), subjected to rapid analysis, and turned into “answers.” This capability to rapidly place both the individual patient and the broader population in context is missing in nearly all healthcare delivery organizations. This frame of reference is important for physicians, who have been shown to be overly sensitized by recent patient experience (Greco and Eisenberg, 1993; Poses and Anthony, 1991). The PI Architecture should be capable of answering previously imponderable questions such as “How many patients with chronic kidney disease do we care for?” and, in so doing, compare the results from operationally identified patients (e.g., derived from the Problem List) with biologically identified patients (e.g., via calculations from laboratory creatinine measurements). This level of data interrogation enables PI teams to be fully grounded in the reality of what actually happens, rather than guided by impressions, selective or hazy memories, or idyllic desires. Similarly, when using benchmarks to compare performance, hypothesis-driven data mining asks “Why are we different?” regardless of whether that difference is positive or negative. As such, it enables even a benchmarking leader to continue to innovate and improve (Gawande, 2004). This approach parallels Berwick’s recent call to “equip the workforce to study the effects of their efforts, actively
and objectively, as part of daily work” and creates a “culture of empirical review” as a critical determinant of success (Berwick, 2008).

Organizational Requirements

Global and local organizational requirements are essential to institutionalizing a culture of improvement using a PI Architecture. First, Board- and CEO-level support for transformation is required to support adoption. PI Architecture investment is not trivial, and several years are required to reach peak output; stable resourcing and strategic investment are essential to success. Control of and responsibility for the PI process (e.g., selection of issues, control of implementation, and evaluation of outcomes and ongoing feedback) must be entrusted to leaders held accountable for results. Where PI is centralized, local clinical and operational leaders must be engaged from the beginning so that they are part of, and motivated by, the opportunities inherent in the care process change. In addition, staff (or teams) should be experienced in change management, workflow analysis, health information technology (HIT) integration, and performance management. The extent to which this group has aligned goals and is free to innovate beyond usual organizational constraints, policies, and practices will dictate the breadth of possible change. Finally, passion for success is a powerful force. We believe that an entrepreneurial approach to PI, a well-established motivation in other business sectors, produces sustainable change, especially when balanced with appropriate skepticism in defining success and the “permission” to fail but with the expectation of ultimately persevering. 
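The chronic kidney disease question raised earlier, comparing patients flagged operationally on the Problem List with those identified biologically from creatinine, can be sketched as follows. This sketch uses the published 4-variable MDRD study equation for estimated GFR; the patient records and the stage-3 threshold of 60 mL/min/1.73 m^2 are illustrative, and a production query would run against the full warehouse rather than a literal list.

```python
# Hypothetical patient records:
# (id, serum creatinine in mg/dL, age in years, female?, on Problem List?)
patients = [
    ("p1", 2.1, 70, False, True),
    ("p2", 1.6, 64, True,  False),   # biologically CKD, not on the list
    ("p3", 0.9, 45, False, False),
]

def egfr_mdrd(scr, age, female):
    """Estimated GFR (mL/min/1.73 m^2), 4-variable MDRD study equation
    (race coefficient omitted in this simplified sketch)."""
    value = 175 * (scr ** -1.154) * (age ** -0.203)
    return value * 0.742 if female else value

# Operational definition: whoever carries the diagnosis on the Problem List.
operational = {pid for pid, scr, age, female, listed in patients if listed}

# Biological definition: eGFR below the stage-3 threshold of 60.
biological = {pid for pid, scr, age, female, listed in patients
              if egfr_mdrd(scr, age, female) < 60}

print("problem-list CKD:", sorted(operational))
print("eGFR-defined CKD:", sorted(biological))
print("missed by problem list:", sorted(biological - operational))
```

The difference between the two sets is exactly the gap the text describes: patients whose laboratory values meet the biological definition but who were never operationally labeled.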
At Geisinger, this culture is embedded through formal links between the traditional silos of Innovation, Clinical Effectiveness, Research, and the Clinical Enterprise along with critical underlying support from Information Technology. Innovation’s role is to support a broad range of change initiatives that are designed to fundamentally challenge historical assumptions. Innovation typically reaches for large successes with a focus on knowledge transfer across the organization and on creating a reusable, scalable transformation infrastructure. Clinical Effectiveness often takes a complementary approach to change across a broader swath of the organization with a focus on process redesign and skill development. The Clinical Enterprise represents the “front line” of patient care; its “sources of pain” provide a strong indication of opportunity; its ideas, clinical hunches, and feedback on innovation are essential for success. At Geisinger, research has a multi-year horizon. Adoption of a traditional research and development model, used in other business sectors, leads to a translation-focused process to bring value to the clinical enterprise, rather than a focus on traditional “knowledge creation.” This model
data (HIPAA’s “covered entities”) extracted a common set of data elements from their information systems, transformed them into a common format, and stored the data so they could access it easily for repeated queries; (2) to function as a distributed network, they executed identical computer programs that were developed by an agreed-upon process to which all participants provided input; (3) they typically shared summary data with a coordinating center, rather than person-level analysis files; and (4) they provided detailed, patient-level data, sometimes to a health department, only in the event of a specific need to know more about an individual. The National Bioterrorism Syndromic Surveillance Demonstration Program used a distributed network approach to surveillance for bioterrorism events and clusters of naturally occurring illness in five HMO Research Network health plans (Lazarus et al., 2006; Yih et al., 2004). This demonstration program used a fully distributed, automated method to identify clusters of illness. It accomplished this by having the health plans execute computer programs that created daily extracts of the preceding day’s encounters, put them into a standard format, and identified new episodes of illness that met the Centers for Disease Control and Prevention’s (CDC’s) criteria for syndromes of interest, such as influenza-like illness or lower gastrointestinal illness. The programs assigned the new episodes to the patients’ zip codes of residence, and each site then automatically communicated the daily totals of new episodes for each syndrome in each zip code to a coordinating center, which used a space and time scan statistic to identify unusual clusters of illness. Notice of these clusters was sent from the coordinating center back to the originating site and to the relevant health department. 
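The site-level aggregation just described can be sketched as follows. The encounter records, zip codes, and syndrome labels are invented, and the simple count threshold merely stands in for the space and time scan statistic the program actually used; the point of the sketch is that only summary counts, never patient-level records, leave each site.

```python
from collections import Counter

def site_daily_summary(encounters):
    """Run locally at each health plan; returns only aggregate counts
    per (zip code, syndrome), never patient-level data."""
    return Counter((e["zip"], e["syndrome"]) for e in encounters)

# Hypothetical encounter extracts held privately at two sites.
site_a = [
    {"patient": "a1", "zip": "01702", "syndrome": "influenza-like illness"},
    {"patient": "a2", "zip": "01702", "syndrome": "influenza-like illness"},
]
site_b = [
    {"patient": "b1", "zip": "01702", "syndrome": "influenza-like illness"},
    {"patient": "b2", "zip": "02138", "syndrome": "lower GI illness"},
]

# Coordinating center: pools the summaries and flags (zip, syndrome)
# pairs above a toy threshold standing in for the scan statistic.
pooled = site_daily_summary(site_a) + site_daily_summary(site_b)
THRESHOLD = 2
clusters = [key for key, n in pooled.items() if n > THRESHOLD]
print("possible clusters:", clusters)
```

Follow-up on a flagged cluster then reverses direction: the coordinating center notifies the originating site, which alone can map counts back to individual patients.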
If the health department wanted more information about the individuals who were part of the cluster, it contacted the health plan, which retained full information about the individuals and could provide identifying information as well as the full clinical detail available in the patients’ electronic medical records (Figure 4-5). This program illustrates the ability of a distributed system to provide immediate information to support public health needs. Although the health plans used information from their entire populations, they shared person-level information only about individuals in whom the health department was specifically interested.

FIGURE 4-5 Schematic view of data flow for the National Bioterrorism Syndromic Surveillance Demonstration Program.

The CDC-sponsored Vaccine Safety Datalink (VSD), founded in 1991, has operated since 2000 as a distributed data network in eight health plan members of the HMO Research Network. The VSD’s distributed network operates a real-time active postmarketing surveillance system for new vaccines. It relies on weekly automated submission to a coordinating center of counts of vaccine exposures and prespecified outcomes of interest in a total analyzable population of approximately 8 million individuals. It uses sequential analysis methods to identify signals of excess risk, which are validated by review of full-text medical records (Lieu et al., 2007). This distributed method of active surveillance recently identified a signal of excess seizures associated with a quadrivalent measles-mumps-rubella-varicella vaccine, prompting a change in the Advisory Committee on Immunization Practices’ recommendation for use of the vaccine (Centers for Disease Control and Prevention, 2008). The Vaccine Safety Datalink’s general approach to real-time postmarketing surveillance also should be applicable to drugs, although additional development will be required (Brown et al., 2007).

An ad hoc distributed network assembled to evaluate the risk of Guillain-Barré syndrome, a potentially life-threatening neurologic condition, following meningococcal conjugate vaccine (ClinicalTrials.gov, 2008) is notable both because of the size of the covered population and because it uses a hybrid data model that incorporates both distributed and pooled data methods. Five health plans with a combined membership exceeding 50 million people—half the number required by the FDAAA—are collaborating in this study. The health plans operate as a distributed network insofar as they create standard data files and execute shared computer programs that perform the large majority of the analyses, which are shared in tabular form and then pooled. The health plans also obtain detailed clinical information about potential cases of Guillain-Barré syndrome identified through diagnosis codes by obtaining full-text medical records. Final case status is determined by an expert panel that reviews these records after the health plans redact personal identifying information. The study includes both an analysis of the full cohort, which is performed in a fully distributed fashion, and a nested case-control study that uses multivariate methods
requiring creation of a pooled dataset involving 0.2 percent of the entire cohort (12,000 individuals). To support the case-control study, health plans create analysis-level files containing one record for each case or control. The only protected health information that the covered entity shares with the coordinating center is the month and year in which individuals were immunized.

These examples of distributed networks illustrate the potential for distributing much of the data processing as well as the data storage. Distributed processing minimizes the need to create pooled person-level datasets and is thus an important contributor to minimizing the amount of patient-level data that must leave the covered entities.

Organizational Models for Distributed Networks

Distributed networks can operate in several ways. Figure 4-6 shows a schematic of the network design that is planned as part of the AHRQ prototype distributed network mentioned above. The system will accommodate different kinds of data and is planned eventually to include claims data, inpatient and outpatient EMR data, registry data, and other information that are not part of the current prototype. It also will be able to integrate information in personally controlled health records (Mandl and Kohane, 2008), to the extent that these become widespread and that both individuals and the organizations that hold the records make them available.

FIGURE 4-6 Distributed research network prototype using central coordinating center.

In this system, a common query system will send queries to participating organizations. Queries go to participating sites through their firewalls, as much processing as possible takes place behind the firewalls, and responses are then sent back from the participating organizations. As noted above, the network will emphasize sharing the results of analyses rather than patient-level datasets.

Another organizational model is the peer-to-peer design used by the Shared Pathology Information Network (SPIN) (Drake et al., 2007). This model has been generalized to other uses, including public health surveillance and clinical research (McMurry et al., 2007). The peer-to-peer approach also underlies the planned Shared Health Research Information Network (SHRINE) (Brigham and Women’s Hospital, Harvard Medical School, 2008), developed at Harvard to support research use of separate data warehouses maintained by different healthcare institutions. This networking capability is an extension of software created for Informatics for Integrating Biology and the Bedside (i2b2) to support clinical research using healthcare institutions’ clinical data warehouses (Partners Healthcare, 2008).

Governance

Developing effective governance models for distributed networks to improve population health and healthcare delivery will be a major challenge.
Figure 4-7 illustrates a potential governance model for a multipurpose network that accommodates participation by multiple users. In this model the development and maintenance of infrastructure is largely separate from, though informed by, the users. Governance of infrastructure would focus on the creation of data standards and infrastructure that allow the same resources to support separate user groups and uses.

FIGURE 4-7 Potential schema for organization and governance of a multipurpose national distributed network. In this arrangement, the distributed network serves multiple users, which would include both public agencies, such as the FDA, CDC, NIH, or AHRQ, and private organizations, such as academic research organizations or industry. Different priorities and rules of access would apply depending on the use and the user.

In such a model, decisions about the availability of the network’s information to public and private users would rest most naturally with the holders of the data, who could choose as individual organizations whether or not to participate in individual activities or categories of activities on a case-by-case basis. However, since certain types of uses are likely to recur, individual data holders or groups of data holders may develop standards that apply generally to their participation. Such standards might address issues such as the extent of data holders’ participation in the development and execution of studies, ensuring confidentiality of personal information, secondary use of data, transparency regarding the specific studies being performed, and commitments to dissemination of results.

Specific examples of activities the network might support include the following: the FDA might use relevant parts of the network to support postmarketing surveillance, CDC might use the same or other parts to support prevention initiatives, AHRQ might use it to support comparative effectiveness research, and the NIH might use it to support clinical research. Private organizations would also be logical users of the network to support a wide range of inquiries. Each activity, or category of activity, could be led by the separate user groups, usually in collaboration with the data holders, and have separate governance mechanisms and funding models.
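The query pattern that runs through this section — a coordinating center distributes one analysis, each site executes it locally behind its firewall, and only aggregate results are pooled — can be sketched as follows. The function and field names are illustrative assumptions, not the interfaces of the AHRQ prototype or any actual network.

```python
# Illustrative site-local data. In a real network each plan would hold
# its own standardized files behind its firewall; these records are
# invented for the sketch.
SITE_DATA = {
    "plan_a": [{"drug": "X", "event": True}, {"drug": "X", "event": False}],
    "plan_b": [{"drug": "X", "event": True}, {"drug": "Y", "event": False}],
}

def site_query(records, drug):
    """Executes at the site: count exposures to the drug of interest and
    outcome events among the exposed. No person-level rows leave the site."""
    exposed = [r for r in records if r["drug"] == drug]
    return {"exposed": len(exposed),
            "events": sum(r["event"] for r in exposed)}

def coordinating_center(query_drug):
    """Distribute the same query to every site and pool only the
    summary counts each site returns."""
    totals = {"exposed": 0, "events": 0}
    for site, records in SITE_DATA.items():
        summary = site_query(records, query_drug)  # site returns aggregates only
        totals["exposed"] += summary["exposed"]
        totals["events"] += summary["events"]
    return totals

print(coordinating_center("X"))  # pooled counts; no patient-level data shared
```

The design choice the sketch makes visible is where the trust boundary sits: `site_query` is the shared program every participant runs, and the only data crossing the firewall is its aggregate return value — the property that distinguishes a distributed network from a pooled database.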
Summary

We will need distributed networks to assess medical care and its outcomes because they are almost certainly a more realistic way to develop and maintain these data than large pooled databases. Experience to date makes clear that it is technically feasible to build and use distributed networks, although considerable investment will be needed to develop additional resources and to create more efficient methods of using the networks. Furthermore, it appears feasible to develop distributed networks so that a common infrastructure can support an array of different uses in the public interest. Creation of effective governance mechanisms will be a considerable challenge, as will development of a sustainable mechanism to fund development and maintenance of infrastructure for both technical issues and governance.

REFERENCES

Agency for Healthcare Research and Quality. 2008. Developing a Distributed Research Network to Conduct Population-based Studies and Safety Surveillance. http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=nr&ProcessID=54 (accessed March 30, 2008).
Andrade, S. E., M. A. Raebel, A. N. Morse, R. L. Davis, K. A. Chan, J. A. Finkelstein, K. K. Fortman, H. McPhillips, D. Roblin, D. H. Smith, M. U. Yood, R. Platt, and J. H. Gurwitz. 2006. Use of prescription medications with a potential for fetal harm among pregnant women. Pharmacoepidemiology and Drug Safety 15(8):546-554.
Berwick, D. M. 2008. The science of improvement. Journal of the American Medical Association 299(10):1182-1184.
Blumenthal, D., and C. M. Kilo. 1998. A report card on continuous quality improvement. Milbank Quarterly 76(4):511, 625-648.
Brigham and Women’s Hospital, Harvard Medical School. 2008. Decision Systems Group, Weekly Seminars. http://www.dsg.harvard.edu/index.php/Main/Seminars2007#d51 (accessed April 19, 2008).
Brook, R. H., and K. N. Lohr. 1985. Efficacy, effectiveness, variations, and quality: Boundary-crossing research. Medical Care 23(5):710-722.
Brown, J. S., M. Kulldorff, K. A. Chan, R. L. Davis, D. Graham, P. T. Pettus, S. E. Andrade, M. A. Raebel, L. Herrinton, D. Roblin, D. Boudreau, D. Smith, J. H. Gurwitz, M. J. Gunter, and R. Platt. 2007. Early detection of adverse drug events within population-based health networks: Application of sequential testing methods. Pharmacoepidemiology and Drug Safety 16(12):1275-1284.
Buehler, J. W., D. M. Sosin, and R. Platt. 2007. Evaluation of surveillance systems for early epidemic detection. In Infectious Disease Surveillance, edited by N. M. M’ikanatha, R. Lynfield, C. A. Van Beneden, and H. de Valk. Malden, MA: Blackwell Publishing.
Centers for Disease Control and Prevention. 2008. Vaccines and Immunizations: Recommendations and Guidelines. http://www.cdc.gov/vaccines/recs/ACIP/slides-feb08.htm#mmrv (accessed March 31, 2008).
ClinicalTrials.gov. 2008. Safety Study of GBS Following Menactra Meningococcal Vaccination. http://clinicaltrials.gov/ct2/show/NCT00575653?term=NCT00575653&rank=1 (accessed March 31, 2008).
The Commonwealth Fund Commission on a High Performance Health System. 2005. Framework for a High Performance Health System for the United States. New York: The Commonwealth Fund.
de Koning, J. S., N. S. Klazinga, P. J. Koudstaal, A. Prins, G. J. Borsboom, and J. P. Mackenbach. 2005. The role of ’confounding by indication’ in assessing the effect of quality of care on disease outcomes in general practice: Results of a case-control study. BMC Health Services Research 5(1):10.
Drake, T. A., J. Braun, A. Marchevsky, I. S. Kohane, C. Fletcher, H. Chueh, B. Beckwith, D. Berkowicz, F. Kuo, Q. T. Zeng, U. Balis, A. Holzbach, A. McMurry, C. E. Gee, C. J. McDonald, G. Schadow, M. Davis, E. M. Hattab, L. Blevins, J. Hook, M. Becich, R. S. Crowley, S. E. Taube, and J. Berman. 2007. A system for sharing routine surgical pathology specimens across institutions: The shared pathology informatics network. Human Pathology 38(8):1212-1225.
Egorova, N., et al. 2008. National outcomes for the treatment of ruptured abdominal aortic aneurysm: Comparison of open versus endovascular repairs. Journal of Vascular Surgery 48(5):1092.e2-1100.e2.
Egorova, N., J. Giacovelli, A. Gelijns, L. Mureebe, G. Greco, N. Morrissey, R. Nowygrod, A. Moskowitz, J. McKinsey, and K. C. Kent. 2009. Defining high risk patients for endovascular aneurysm repair. Journal of Vascular Surgery 50(6):1271-1279.
Eng, P. M., J. D. Seeger, J. Loughlin, C. R. Clifford, S. Mentor, and A. M. Walker. 2008. Supplementary data collection with case-cohort analysis to address potential confounding in a cohort study of thromboembolism in oral contraceptive initiators matched on claims-based propensity scores. Pharmacoepidemiology and Drug Safety 17(3):297-305.
Epic Systems Corporation. 2008. EpicCare. http://www.epicsystems.com/ (accessed July 8, 2008).
FDA (Food and Drug Administration). 2007a. Food and Drug Administration Sentinel Network Public Meeting. http://www.fda.gov/oc/op/sentinel/transcript030707.html (accessed March 30, 2008).
———. 2007b. Law Strengthens FDA. http://www.fda.gov/oc/initiatives/advance/fdaaa.html (accessed March 30, 2008).
———. 2008. Sentinel Network. http://www.fda.gov/oc/op/sentinel/ (accessed March 30, 2008).
Flum, D. R., A. Morris, T. Koepsell, and E. P. Dellinger. 2001. Has misdiagnosis of appendicitis decreased over time? A population-based analysis. Journal of the American Medical Association 286(14):1748-1753.
Gawande, A. 2004. The bell curve: What happens when patients find out how good their doctors really are? The New Yorker, December 6, 2004.
Gelijns, A. C., N. Rosenberg, and A. J. Moskowitz. 1998. Capturing the unexpected benefits of medical research. New England Journal of Medicine 339(10):693-698.
Gelijns, A. C., L. D. Brown, C. Magnell, E. Ronchi, and A. J. Moskowitz. 2005. Evidence, politics, and technological change. Health Affairs (Millwood) 24(1):29-40.
Gliklich, R. E., and N. Dreyer. 2007. AHRQ Registries for Evaluating Patient Outcomes: A User’s Guide. AHRQ Publication 07-EHC001-1. Rockville, MD: U.S. Department of Health and Human Services, Public Health Service, Agency for Healthcare Research and Quality.
Greco, P. J., and J. M. Eisenberg. 1993. Changing physicians’ practices. New England Journal of Medicine 329(17):1271-1273.
Hannan, E. L., M. J. Racz, G. Walford, R. H. Jones, T. J. Ryan, E. Bennett, A. T. Culliford, O. W. Isom, J. P. Gold, and E. A. Rose. 2005. Long-term outcomes of coronary-artery bypass grafting versus stent implantation. New England Journal of Medicine 352(21):2174-2183.
Hartig, J. R., and J. Allison. 2007. Physician performance improvement: An overview of methodologies. Clinical and Experimental Rheumatology 25(6 Suppl. 47):50-54.
Hayward, R. S., M. C. Wilson, S. R. Tunis, E. B. Bass, and G. Guyatt. 1995. Users’ guides to the medical literature. VIII. How to use clinical practice guidelines. A. Are the recommendations valid? The Evidence-Based Medicine Working Group. Journal of the American Medical Association 274(7):570-574.
Hlatky, M. A., K. L. Lee, F. E. Harrell, Jr., R. M. Califf, D. B. Pryor, D. B. Mark, and R. A. Rosati. 1984. Tying clinical research to patient care by use of an observational database. Statistics in Medicine 3(4):375-387.
Institute for Healthcare Improvement. Testing Changes. http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/HowToImprove/testingchanges.htm (accessed July 8, 2008).
INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support). 2008. http://www.intermacs.org/ (accessed July 9, 2008).
IOM (Institute of Medicine). 2006. The Future of Drug Safety. Washington, DC: The National Academies Press.
Jick, S., J. A. Kaye, L. Li, and H. Jick. 2007. Further results on the risk of nonfatal venous thromboembolism in users of the contraceptive transdermal patch compared to users of oral contraceptives containing norgestimate and 35 microg of ethinyl estradiol. Contraception 76(1):4-7.
Juran, J. M. 1995. Managerial Breakthrough: The Classic Book on Improving Management Performance. 30th anniversary ed. New York: McGraw-Hill.
Krumholz, H. M., M. J. Radford, Y. Wang, J. Chen, A. Heiat, and T. A. Marciniak. 1998. National use and effectiveness of beta-blockers for the treatment of elderly patients after acute myocardial infarction: National cooperative cardiovascular project. Journal of the American Medical Association 280(7):623-629.
Lazarus, R., K. Yih, and R. Platt. 2006. Distributed data processing for public health surveillance. BMC Health Services Research 6:235.
Lietz, K., J. W. Long, A. G. Kfoury, M. S. Slaughter, M. A. Silver, C. A. Milano, J. G. Rogers, Y. Naka, D. Mancini, and L. W. Miller. 2007. Outcomes of left ventricular assist device implantation as destination therapy in the post-REMATCH era: Implications for patient selection. Circulation 116(5):497-505.
Lieu, T. A., M. Kulldorff, R. L. Davis, E. M. Lewis, E. Weintraub, K. Yih, R. Yin, J. S. Brown, and R. Platt. 2007. Real-time vaccine safety surveillance for the early detection of adverse events. Medical Care 45(10 Suppl. 2):S89-S95.
Mandl, K. D., and I. S. Kohane. 2008. Tectonic shifts in the health information economy. New England Journal of Medicine 358(16):1732-1737.
The Markle Foundation. 2006. The Common Framework: Overview and Principles. Connecting for Health. http://www.connectingforhealth.org/commonframework/docs/Overview.pdf (accessed March 19, 2008).
McMurry, A. J., C. A. Gilbert, B. Y. Reis, H. C. Chueh, I. S. Kohane, and K. D. Mandl. 2007. A self-scaling, distributed information architecture for public health, research, and clinical care. Journal of the American Medical Informatics Association 14(4):527-533.
Miller, L. W., K. E. Nelson, R. R. Bostic, K. Tong, M. S. Slaughter, and J. W. Long. 2006. Hospital costs for left ventricular assist devices for destination therapy: Lower costs for implantation in the post-REMATCH era. Journal of Heart and Lung Transplantation 25(7):778-784.
Mona Eng, P., J. D. Seeger, J. Loughlin, K. Oh, and A. M. Walker. 2007. Serum potassium monitoring for users of ethinyl estradiol/drospirenone taking medications predisposing to hyperkalemia: Physician compliance and survey of knowledge and attitudes. Contraception 75(2):101-107.
Neaton, J. D., S. L. Normand, A. Gelijns, R. C. Starling, D. L. Mann, and M. A. Konstam. 2007. Designs for mechanical circulatory support device studies. Journal of Cardiac Failure 13(1):63-74.
Oz, M. C., A. C. Gelijns, L. Miller, C. Wang, P. Nickens, R. Arons, K. Aaronson, W. Richenbacher, C. van Meter, K. Nelson, A. Weinberg, J. Watson, E. A. Rose, and A. J. Moskowitz. 2003. Left ventricular assist devices as permanent heart failure therapy: The price of progress. Annals of Surgery 238(4):577-583; discussion 583-585.
Partners Healthcare. 2008. Informatics for Integrating Biology and the Bedside. http://www.i2b2.org/ (accessed April 19, 2008).
Porter, M. E., and E. Olmsted-Teisberg. 2006. Redefining Health Care: Creating Value-Based Competition on Results. Boston, MA: Harvard Business School Press.
Poses, R. M., and M. Anthony. 1991. Availability, wishful thinking, and physicians’ diagnostic judgments for patients with suspected bacteremia. Medical Decision Making 11(3):159-168.
Raebel, M. A., D. L. McClure, S. R. Simon, K. A. Chan, A. Feldstein, S. E. Andrade, J. E. Lafata, D. Roblin, R. L. Davis, M. J. Gunter, and R. Platt. 2007. Laboratory monitoring of potassium and creatinine in ambulatory patients receiving angiotensin converting enzyme inhibitors and angiotensin receptor blockers. Pharmacoepidemiology and Drug Safety 16(1):55-64.
Robert Wood Johnson Foundation. 2007. National Effort to Measure and Report on Quality and Cost-effectiveness of Health Care Unveiled. http://www.rwjf.org/pr/product.jsp?id=22371&typeid=160 (accessed March 30, 2008).
Roberts, T. G., Jr., and B. A. Chabner. 2004. Beyond fast track for drug approvals. New England Journal of Medicine 351(5):501-505.
Rose, E. A., A. C. Gelijns, A. J. Moskowitz, D. F. Heitjan, L. W. Stevenson, W. Dembitsky, J. W. Long, D. D. Ascheim, A. R. Tierney, R. G. Levitan, J. T. Watson, P. Meier, N. S. Ronan, P. A. Shapiro, R. M. Lazar, L. W. Miller, L. Gupta, O. H. Frazier, P. Desvigne-Nickens, M. C. Oz, and V. L. Poirier. 2001. Long-term mechanical left ventricular assistance for end-stage heart failure. New England Journal of Medicine 345(20):1435-1443.
Sands, B. E., M. S. Duh, C. Cali, A. Ajene, R. L. Bohn, D. Miller, J. A. Cole, S. F. Cook, and A. M. Walker. 2006. Algorithms to identify colonic ischemia, complications of constipation and irritable bowel syndrome in medical claims data: Development and validation. Pharmacoepidemiology and Drug Safety 15(1):47-56.
Schneeweiss, S. 2007. Developments in post-marketing comparative effectiveness research. Clinical Pharmacology and Therapeutics 82(2):143-156.
Second International Conference on Improving Use of Medicines. 2004. Recommendations on Insurance Coverage. http://mednet3.who.int/icium/icium2004/Documents/Insurance%20coverage.doc (accessed July 8, 2008).
Seeger, J. D., P. L. Williams, and A. M. Walker. 2005. An application of propensity score matching using claims data. Pharmacoepidemiology and Drug Safety 14(7):465-476.
Seeger, J. D., J. Loughlin, P. M. Eng, C. R. Clifford, J. Cutone, and A. M. Walker. 2007. Risk of thromboembolism in women taking ethinylestradiol/drospirenone and other oral contraceptives. Obstetrics and Gynecology 110(3):587-593.
Shewhart, W. A. 1939. Statistical Method from the Viewpoint of Quality Control. Reprint, New York: Dover Publications, 1986.
Stewart, W. F., N. R. Shah, M. J. Selna, R. A. Paulus, and J. M. Walker. 2007. Bridging the inferential gap: The electronic health record and clinical evidence. Health Affairs (Millwood) 26(2):w181-w191.
Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.
U.S. Department of Health and Human Services. 2008. Medical Privacy—National Standards to Protect the Privacy of Personal Health Information. http://www.hhs.gov/ocr/hipaa/ (accessed March 30, 2008).
Wagner, A. K., K. A. Chan, I. Dashevsky, M. A. Raebel, S. E. Andrade, J. E. Lafata, R. L. Davis, J. H. Gurwitz, S. B. Soumerai, and R. Platt. 2006. FDA drug prescribing warnings: Is the black box half empty or half full? Pharmacoepidemiology and Drug Safety 15(6):369-386.
Walker, A. M., and R. P. Wise. 2002. Precautions for proactive surveillance. Pharmacoepidemiology and Drug Safety 11(1):17-20.
Walker, A. M., G. Schneider, J. Yeaw, B. Nordstrom, S. Robbins, and D. Pettitt. 2006. Anemia as a predictor of cardiovascular events in patients with elevated serum creatinine. Journal of the American Society of Nephrology 17(8):2293-2298.
Yih, W. K., B. Caldwell, R. Harmon, K. Kleinman, R. Lazarus, A. Nelson, J. Nordin, B. Rehm, B. Richter, D. Ritzwoller, E. Sherwood, and R. Platt. 2004. National bioterrorism syndromic surveillance demonstration program. Morbidity and Mortality Weekly Report 53(Suppl.):43-49.