
4
Organizing and Improving Data Utility

INTRODUCTION

An enormous untapped capacity for data analysis is emerging as the research community hones its capacity to collect, store, and study data. We are now generating and have access to vastly larger collections of data than have been available before. The potential for mining these robust databases to expand the evidence base is experiencing commensurate growth. New and emerging design models and tools for data analysis have significant potential to inform clinical effectiveness research. However, further work is needed to fully harness the data and insights these large databases contain. As these methods are tested and developed, they are likely to become an even more valuable part of the overall research arsenal—helping to address inefficiencies in current research practices, providing meaningful complements to existing approaches, and offering means to productively process the increasingly complex information generated as part of the research enterprise today.

This chapter aims to (1) characterize some key implications of these larger electronically accessible health records and databases for research, and (2) identify the most pressing opportunities to apply these data more effectively to clinical effectiveness research. The papers that follow were derived from the workshop session devoted to organizing and improving data utility. These papers identify technological and policy advances needed to better harness these emerging data sources for research relevant to providing the care most appropriate to each patient.

From his perspective at the Geisinger Health System, Ronald A. Paulus describes successful applications of electronic health records (EHRs) and point-of-care data to create delivery-based evidence and take further steps toward transforming clinical practice. These data present the opportunity to develop evidence for studies needed to complement and fill gaps in randomized controlled trial (RCT) findings. In the next paper, Alexander M. Walker from World Health Information Science Consultants and the Harvard School of Public Health discusses approaches to the development, application, and shared distribution of information from large administrative databases in clinical effectiveness research. He describes augmented databases that include laboratory and consumer data and discusses approaches to creating an infrastructure for medical record review, implementing methods for automated and quasi-automated examination of masses of data, developing “rapid-cycle” analyses to circumvent the delays of claims processing and adjudication, and opening new initiatives for collaborative sharing of data that respect patients’ and institutions’ legitimate needs for privacy and confidentiality. In the context of the ongoing debate about the relative value of observational data (e.g., as provided by registries) versus RCTs, Alan J. Moskowitz from Columbia University argues that registries provide data that are important complements to randomized trials (including efficacy and so-called pragmatic randomized trials) and to analyses of large administrative datasets. In fact, Moskowitz asserts, registries can assess “real-world” health and economic outcomes to help guide decision making on policies for patient care.

Complicated research questions increasingly need current information derived from a variety of sources. One promising approach is the distributed research model, which provides multiple users with access to enormous stores of highly useful data; several such models are currently being developed. Speaking on this topic, Richard Platt, from Harvard Pilgrim Health Care and Harvard Medical School, reports on several complex efforts to design and implement distributed research models that draw useful data from a variety of sources for multiple users.

THE ELECTRONIC HEALTH RECORD AND CARE REENGINEERING: PERFORMANCE IMPROVEMENT REDEFINED

Ronald A. Paulus, M.D., M.B.A.; Walter F. Stewart, Ph.D., M.P.H.; Albert Bothe, Jr., M.D.; Seth Frazier, M.B.A.; Nirav R. Shah, M.D., M.P.H.; and Mark J. Selna, M.D.; Geisinger

Introduction

The U.S. healthcare system has struggled with numerous, seemingly intractable problems including fragmented, uncoordinated, and highly variable care that results in safety risks and waste; consumer dissatisfaction; and the absence of the productivity and efficiency gains common in other industries (The Commonwealth Fund Commission on a High Performance Health System, 2005). Multiple stakeholders—patients and families, physicians, payors, employers, and policy makers—have all called for order-of-magnitude improvements in healthcare quality and efficiency. While many industries have leveraged technology to deliver vastly superior value in highly competitive environments over the last several decades, healthcare performance has, on a comparative basis, stagnated. In the absence of the ability to transform performance, health care “competition” has too often focused on delivering more expensive services promoted by better marketing and geographic presence; true outcomes-based competition has been lacking (Porter and Olmsted-Teisberg, 2006). The implications of these failures have been profound for the care delivery system and for all Americans.

Recently, one area of hope has emerged: the adoption of electronic health records. EHRs, if successfully deployed, have tremendous potential to transform care delivery. Although attention has focused primarily on benefits derived from practice standardization and decision support, diverse uses of EHR data, including enhanced quality improvement and research activities, may offer an equal or even greater potential for fundamental care delivery transformation. The limits of guideline-based evidence have produced a growing recognition that observational data may be essential to fill gaps in the randomized controlled trial evidence needed to fulfill this transformation potential. Despite serious challenges, EHR data may offer an invaluable look into interventions and outcomes in clinical practice and hold promise as a complementary source of evidence directly relevant to everyday practice needs.

EHR data also may provide an essential complement to clinical performance improvement initiatives. Healthcare performance improvement activities are defined here as an ongoing cycle of positive change in organization, care process, decision management, workflow, or other components of care, regardless of methodology (collectively, PI) (Hartig and Allison, 2007). Despite the underlying logic and a history of success in other business sectors, the impact of healthcare performance improvement activities is often negligible or unsustainable. As with the evidence gap, EHR data offer promise as a transformation resource for PI. The inability to achieve broad and systematic quality and operational improvements in our delivery system has left all stakeholders deeply frustrated.

This paper explores a potentially powerful new approach to leveraging the latent synergy between EHR-based PI efforts and research, and presents a vision of how PI at the clinical enterprise level is being transformed by the EHR and associated data aggregation and analysis activities. In that context, we describe a revision to the classic Plan-Do-Study-Act (PDSA) cycle that reflects this integration, and the development of a Performance Improvement Architecture (PI Architecture): a set of reusable parts, components, and modules, along with a process methodology that focuses relentlessly on eliminating all unnecessary care steps, safely automating processes, delegating care to the lowest-cost competent caregiver, maximizing supply chain efficiencies, and activating patients in their own self-care. Early Geisinger Health System (Geisinger) experience suggests that use of such a PI Architecture in creating change is likely to provide guidance on what to improve; an enhanced ability to implement and track initiatives and to specifically link discrete elements of change to meaningful outcomes; a simultaneous focus on quality and efficiency; improved utilization of scarce healthcare resources and personnel; dramatic acceleration of the pace of change; and the capacity to maintain and grow that change over time.

Delivery-Based Evidence—A New EHR Role

When doctors care for patients, the very essence of the interaction requires extrapolation from knowledge and experience to tailor care to the particular circumstances at hand (i.e., bridging the “inferential gap”) (Stewart et al., 2007). No two patients are alike. While a certain level of “experimentation” is a part of good care, the knowledge base required for such experimentation is growing at a pace that far exceeds the ongoing learning capacity of primary care providers and even most specialists. Hence, the care provided is often dated or experimental, venturing beyond what is known or optimal.

How do providers move beyond the limits of what they can learn or “trials where n = 1”? Although the RCT serves as the “gold standard” design for making causal inferences from data, there are practical limits to the utility of RCT-based evidence (Brook and Lohr, 1985; Flum et al., 2001; Krumholz et al., 1998). Today, RCTs are largely guided by the Food and Drug Administration (FDA) and related regulatory needs, not necessarily by the most important clinical questions. They are frequently performed in specialized settings (e.g., academic medical centers or the Veterans Administration) that are not representative of the broader arena of care delivery. RCTs are used to test drugs and devices in highly selected populations (i.e., patients with relatively low co-morbid disease burdens), under artificial conditions (i.e., a simple, focused question) that are often unrelated to usual clinical care (i.e., managing complex needs of patients with multiple co-morbidities), and are focused on outcomes that may be incomplete (e.g., short-term outcomes leading to changes in a disease mediator). Efficacy equivalence with existing therapies rather than comparative effectiveness is the dominant focus of most trials, with little or no thought given to economic constraints or consequences. RCTs are not usually positioned to address fundamental questions of need for subgroups with different co-morbidities, and results rarely translate into the clinical effectiveness hoped for under real-world practice conditions (Hayward et al., 1995). As the population continues to age and the prevalence of co-morbidities increases, the gap between what we know from RCTs and what we need to know to support objective clinical decisions is increasing, despite the pace at which new knowledge is being generated. Furthermore, decisions based primarily on randomized trial data do not incorporate local values, knowledge, or patient preferences into care decisions.

From a distance, EHR data offer promise as a complementary source of evidence to more directly address questions relevant to everyday practice needs. However, a closer look at EHR data reveals challenges. Compared to data collection standards established for research, EHR data suffer from many limitations in both quality and completeness. In research settings, specialized staff follow strict data collection protocols; in routine care, even simple measures such as blood pressure or smoking status are measured with many more sources of error. For example, the wording of a question may differ, and responses to even identical questions can be documented in different manners. In routine care, the completeness of data may vary significantly by patient, being directly related to the underlying disease burden and the need for care. Furthermore, physicians may select a particular medication within a class based on the perceived severity of a patient’s disease, resulting in a complex form of bias that is difficult to eliminate (i.e., confounding by indication) (de Koning et al., 2005). In the near term, these and other limitations will raise questions about the credibility of evidence derived from EHR data. However, weaknesses inherent to EHR data as a source of evidence (e.g., false-positive associations) and to the current practice of PI (e.g., initiatives confined to guideline-based knowledge) can be mitigated through replication studies using independent EHRs and by using PI to test and validate EHR-based hypotheses.

Healthcare Quality Improvement

Since the early observations of Shewhart, Juran, and Deming, quality improvement has become routine in most business sectors and has been formalized into a diverse set of methodologies and underlying philosophies such as Total Quality Management, Continuous Quality Improvement, Six Sigma, Lean, Reengineering, and Microsystems (Juran, 1995). While latecomers, healthcare organizations have increasingly adopted these practices in an attempt to optimize outcomes. Healthcare PI involves an ongoing cycle of change in organization, care process, decision management, workflow, or other components of care, evolving from a culture often previously dominated by blame and fault finding (e.g., peer and utilization review) to devising evidence-based “systems” of care.

In general, healthcare PI relies on “planning” or “experimentation” approaches to improve outcomes. These models employ a diversity of philosophies including a commitment to identifying, meeting, and exceeding stakeholder needs; continuously improving in conjunction with escalating performance standards; applying structured, problem-solving processes using statistical and related tools such as control charts, cause-and-effect diagrams, and benchmarking; and empowering all employees to drive quality improvements. Experimentation-based PI typically relies on the PDSA model (Shewhart, 1939), as recently refined by the Institute for Healthcare Improvement (IHI) for the healthcare community (see Box 4-1) (Institute for Healthcare Improvement). Most approaches involve analysis that begins with a “diagnosis” of cause(s), albeit with limited data, followed by new data collection (frequently manual) to validate that the new process improves outcomes. Deployment of these models is often labor-intensive (e.g., evidence gathering, workflow observation), and effectuating change may take months, in part due to a lack of dedicated support resources as well as a historical lack of focus on scalability. As a result, each successive iteration may be performed without the ability to reuse previously developed tools, datasets, or analytics.

BOX 4-1

IHI PDSA Cycle

Step 1: Plan. Plan the test or observation, including a plan for collecting data.

  • State the objective of the test.

  • Make predictions about what will happen and why.

  • Develop a plan to test the change. (Who? What? When? Where? What data need to be collected?)

Step 2: Do. Try out the test on a small scale.

  • Carry out the test.

  • Document problems and unexpected observations.

  • Begin analysis of the data.

Step 3: Study. Set aside time to analyze the data and study the results.

  • Complete the analysis of the data.

  • Compare the data to your predictions.

  • Summarize and reflect on what was learned.

Step 4: Act. Refine the change, based on what was learned from the test.

  • Determine what modifications should be made.

  • Prepare a plan for the next test.

Limitations to Healthcare Performance Improvement

Despite the underlying logic and history of success in other business sectors, the impact of healthcare PI has too frequently been negligible or unsustainable (Blumenthal and Kilo, 1998). The gap between the potential for PI and results from actual practice has been substantial, as have the consequences of historical failures to improve outcomes. A number of factors explain this gap.

First, PI initiatives are commonly motivated by guideline-based evidence and, as such, are subject to the same limitations as the RCT data discussed above. Second, the PI-focused outcome may be only distantly or indirectly related to meaningful change in patient health or to a concrete measure of return on investment (ROI), largely because of the limits of available data and of how such initiatives are organizationally motivated and executed. For example, there may not be the organizational will to make change happen or to support change efforts through to sustainability. Even when PI is applied to an important problem (e.g., slowing progression of diabetes) in a manner that improves a chosen metric (e.g., ordering an HbA1c lab test), the effort may have only an incomplete or a delayed effect on more relevant outcomes (e.g., fewer complications, reductions in hospital admissions, or improved quality of life). Third, outcomes are usually not evaluated in real time or at frequent intervals, limiting the timeliness, ease, and speed of innovation, as well as the dynamism of the process itself. When change and the associated process unfold in slow motion, participants’ (or their authorizing leaders’) commitment may not rise to or maintain the threshold required to institutionalize new standards of practice. Fourth, validation that a PI intervention actually works may be lacking altogether or lacking in scientific or analytic rigor, leaving inference to the realm of guesswork. Fifth, when human or labor-intensive processes are required to maintain change, performance typically regresses to baseline levels as vigilance wanes. Lastly, without a broad strategic framework, PI can be perceived as the “initiative of the month,” leading to temporary improvements that are quickly lost due to inadequate hardwiring, support systems, vigilance, or PI integration across an organization.

The Geisinger Health System Experience

At Geisinger, PI is evolving to become a continuous process involving data generation, performance measurement, and analysis to transform clinical practice, mediated by iterative changes to clinical workflows by eliminating, automating, or delegating activities to meet quality and efficiency goals (see Figure 4-1).

FIGURE 4-1 Transformation infrastructure.

By way of background, Geisinger is an integrated delivery system located in central and northeastern Pennsylvania, comprising nearly 700 employed physicians across 55 clinical practice sites providing adult and pediatric primary and specialty care; 3 acute care hospitals (one closed, two open staff); several specialty hospitals; a 215,000-member health plan (accounting for approximately one-third of the Geisinger Clinic patient care revenue); and numerous other clinical services and programs. Geisinger serves a population of 2.5 million people, poorer and sicker than national benchmarks, with markedly less in- and out-migration. Organizationally, Geisinger manages through clinical service lines, each co-led by a physician-administrator pair. Strategic functions such as quality and innovation are centralized, with matrixed linkage to operational leaders. A commercial EHR platform adopted in 1995 is fully utilized across the system (Epic Systems Corporation, 2008). An integrated database consisting of EHR, financial, operational, claims, and patient satisfaction data serves as the foundation of a Clinical Decision Intelligence System (CDIS).

At Geisinger, data are increasingly viewed as a core asset. A very heavy emphasis is placed on the collection, normalization, and application of clinical, financial, operational, claims, and other data to inform, guide, measure, refine, and document the results of PI efforts. These data are combined with other inputs (e.g., evidence-based guidelines, third-party benchmarks) and leveraged via decision support applications as schematically illustrated below (see Figure 4-2).

FIGURE 4-2 Clinical decision intelligence system design.

Transforming Performance Improvement: From a Human Process to a Scalable Performance Improvement Architecture

Early Geisinger experience supports the view that a PI Architecture, including EHR data and associated data warehousing capabilities, can transform healthcare PI, as well as how an organization behaves.

Data, System, and Analytic Requirements

Most performance improvement efforts lack the rich data required to validate outcomes (i.e., test the initial hypothesis) or the integrated data infrastructure required for rapid feedback to refine or modify large-scale interventions. When available at all, data are often limited in scope and consist of simple administrative and/or manually collected elements that may not be generated as part of the routine course of care. By contrast, robust EHRs inherently provide for extensive, longitudinal data (i.e., clinical test results, vital signs, reason for order or other explicit information regarding the intent of the provider, etc.). When used in conjunction with an integrated data warehouse and normalized, searchable electronic data, EHRs can motivate a quantum shift in the PI paradigm. As a core asset, this new PI Architecture is used to ask questions, pose hypotheses, refine understanding, and ultimately develop improvement initiatives that are directly relevant to current practice with a dual focus on quality and efficiency.

Natural “experiments” are intrinsic to EHR data. Patients with essentially the same or similar disease profiles receive different care. For example, one 60-year-old diabetic patient may be prescribed drug A, while a similar diabetic patient may be prescribed drug B because of formulary or practice style differences. When repeated hundreds or thousands of times, routinely collected EHR data offer a unique data mining resource for important clinical and economic insights. When combined with health plan claims and other information, additional questions may be answered, such as: Is there a difference in drug fill/refill rates between drugs A and B identified above?
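
To make the drug A/drug B question concrete, here is a minimal sketch of how refill rates might be compared once dispensing records have been extracted; the record layout, drug names, and counts are hypothetical, not Geisinger’s actual schema.

```python
from collections import defaultdict

# Hypothetical dispensing records: (patient_id, drug, fill_number).
dispensings = [
    ("p1", "drug_A", 1), ("p1", "drug_A", 2),
    ("p2", "drug_A", 1),
    ("p3", "drug_B", 1), ("p3", "drug_B", 2), ("p3", "drug_B", 3),
    ("p4", "drug_B", 1),
]

def refill_rate(drug):
    """Share of patients initiating `drug` who returned for >= 1 refill."""
    fills_per_patient = defaultdict(int)
    for patient, d, _ in dispensings:
        if d == drug:
            fills_per_patient[patient] += 1
    initiators = len(fills_per_patient)
    refilled = sum(1 for n in fills_per_patient.values() if n >= 2)
    return refilled / initiators if initiators else float("nan")

for drug in ("drug_A", "drug_B"):
    print(f"{drug}: refill rate {refill_rate(drug):.2f}")
```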

Beyond the EHR itself, an integrated, normalized data asset simplifies the logistics and cycle time for exploration, development of an ROI argument (e.g., forecasting, simulating), planning and implementation, and performance analysis. While data aggregation, standardization, and normalization are often centralized activities, data access should be as decentralized, simple, and low cost as possible (i.e., no incremental barrier to review). Providing clinical and business end-users with direct, unrestricted access helps to motivate a cultural shift toward identifying opportunities for improving care quality and access and for reducing the cost of care. In this way, everyday clinical hunches (e.g., a patient who used drug X subsequently shows impaired renal function) can be formulated into questions (e.g., “has this phenomenon been observed in the last X hundred patients that we cared for here?”), subjected to rapid analysis, and turned into “answers.” This capability to rapidly place both the individual patient and the broader population in context is missing in nearly all healthcare delivery organizations. This frame of reference is important for physicians, who have been shown to be overly sensitized by recent patient experience (Greco and Eisenberg, 1993; Poses and Anthony, 1991).

The PI Architecture should be capable of answering previously imponderable questions such as “How many patients with chronic kidney disease do we care for?” and, in so doing, comparing the results for operationally identified patients (e.g., derived from the Problem List) versus biologically identified patients (e.g., via calculations from laboratory creatinine measurements). This level of data interrogation enables PI teams to be fully grounded in the reality of what actually happens, rather than guided by impressions, selective or hazy memories, or idyllic desires. Similarly, when using benchmarks to compare performance, hypothesis-driven data mining asks “Why are we different?,” regardless of whether that difference is positive or negative. As such, it enables even a benchmarking leader to continue to innovate and improve (Gawande, 2004). This approach parallels Berwick’s recent call to “equip the workforce to study the effects of their efforts, actively and objectively, as part of daily work” and creates a “culture of empirical review” as a critical determinant of success (Berwick, 2008).
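
As a concrete illustration of the chronic kidney disease example above, the sketch below contrasts operational (Problem List) with biological (laboratory) case finding, using the four-variable MDRD estimate of glomerular filtration rate and an eGFR threshold of 60; the patient records and field names are invented.

```python
# Minimal sketch of "operational versus biological" case finding for
# chronic kidney disease, assuming creatinine in mg/dL.

def egfr_mdrd(creatinine_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2), four-variable MDRD study equation."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

patients = [
    {"id": "p1", "creatinine": 2.1, "age": 67, "female": True,  "black": False,
     "ckd_on_problem_list": True},
    {"id": "p2", "creatinine": 1.8, "age": 72, "female": False, "black": False,
     "ckd_on_problem_list": False},   # biologically identified only
    {"id": "p3", "creatinine": 0.9, "age": 45, "female": False, "black": True,
     "ckd_on_problem_list": False},
]

operational = {p["id"] for p in patients if p["ckd_on_problem_list"]}
biological = {p["id"] for p in patients
              if egfr_mdrd(p["creatinine"], p["age"], p["female"], p["black"]) < 60}

print("Problem List only:", operational - biological)
print("Labs only (undocumented CKD):", biological - operational)
print("Both:", operational & biological)
```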

Organizational Requirements

Global and local organizational requirements are essential to institutionalizing a culture of improvement using a PI Architecture. First, board- and CEO-level support for transformation is required to support adoption. PI Architecture investment is not trivial, and several years are required to reach peak output. Stable resourcing and strategic investment are essential to achieving success. Control of, and responsibility for, the PI process (e.g., selection of issues, control of implementation, and evaluation of outcomes and ongoing feedback) must be entrusted to leaders held accountable for results. Where PI is centralized, local clinical and operational leaders must be engaged from the beginning so that they are part of, and motivated by, the opportunities inherent in the care process change. In addition, staff (or teams) should bring experience and orientation in change management, workflow analysis, health information technology (HIT) integration, and performance management. The extent to which this group has aligned goals and is free to innovate beyond usual organizational constraints, policies, and practices will dictate the breadth of possible change. Finally, passion for success is a powerful force. We believe that an entrepreneurial approach to PI, a well-established motivation in other business sectors, produces sustainable change, especially when balanced with appropriate skepticism in defining success and the “permission” to fail but with the expectation of ultimately persevering.

At Geisinger, this culture is embedded through formal links between the traditional silos of Innovation, Clinical Effectiveness, Research, and the Clinical Enterprise along with critical underlying support from Information Technology. Innovation’s role is to support a broad range of change initiatives that are designed to fundamentally challenge historical assumptions. Innovation typically reaches for large successes with a focus on knowledge transfer across the organization and on creating a reusable, scalable transformation infrastructure. Clinical Effectiveness often takes a complementary approach to change across a broader swath of the organization with a focus on process redesign and skill development. The Clinical Enterprise represents the “front line” of patient care; its “sources of pain” provide a strong indication of opportunity; its ideas, clinical hunches, and feedback on innovation are essential for success.

At Geisinger, research has a multi-year horizon. Adoption of a traditional research and development model, used in other business sectors, leads to a translation-focused process to bring value to the clinical enterprise, rather than a focus on traditional “knowledge creation.” This model leads to ongoing interactions, where research leverages the insights of Innovation, Clinical Effectiveness, the Clinical Enterprise, the ROI model, and the tactics of implementation. To some degree, PI initiatives serve as the preliminary work for research to pursue a product-oriented process for extending and scaling the PI Architecture, one that moves beyond the tactics of initiatives, relies less on organizational vigilance and individual learning, and can more easily be scaled within Geisinger and potentially exported elsewhere. The continuum of activities among collaborating divisions offers a unique potential for broader commercial application via Geisinger Ventures, which seeks to capture fundamental breakthrough technologies, techniques, or approaches to care that represent a sensible and more certain means of translating knowledge to practice (i.e., through the commercial marketplace) in a manner that cannot be achieved rapidly by publications, speaking, or collaboratives.

Building a Performance Improvement Architecture

The core feature of the PI Architecture (and associated analytics and process methodology) is to support the following key goals: (1) to rank-order PI initiatives for the largest ROI; (2) to support a simultaneous focus on quality and efficiency; (3) to require the development or refinement of reusable parts, components, and modules from each PI initiative to support future efforts; and (4) to ensure that practitioners evaluate the opportunity to eliminate any unnecessary steps in care, automate processes when safe and effective to do so, delegate care to the least-cost competent caregiver, and activate the patient as a participant in her own self-care.

Using this model, care processes selected for improvement can be identified proactively via a thoughtful rank-ordering of problems based upon ROI criteria (whether clinical, business, or both). Example ROI-based approaches include selecting the processes with outcomes farthest from benchmark performance, those with the largest impact by patient population or resource consumption, or those with the most significant variation. The absence of an ROI-based selection process often precludes the development of a “clinical business plan” that can meet the requirements of skeptical observers, an activity that is routine in other industries and whose omission makes post-intervention value determination problematic. As Berwick noted, when evaluating areas for PI intervention one must “reconsider thresholds for action on evidence” (Berwick, 2008). In this context an appropriate threshold may be far below the traditional research standard of significance, where p < 0.05. Less restrictive interpretations of data and “evidence” are commonplace in other industries, where, in the absence of better information, a p-value < 0.5 is often indicative of a reasonable idea for change, and p-values < 0.25 would routinely create sustained success. Of course, evidence at this level may not indicate a true need to change care, but rather the need to study a partially validated question in a more rigorous manner.

Once selected, attractive areas for more detailed PI intervention tend to fall into two broad categories: (1) what should we be doing systematically that we are not? and (2) what should we stop doing that is causing harm or simply not adding value? These questions are fundamentally related to whether or not some aspect of provider-delivered care (e.g., the treatment plan, flow, caregivers, timing, or setting such as inpatient versus outpatient, nursing unit versus nursing unit, etc.) improves the value of healthcare delivery. One structured way to perform this analysis is to review at least the following:

  1. Missing Elements of Care. Is something missing that seems to provide benefit (e.g., beta blockers post MI, statins for CAD)?

  2. Potential Diagnostic or Therapeutic Substitutions. Does something (or someone) seem to work better than another (e.g., breast MRI versus mammography in high-risk patients)?

  3. Excess Diagnostic or Treatment Intensity. What care patterns persist but appear to add no apparent value (e.g., plain film + CT + MRI + PET)?

  4. Flow Impediments. Does the sequence of care and/or settings seem to make a difference (e.g., weekend care, getting to the right inpatient unit)?

  5. Supply Chain Inefficiencies. Is care standardized enough to generate maximum supply chain economies and familiarity (e.g., implant devices, or the benefits of silver-impregnated versus standard Foley catheters relative to UTI)?

  6. Provider Care Team Variation. Are there different outcomes with different providers and/or provider teams (e.g., physician–physician, physician–NP, etc.)?

Box 4-2 defines an update of the PDSA cycle to reflect the availability of a PI Architecture.

Benefits of a Performance Improvement Architecture

Several important benefits from our recent experience evolving this approach are noteworthy as potentially generalizable findings.

BOX 4-2

Performance Improvement Architecture Cycle

Step 1: Document Focus. Document the current state using local data.

  • Identify settings and circumstances from which the PI is most likely to be generalizable, scalable, and sustainable; choose high-ranking opportunities where stakeholder support is evident or predictable.

  • Define current practice and variation level and measure gap between current and desired state.

  • Confirm that all needed data are available for review; at a minimum, document flow, diagnostic and treatment intensity, supply chain, accountable clinicians, and safety.

Step 2: Simulate. Confirm the hypothesis via electronic review and simulate results if the desired state is achieved.

  • Establish what benefits the minimal, maximal, and expected change would yield.

  • Translate those benefits into clinical, financial, and satisfaction metrics and targets.

  • Compare different avenues for change to allow for rank-ordering of the most likely approach to yield the largest return.

Step 3: Iterate. Try out the test on a small scale, but with a strategy for rapid escalation.

  • Carry out the test, documenting both expected and unexpected observations relative to the simulation.

  • Compare performance to previously established metrics in near real time; confirm or deny ROI.

  • Iterate for success or shut down, and move on if results are below threshold expectations.

Step 4: Accelerate. Leverage reusable parts from past initiatives and build core infrastructure for future work.

  • Always use prior components and off-the-shelf content whenever available.

  • Resist the temptation for “one-off” solutions that are inherently unscalable.

  • Ensure that solutions implemented for a given initiative are incorporated into the overall transformation architecture for future use and scalability.

Reduced Cycle Time

First, much of the historical PDSA cycle can be performed electronically. For example, opportunities for improvement can be automatically rank-ordered according to specified criteria (e.g., systematically screening care relative to evidence-based guidelines, with deviations used as objective input for ranking). Also, “clinical hunches,” comparisons of actual performance to guidelines, evaluation of new medical literature findings for local practice, and other comparisons can be tested via database queries in a matter of minutes, rather than taking days, weeks, or months using traditional human-based assessments. If designed appropriately, the impact of promising interventions can be simulated. Such simulations can provide insight into the need for change and also can help to establish the clinical-business case and anticipate the ROI from any given intervention, again with only limited resource commitments. As a result, those hypotheses that actually make it to a real test of change are much more likely to be important and to have a greater chance for success. Once tests are underway, real-time data access supports rapid change cycles, where sequentially refined hypotheses can be tested and refined in increasingly short periods of time.
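
A toy version of the automatic rank-ordering described above might score each guideline measure by the number of eligible patients times the adherence gap; the measures and numbers here are invented for illustration.

```python
# Rank improvement opportunities by the patients who would newly meet a
# guideline measure if its adherence gap were fully closed. Illustrative data.

measures = [
    {"name": "statin for CAD",       "eligible": 4200, "adherence": 0.71},
    {"name": "beta blocker post-MI", "eligible": 900,  "adherence": 0.88},
    {"name": "HbA1c monitoring",     "eligible": 6800, "adherence": 0.64},
]

def opportunity(m):
    """Eligible patients not currently meeting the measure."""
    return m["eligible"] * (1 - m["adherence"])

for m in sorted(measures, key=opportunity, reverse=True):
    print(f'{m["name"]}: {opportunity(m):.0f} patients')
```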

Increased Quality of Hypothesis Generation and Relevance of Initiatives

Second, the purview of inquiry moves beyond guidelines, encompassing questions more directly relevant to practice and the related business case, as well as what an organization should stop doing, recognizing that many components of care are embedded by tradition and offer little or no value. Importantly, metrics can be focused on measures that are directly relevant to patient health (e.g., actual low-density lipoprotein levels rather than lab orders) and downstream impact (e.g., cardiac events or visits avoided), substantially improving the salience of feedback to guide productive change that yields tangible value, holds the attention of organizational leaders, and motivates continued vigilance.

Increased Sustainability

EHRs can be used to “hard wire” process changes, to automatically track and trend important metrics after an intervention has been made, to watch for regression, and to learn of unexpected consequences (whether good or bad). Further, dashboards can serve to link PI efforts to strategic objectives and gain the attention of a much broader community to provide additional incentive to maintain gains from change.

Increased Focus on Return on Investment

Because resources are always constrained, it is critical to focus PI efforts on those interventions that can deliver the most clinical and business return. Under this framework, PI is strongly focused on ROI as evidenced by empirical data. As those data allow for more thoughtful clinical-business planning, more leaders are engaged (e.g., CFO, clinic directors), thereby enabling PI to rapidly evolve its purview to a much broader and more refined set of measurable outcomes that are likely to impact quality and efficiency in a material way.

Enhanced Research Capabilities

A PI Architecture can augment research. First, PI informs opportunities for success. Interventions that appear to be important, impactful, and sustained guide researchers toward opportunities that are likely to succeed under more complete testing via a robust study, and toward the development and testing of tools to replace labor-intensive workflows and processes. Second, the data asset can be used to quickly “confirm or deny” results from newly published trials, whether randomized or observational. Further, when performed proactively, data mining for unintended consequences of new drugs can be an important adjunct to current forms of postmarketing surveillance. Similarly, one can mine such databases (which include reason codes for medication orders) for off-label usage patterns, risks, and benefits. All of these data-driven opportunities would be enhanced even further if disparate health systems using common data standards pooled their data (or results) for such purposes. Finally, EHRs can be used to identify patients who meet criteria for research studies and to capture data elements relevant for analysis.

Summary and Conclusion

Many health systems are experimenting with new approaches to quality improvement that leverage EHR capabilities. In addition to supporting practice standardization and decision support, EHR data provide a new source of hypotheses and evidence for both PI and research. When complemented by broader data aggregation, an analysis infrastructure, and a process to create a PI Architecture, the potential is significant. While numerous limitations have yet to be overcome, the latent synergy among EHR (and other electronic) data, performance improvement, and research is both significant and exciting. The next decade of work will be transformative; this is an exciting time for health care.

ADMINISTRATIVE DATABASES IN CLINICAL EFFECTIVENESS RESEARCH

Alexander M. Walker, M.D., Dr.P.H.

Harvard School of Public Health

World Health Information Science Consultants, LLC

Background

The most exigent demands for large-scale integration of medical data have come from healthcare administrators and payors. Their needs, effective payment schemes and basic monitoring of medical resource utilization, have been susceptible to ready standardization and have provided immediate financial returns that have in turn justified the investment in the requisite data systems. The many-to-many relations between insurers and providers in the United States, in which an insurer may deal with hundreds of thousands of providers and a provider may deal with tens of insurers, have meant that the only functioning systems are highly standardized and internally consistent.

The resulting progress in the development of administrative databases stands in marked contrast to the world of electronic health records, which capture far more complex clinical and laboratory data and around which many competing local standards have grown. While the advantages for patient care of a well-functioning electronic record are evident to practitioners, the cost and complexity of these systems still pose a barrier to implementation. Implemented systems that follow different standards pose even more formidable barriers to standardization.

For all the advantages that a research-enabled electronic health record will one day offer, it is administrative databases that form the heart of large-scale population research for most medical applications. The purpose of this report is to touch on the key features of these resources.

Insurance Claims Data

The most widespread technique for distributing healthcare funds in industrial countries involves some form of fee-for-service reimbursement, in which providers of services turn to private or governmental insurance programs for payment for specified services. Insurance schemes have been advocated as the most effective way to pay for services even in societies with limited medical resources (Second International Conference on Improving Use of Medicines, 2004).

The population definition for an insurance database is contained in the eligibility file, which identifies all covered individuals and basic demographic data such as date of birth, sex, and address. This file will include dates of coverage and may contain some family information in the form of an identified primary contract holder, along with dependents.

The service claims in a typical insurance database include identities of both the provider and recipient of services, the nature and date(s) of services, and presumptive diagnoses that motivated the services. Services may be visits, diagnostic tests, or procedures. The results of laboratory procedures, as opposed to the fact of the test having been performed, are not part of the insurance claims system.

Hospitalizations are a special form of service, typically accompanied by more detailed information, including dates, procedures, and primary and secondary diagnoses. In the United States, physician charges that do not flow through the hospital billing system appear as individual provider claims during a period of hospitalization and can be used to flesh out events during hospitalization.

Pharmacy insurance claims arise for each dispensing, with identities of the pharmacy, the prescribing physician, and the recipient, and details on the product supplied: substance, manufacturer, form, dose, quantity, and days’ supply. The indication for treatment is not part of the claim and must typically be inferred from diagnoses recently assigned in conjunction with visits to the prescribing physician.
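
The inference logic is simple enough to sketch: link each dispensing to diagnoses the prescriber assigned at recent visits. The record layout, field names, and the 30-day window below are illustrative assumptions, not a standard.

```python
from datetime import date, timedelta

# Hypothetical claims: diagnoses recorded at visits, plus one dispensing.
visits = [
    {"patient": "p1", "provider": "dr9", "date": date(2008, 3, 1), "dx": "401.1"},
    {"patient": "p1", "provider": "dr9", "date": date(2008, 1, 5), "dx": "250.0"},
]
dispensing = {"patient": "p1", "prescriber": "dr9",
              "date": date(2008, 3, 4), "drug": "lisinopril"}

def inferred_indications(disp, visits, window_days=30):
    """Diagnoses assigned by the prescriber shortly before the dispensing."""
    cutoff = disp["date"] - timedelta(days=window_days)
    return [v["dx"] for v in visits
            if v["patient"] == disp["patient"]
            and v["provider"] == disp["prescriber"]
            and cutoff <= v["date"] <= disp["date"]]

print(inferred_indications(dispensing, visits))  # ['401.1']
```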

Insurers may use these data internally for administrative purposes. Researchers in the United States operate under rules set by HIPAA (the Health Insurance Portability and Accountability Act), which circumscribes their permitted activities in order to safeguard individuals’ medical privacy. Under HIPAA, personally identifying data, termed PHI (protected health information), include both obvious identifiers, such as name and address, and data from which persons might be identifiable with the supplementary use of other publicly available information; this includes, for example, exact date of birth. HIPAA provides standards for creating “deidentified” data, which can be exchanged and analyzed without further oversight. If PHI is required, researchers must obtain the permission of a Privacy Board, which is typically constituted under an Institutional Review Board (IRB). The researcher needs to provide details of the methods by which the minimum necessary amount of PHI will be employed for the minimum time required, and by which that PHI will be safeguarded during its period of use.
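
As a rough sketch of what deidentification can look like in practice, the function below applies two transformations of the kind HIPAA’s Safe Harbor standard describes, reducing dates to years and truncating ZIP codes to three digits; the record layout and the list of sparsely populated ZIP prefixes are illustrative, and this is not a compliance tool.

```python
from datetime import date

# Illustrative low-population ZIP3 prefixes that must be masked entirely.
SPARSE_ZIP3 = {"036", "059", "102"}

def deidentify(record):
    """Drop direct identifiers; keep only generalized quasi-identifiers."""
    zip3 = record["zip"][:3]
    return {
        "birth_year": record["birth_date"].year,         # dates -> year only
        "zip3": "000" if zip3 in SPARSE_ZIP3 else zip3,  # ZIP -> 3 digits
        "sex": record["sex"],
    }

raw = {"name": "Jane Q. Patient", "birth_date": date(1950, 7, 4),
       "zip": "03601", "sex": "F"}
print(deidentify(raw))  # {'birth_year': 1950, 'zip3': '000', 'sex': 'F'}
```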

Currently available insurance claims databases with full information range in size up to about 20 million persons cross-sectionally, with substantially larger numbers of cumulative “lives” and for data that omit one or more of the elements above. U.S. Medicare data, not yet widely available, include claims information on over 40 million persons over the age of 65, with drug data from 2007 forward.

Though well suited to studies of health services utilization, insurance data serve clinical research only with substantial further processing, and with caution even at that. Drug use is inferred from dispensing data. Medical conditions must be inferred from patterns of claims for services, treatments, and diagnostic procedures. Thus a recently used algorithm for venous thromboembolism included the occurrence of a suitable diagnosis associated with a physician visit, emergency room, or hospital claim; performance of an appropriate imaging procedure; and at least two dispensings of an anticoagulant (Jick et al., 2007; Sands et al., 2006). Algorithms for more subtle conditions may be more complex still. Conditions for which the pattern of care attendant on a “rule out” diagnosis resembles that for a confirmed diagnosis may be impossible to identify with any specificity.
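
A schematic implementation of a claims algorithm of this kind, here loosely patterned on the venous thromboembolism example, might look like the sketch below; the diagnosis, imaging, and drug code sets and the 30-day window are invented stand-ins, not those of the cited studies.

```python
from datetime import date, timedelta

VTE_DX = {"453.40", "415.19"}          # e.g., DVT, pulmonary embolism (ICD-9)
IMAGING = {"duplex_ultrasound", "ct_angiogram"}
ANTICOAGULANTS = {"warfarin", "enoxaparin"}

def is_vte_case(claims, window=timedelta(days=30)):
    """Diagnosis claim + nearby imaging + >= 2 anticoagulant dispensings."""
    dx_dates = [c["date"] for c in claims
                if c["type"] in {"visit", "er", "hospital"} and c.get("dx") in VTE_DX]
    if not dx_dates:
        return False
    t0 = min(dx_dates)
    imaging_ok = any(c["type"] == "procedure" and c["code"] in IMAGING
                     and abs((c["date"] - t0).days) <= window.days
                     for c in claims)
    fills = [c for c in claims if c["type"] == "pharmacy"
             and c["drug"] in ANTICOAGULANTS and c["date"] >= t0]
    return imaging_ok and len(fills) >= 2

claims = [
    {"type": "er", "dx": "415.19", "date": date(2007, 5, 1)},
    {"type": "procedure", "code": "ct_angiogram", "date": date(2007, 5, 1)},
    {"type": "pharmacy", "drug": "warfarin", "date": date(2007, 5, 2)},
    {"type": "pharmacy", "drug": "warfarin", "date": date(2007, 6, 1)},
]
print(is_vte_case(claims))  # True
```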

The advantages of pure insurance claims data include easy access to data on very large numbers of individuals, detailed drug information, and the absence of reporting biases related to knowledge of exposure or outcome. There are substantial drawbacks. There is a lag in the creation of research-ready insurance files that runs from months to a year. The lack of medical record validation means that crucial cases may be missed and that others may be incorrectly ascribed to a condition under study. For nonemergency conditions, it may be very difficult to pinpoint the date of onset, and the distinction between recurring, recrudescent, and new-onset conditions may be elusive. Apart from special circumstances involving serious acute outcomes and drug or vaccine exposures, insurance claims data are typically insufficient for clinical research purposes.

Augmented Claims Data

Research groups within the insurance organizations that generate data have begun to systematically augment these files. Increasingly, insurers are negotiating arrangements with independent laboratories under which analyte results must be submitted with the claim for reimbursement. These are outpatient files and do not represent complete laboratory histories. Since the arrangements are made between the insurer and the laboratory, an individual’s record will contain repeated measures only to the extent that he or she returns to the same site for testing. Such data have been used, for example, to relate cardiovascular disease to severe anemia (Walker et al., 2006).

Marketing databases contain self-reported data on ethnicity and income, which have been linked to insurance data. In the United States, files are also available that link postal code information to detailed census data on income and ethnicity.

Far more important than laboratory values and income has been the ability to return to providers and patients for direct information. With Privacy Board approval, researchers can approach physicians and institutions holding patients’ medical records to verify diagnoses and treatments, and to eke out information on lifestyle, chronic risk factors, and family history that is not available in the insurance claims history. With IRB approval, they can approach patients themselves for information, biometric data, and even tissue samples. These studies permit analyses carried out with a reasonable certainty that the underlying elements are correct.

A good example of the multifaceted work that augmented claims databases permit is an FDA-mandated program of surveillance of the oral contraceptive Yasmin. The progestational agent in Yasmin is drospirenone, which is functionally related to the potassium-sparing diuretic spironolactone. Though no problems of potassium handling had been seen in clinical trials, the analogy was sufficient for the FDA to have the sponsor initiate a program that (1) followed hospitalization and mortality in over 20,000 Yasmin initiators and a comparison group twice as large; (2) monitored contraindicated dispensing to patients with adrenal, renal, and hepatic dysfunction; (3) quantified the use of potassium monitoring in certain indicated patients; and (4) ascertained the outcomes of breakthrough pregnancies. Chart reviews, physician interviews, and even interventions with doctors prescribing to contraindicated patients rounded out a clinically useful surveillance program (Eng et al., 2008; Mona Eng et al., 2007; Seeger et al., 2007).

Enhanced claims studies retain the advantages of insurance claims databases, namely large numbers of subjects, detailed drug exposure information, and lack of reporting bias, and add to these much greater confidence in the nature of the events being studied and knowledge of their timing. Like insurance claims studies, research programs in augmented databases may still be hindered by a lag in the adjudication of claims on the order of months to a year. These data resources serve well for observational studies of outcomes that are highly likely to result in medical care.

Automated and Quasi-Automated Database Review

Many of the research and surveillance activities that take place in insurance files take advantage of repeatedly implemented computer routines, which offer the hope that some of these programs could be automated as decision support tools for both clinical safety and efficacy.

The core idea for creating such tools is to simplify the welter of claims data into manageable units. In part this can come about through routine implementation of algorithms, such as the one described above for venous thromboembolism, as standard units for off-the-shelf programming or routine tabulation. A number of data holders have taken the concept even further with “episode groupers,” programs that recast a broad range of related claims into single clinical entities such as, for example, “community-acquired pneumonia.”
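
A toy episode grouper can convey the idea: claims sharing a clinical category and falling within a fixed gap of one another are rolled into one episode. The category map and 21-day gap are invented for illustration.

```python
from datetime import date, timedelta

CATEGORY = {"486": "community-acquired pneumonia",   # ICD-9 -> episode label
            "462": "pharyngitis"}

def group_episodes(claims, gap=timedelta(days=21)):
    """Roll same-category claims within `gap` days into single episodes."""
    episodes = []
    for claim in sorted(claims, key=lambda c: c["date"]):
        label = CATEGORY.get(claim["dx"])
        if label is None:
            continue
        open_ep = next((e for e in reversed(episodes)
                        if e["label"] == label
                        and claim["date"] - e["end"] <= gap), None)
        if open_ep:
            open_ep["end"] = claim["date"]
            open_ep["claims"].append(claim)
        else:
            episodes.append({"label": label, "start": claim["date"],
                             "end": claim["date"], "claims": [claim]})
    return episodes

claims = [{"dx": "486", "date": date(2007, 2, d)} for d in (1, 5, 15)]
print(len(group_episodes(claims)))  # 1 pneumonia episode, not 3 claims
```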

A second element of routine surveillance computer programs is a standard way of handling the confounding that is so prevalent in observational studies of the outcomes of medical regimens. One approach is to distill the wealth of data represented in insurance claims into a multivariate prediction of therapeutic choice, called a propensity score. These models can be rich because they draw on thousands or tens of thousands of observations and can incorporate claims history items that collectively represent strong proxies for confounding factors (Seeger et al., 2005). Propensity-matched groups can be created routinely in advance for new, commonly used therapies, or scores can be calculated and stored with individual records for future use.
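
The sketch below illustrates the propensity approach on synthetic data: a logistic model predicts treatment choice from covariates, and treated patients are then greedily matched to untreated patients with the nearest score. A production version would draw on far richer claims-history covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                             # claims-history covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment choice

# Propensity score: predicted probability of receiving the treatment.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbor matching on the propensity score.
untreated_idx = [i for i in range(n) if treated[i] == 0]
pairs = []
for i in np.flatnonzero(treated == 1):
    j = min(untreated_idx, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    untreated_idx.remove(j)
    if not untreated_idx:
        break
print(f"matched {len(pairs)} treated/untreated pairs")
```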

The final sine qua non of automated surveillance is a plan for dealing with the multiplicity of outcomes. Some investigators have proposed restricting attention to a smallish number of disease outcomes previously associated with drug effects, such as hepatitis, rashes, or ocular toxicity. This may be a strategy with little marginal gain, as these will be precisely the drug outcomes to which clinicians are most sensitized and for which they are most likely to report adverse effects already. Another option is to apply a formal Bonferroni correction to the thousands of possible combinations of drugs and outcomes being tested, much in the same way that whole-genome scans are subjected to radical statistical attenuation to reduce false positives. This approach has the drawback of curtailing power to detect true associations in proportion to the reduction in the risk of false positives (Walker and Wise, 2002).
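
The arithmetic behind that trade-off is easy to show; the counts below are invented.

```python
# Why Bonferroni is punishing at surveillance scale: the per-test threshold
# shrinks in proportion to the number of drug-outcome pairs examined.
n_drugs, n_outcomes = 500, 200
n_tests = n_drugs * n_outcomes          # 100,000 drug-outcome pairs
alpha_family = 0.05
alpha_per_test = alpha_family / n_tests
print(f"per-test threshold: p < {alpha_per_test:.1e}")  # p < 5.0e-07
```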

A more productive approach to multiplicity in large databases is to apply both statistical and medical logic to the problem of pruning false-positive results. Does the timing look right? Is the outcome plausible in light of the mechanism of action, or perhaps the route of administration, of a drug? Are there analogies to be drawn from the experience with similar products?

Decision support tools do not have a promising history. Perhaps the technology for creating them tends to lag decision makers' needs, or it may be that the enthusiasm required to generate development funding inevitably raises expectations beyond what technologists can reasonably achieve. It may be that comprehensive indexing, retrieval, and counting functions, rather than sophisticated analysis, are the proper goal of massive, automated data integration.

Distributed Processing

Part of the push for greater sensitivity and speed in drug safety surveillance is taking the form of programs to include large numbers of automated databases in common surveillance mechanisms. At the level of database amalgamation, the large U.S. insurance databases would seem to be ideal candidates, as they already operate under common rules for coding and have similar structures, imposed by the common format of the component data items.

There are, however, major institutional barriers to having holders of large datasets contribute them to a common pool. Reluctance stems in part from giving up the ability to approve the analyses done in one's own data; in addition, the details of pricing and reimbursement contained in the data may be considered sensitive and proprietary.

The most promising solution to both the computational and the institutional obstacles to very large database research may lie in distributed processing, discussed by Richard Platt in more detail elsewhere in this volume. Under distributed processing models, data holders create standard views of their databases, or even standardized extracts. A central office then distributes computer code that pulls key information from each database for transmittal back to a statistical coordinating center, which assembles the elements into a common analysis.
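A minimal sketch of this flow, with the sites, records, and query all invented for illustration: the central office's extraction code runs identically at each data holder, and only summary counts travel back for assembly by the coordinating center.

```python
# Hypothetical site databases; in practice these would be standardized views
# or extracts held behind each data holder's firewall.
SITES = {
    "plan_a": [{"drug": "X", "event": True}, {"drug": "X", "event": False}],
    "plan_b": [{"drug": "X", "event": True}, {"drug": "Y", "event": False}],
}

def site_extract(records, drug):
    """Code distributed by the central office; executes inside each site."""
    exposed = [r for r in records if r["drug"] == drug]
    return {"n_exposed": len(exposed),
            "n_events": sum(r["event"] for r in exposed)}

# The statistical coordinating center assembles the per-site summaries
# without ever receiving person-level records.
summaries = {site: site_extract(data, "X") for site, data in SITES.items()}
pooled = {k: sum(s[k] for s in summaries.values())
          for k in ("n_exposed", "n_events")}
print(summaries)
print("pooled:", pooled)
```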

A Note of Caution

Observational data, no matter how assembled, require special care in clinical effectiveness research. The likelihood that persons undergoing the compared therapies will differ with respect to fundamental predictors of outcome is large and needs to be addressed head-on. A growing family of research methodologies, including propensity techniques (mentioned above), proxy variable analysis, and instrumental variables, is the object of vigorous methodological research (Schneeweiss, 2007). While these necessary efforts continue, science-based skepticism of nonrandomized studies remains highly appropriate, even though unthinking rejection may properly belong to the past.

CLINICAL EFFECTIVENESS RESEARCH: THE PROMISE OF REGISTRIES

Alan J. Moskowitz, M.D., and Annetine C. Gelijns, Ph.D.

Mount Sinai School of Medicine

Introduction

In comparison to other sectors of the economy, modern health care is a technologically highly innovative field. New drugs, devices, procedures, and behavioral interventions continuously emerge from the research and development (R&D) pipeline and then become established in clinical practice. The R&D process in medicine generally involves “premarketing” clinical trials, particularly in the case of drugs, biological products, and devices.


The development process of these new technologies and procedures, however, does not end with their introduction and adoption into practice. Over time, as these interventions diffuse into widespread use, the medical profession tends to further modify and extend their application by finding new populations, new indications, and long-term effects. These dynamic patterns of adaptation and evolution underscore the importance of measuring the health and economic outcomes of clinical interventions in everyday practice and drive the renewed interest in developing a robust clinical effectiveness research enterprise.

There are various ways of measuring the clinical effectiveness of diagnostic and therapeutic interventions, including so-called pragmatic randomized trials, large administrative dataset analyses, and observational studies using clinical registries (Gliklich and Dreyer, 2007; Tunis et al., 2003). In this paper, we focus on the role and potential of registries in capturing information about “real-world” health and economic outcomes. We also highlight their potential value for assessing quality of care, for instance through studies of risk-adjusted volume–outcome relationships. Finally, we address an often under-examined benefit of clinical registries: their potential to accumulate information that, in turn, can increase the efficiency of randomized clinical trials, and of premarketing studies in general. As such, clinical registries can be an important tool to help guide decision making for patient care and health policy.

Obviously, clinical registries have their methodological and practical vulnerabilities, and we will review some analytical, organizational, and financial measures to strengthen them. In particular, we will discuss the incentives of stakeholders to support these data collection efforts and new models of public-private partnerships. But first, we will provide a more in-depth rationale for investing in clinical registries, most of which can be found in the dynamics of the medical innovation process itself.

Importance of Downstream Innovation and Learning

Over time, we have seen a move toward more rigorous and well-controlled premarketing studies for all therapeutic and diagnostic modalities. Despite this move, practical constraints limit how much we can learn in the premarketing setting. Randomized trials involve a sampling process and typically minimize heterogeneity of the target population to facilitate the efficient testing of hypotheses. Clinical trials have limited timeframes and usually are underpowered for secondary end-points. Moreover, the skill of the participating centers may be specialized, raising questions about the generalizability of trial results to a broader set of healthcare institutions and practitioners. Regulatory (premarket approval) and clinical decisions, therefore, are made in the context of uncertainty and limited information about the ultimate outcomes of an intervention. Dispelling such uncertainties requires measuring outcomes in widespread clinical use (Gelijns et al., 2005).

The focus on general practice allows us to capture outcomes from a broader set of providers and to detect long-term and low-frequency events, such as serious adverse events. In addition to the spread of a new technology throughout the healthcare system, and its attendant change in outcomes, clinical practice is the locus of much downstream learning and innovation. First, after a new technology is introduced into practice, the medical profession typically expands and shapes the targeted patient population within a particular disease category. A case in point is coronary artery bypass grafting (CABG) surgery: only 4 percent of the patients treated with such surgery a decade after its introduction would have met the eligibility criteria of the trials that determined its initial value (Hlatky et al., 1984). These trials excluded the elderly, women, and patients with a range of co-morbidities, all of whom are recipients of CABG surgery today.

Second, the process of postmarketing innovation also includes the discovery of totally new, and often unexpected, indications of use. The history of pharmaceutical innovation is replete with such discoveries (see Table 4-1). A case in point is the alpha blockers, which were first introduced for hypertension and only 20 years later were found to be important agents in the treatment of benign prostatic hyperplasia. We found that the discovery of such new indications of use is an important public health and economic phenomenon, accounting for nearly half of the overall market for blockbuster drugs (Gelijns et al., 1998).

A third dimension of downstream learning is that physicians gain further know-how about integrating a technology into the overall management of particular patients.

TABLE 4-1 Original and New Indications for Pharmaceuticals

Drug | Original Indications | New Indications
Beta-blockers | Angina pectoris, arrhythmias | Hypertension, anxiety, migraine headaches
Aspirin | Pain | Stroke, coronary artery disease
Anticonvulsants | Seizure disorders | Mood stabilization
Alpha blockers | Hypertension | Benign prostatic hyperplasia
RU-486 | Abortive agent | Endometriosis, fibroid tumors, benign brain tumors
Fluoxetine (Prozac) | Depression | Bulimia, obsessive-compulsive disorder
Thalidomide | Anti-emetic and tranquilizer | Leprosy; graft-vs.-host disease, Behçet's disease, AIDS, ulcers


Consider, for example, left ventricular assist devices (LVADs). These devices were approved by the FDA in 1998 to support end-stage heart failure patients awaiting cardiac transplant, as a bridge to transplantation. Subsequently, LVADs were approved for marketing by the FDA in 2002 and for reimbursement by Medicare in 2003 for patients who were ineligible for transplantation, an indication also referred to as “destination therapy,” or intended lifelong implantation of the LVAD. Whereas LVAD destination therapy was shown to provide a clear survival, functional status, and quality-of-life benefit over medical management, LVADs were plagued by significant serious adverse events, especially bleeding, infections, and thromboembolic events (Rose et al., 2001). Following approval of the device, the expanding experience of clinicians further highlighted shortcomings in its use and safety, which led to subsequent incremental device improvements by the manufacturing community. At the same time, clinicians improved their management of LVAD patients by modifying the operative procedure, developing new ways to prevent driveline infections, and changing anticoagulation regimens, among other changes. These changes in patient management reduced the adverse event profile associated with the therapy, and beyond changing clinical outcomes, they affected economic outcomes as well. Over time, for example, there has been a 25 percent reduction in the length of stay for the implant hospitalization, the most costly part of the care process, from an average of 44 days in the pivotal FDA trial (with a mean cost of $210,187) to 33 days within 3 years of dissemination (Miller et al., 2006; Oz et al., 2003). The dissemination to the broader healthcare system, and the changes in technologies, patients, and management techniques over time, argue for ongoing monitoring of health outcomes.

What Can We Learn from Registries?

Clinical registries, as mentioned, are an important means to capture use and outcomes in everyday practice. A recent Agency for Healthcare Research and Quality (AHRQ) report defined registries as an “organized system using observational study methods to collect uniform data to evaluate specified outcomes for a population defined by a particular disease, condition or exposure, and that serves a predetermined scientific, clinical or policy purpose” (Gliklich and Dreyer, 2007). Table 4-2 depicts some registries and their different objectives.

An important objective of registries is to collect data on long-term outcomes and rare adverse events. This is especially the case where outcomes and adverse events take a long time to manifest themselves; a dramatic example can be found in diethylstilbestrol (DES), where clear cell carcinoma of the vagina appeared only in daughters of the women taking the drug to prevent premature birth. The realization of its side effects subsequently led to a registry for those exposed to DES.


TABLE 4-2 Existing Registry Content and Sponsor Descriptions

Name | Content | Sponsor
INTERMACS | National registry of patients receiving mechanical circulatory support device therapy to treat advanced heart failure. (Membership required for Medicare clinical site approval) | Joint effort: National Heart, Lung, and Blood Institute (NHLBI), Centers for Medicare & Medicaid Services (CMS), and FDA
Cardiac Surgery Reporting System | Detailed information on all CABG surgeries performed in New York State for tracking provider performance. (Reporting mandated for all hospitals in New York State performing CABG) | New York State Department of Health
ICD Registry (Implantable Cardioverter Defibrillator Registry) | Detailed information on implantable cardioverter defibrillator implantations. (Meets CMS coverage with evidence development policy) | The Heart Rhythm Society & American College of Cardiology Foundation
ICGG (International Collaborative Gaucher Group) | Information on clinical characteristics, natural history, and long-term treatment outcomes of patients with Gaucher disease, a rare disorder. | Genzyme Corporation
CASES-PMS (Carotid Artery Stenting with Emboli Protection Surveillance Post-Marketing Study) | Evaluation of outcomes of carotid artery stenting in the periapproval setting. | Cordis Corporation
Alpha-1 Antitrypsin Deficiency Research Registry | Registry of patients with alpha-1 antitrypsin deficiency for purposes of recruiting them to clinical trials. | Alpha-1 Foundation


Another important use of registries is to gather information on the outcomes achieved as a technology spreads to a wide range of practitioners and institutions. As such, registries can measure the quality of care provided. Administrative datasets, which are less costly in terms of data collection, also lend themselves to this purpose.

FIGURE 4-3 Survival after open ruptured AAA by hospital volume quintiles (1995–2004, Medicare, n = 41,969).

SOURCE: Reprinted from the Journal of Vascular Surgery, Vol. 48/No. 5, Egorova et al. 2008. National outcomes for the treatment of ruptured abdominal aortic aneurysm: Comparison of open versus endovascular repairs, pp. 1092-1100, with permission from Elsevier.

TABLE 4-3 Endovascular Repair AAA Patients (2000–2004, Medicare, n = 39,815)

Risk Factor | Parameter | Odds Ratio and 95% CL | P-Value
Renal Failure w/ Dialysis | 1.95 | 7.06 [5.23–9.53] | <.0001
LE Ischemia | 1.27 | 3.55 [2.65–4.75] | <.0001
Age ≥85 | 1.13 | 3.10 [1.57–2.37] | <.0001
Liver Disease | 0.93 | 2.52 [1.54–4.12] | 0.0002
CHF | 0.80 | 2.23 [1.89–2.64] | <.0001
Renal Failure w/o Dialysis | 0.65 | 1.91 [1.45–2.51] | <.0001
Age 80–84 | 0.65 | 1.92 [1.56–2.36] | <.0001
Female | 0.52 | 1.68 [1.42–1.99] | <.0001
Neurological | 0.45 | 1.59 [1.29–1.94] | 0.0001
Chronic Pulmonary | 0.45 | 1.57 [1.35–1.83] | <.0001
Hospital Annual Vol <7 | 0.37 | 1.45 [1.18–1.80] | 0.0005
Age 75–79 | 0.34 | 1.40 [1.14–1.71] | 0.001
Surgeon EVAR Vol <3 | 0.26 | 1.30 [1.04–1.62] | 0.002

NOTE: AAA = abdominal aortic aneurysm; CL = confidence limit.

SOURCE: Reprinted from the Journal of Vascular Surgery, Vol. 50/Issue 6, Egorova, Giacovelli et al. 2009. Defining high-risk patients for endovascular aneurysm repair, pp. 1271-1279, with permission from Elsevier.


Using the Medicare dataset, for example, we found a significant volume–outcome relationship for open repair in about 42,000 ruptured abdominal aortic aneurysm (AAA) patients treated between 1995 and 2004 (Figure 4-3; Egorova et al., 2008). AAA patients now increasingly receive an endovascular repair, which was approved for reimbursement in 2000. The same volume–outcome relationship holds for high-risk AAA patients treated by an endovascular approach between 2000 and 2006 (Table 4-3). Again, low volume, less than 7 procedures per year, is an independent predictor of mortality, increasing risk by 45 percent (Egorova et al., 2009). In comparison to administrative datasets, however, registries can offer the clinical detail needed to create richer statistical models that better characterize patient risk factors and process-of-care variables to predict outcomes. To expand on our aneurysm case, a clinical registry would have been able to provide important information about the anatomical features of the aneurysm, which are not captured in administrative datasets and yet may have an important influence on outcomes. Moreover, in administrative datasets it is often hard to distinguish between baseline co-morbidities and adverse events (e.g., myocardial infarction or heart failure) during the hospitalization of interest. Registries do not have this problem, and if they capture the whole population they are an important tool for measuring quality of care. If registries are used to measure quality of care among providers, then it is obviously important that they appropriately adjust for differences in the risk of the populations among these providers, and risk-adjustment techniques are improving (see below).

Just as registries are able to capture a broadening of providers, they also can capture the use and outcomes of a technology in a broader set of patients. To return to the LVAD case, the pivotal trial for destination therapy demonstrated a significant survival and quality-of-life benefit of the HeartMate (HM) XVE LVAD over optimal medical management. In fact, Kaplan-Meier survival analysis showed a 48 percent reduction in the hazard of all-cause mortality in the LVAD group (hazard ratio = 0.52; 0.34–0.78; p = 0.001). In the 2 years following CMS approval of reimbursement (2003), an analysis of an industry-sponsored postmarketing registry showed that the overall survival rate of LVAD patients remained similar to that in the trial. However, a multivariable regression analysis of the larger population captured by the registry (n = 262) identified baseline risk factors, such as poor nutrition, hematological abnormalities, and markers of end-organ dysfunction, that distinguish patient risk groups (Lietz et al., 2007). Stratification of destination therapy candidates into low, medium, high, and very high risk on the basis of a risk score corresponded with very different 1-year survival rates (81 percent, 62 percent, 28 percent, and 11 percent, respectively; see Figure 4-4). The broader experience of clinical registries, as such, can provide important information to stratify patients on the basis of baseline risk factors and thereby help to refine patient selection criteria.

FIGURE 4-4 Probability of survival after LVAD implantation.

Finally, an important objective for clinical registries is their ability to provide comparative effectiveness information. In New York State, for example, registries exist for all patients undergoing CABG surgery (Cardiac Surgery Reporting System) or interventional cardiac procedures (Percutaneous Coronary Intervention Reporting System). Over time, numerous randomized trials have compared CABG surgery to percutaneous transluminal coronary angioplasty (PTCA). However, both procedures have been characterized by a high level of ongoing incremental change (e.g., most trials pre-dated the use of stents) as well as ongoing changes in patient selection criteria, raising questions about the clinical effectiveness of these approaches in particular patient groups. An analysis of nearly 60,000 patients captured by the above-mentioned New York State registries showed that for patients with two or more diseased coronary arteries, CABG is associated with higher adjusted rates of survival than stenting (Hannan et al., 2005).

Strengthening Registries

Enhancing the value of registries for clinical effectiveness research requires obtaining “trial quality” data at low cost and low burden, and here we will review some opportunities for strengthening data elements and data collection.

Target Population

The target patient population needs to be clearly defined, and data should capture its characteristics in terms of medical history and severity of illness. In the case of LVADs, for example, the National Institutes of Health (NIH) provided financial support for the creation of a registry, called INTERMACS, with close involvement of CMS, the FDA, the clinical community, and industry. This registry targets patients who receive durable mechanical circulatory assist devices (either for bridge to transplantation or destination therapy). The data elements were designed to capture important baseline characteristics of LVAD patients and have resulted in patient profiles that are useful for clinical communication and treatment planning and that correlate with mortality risk (INTERMACS, 2008). Even though registries are more apt than randomized efficacy trials to capture broad populations, there is always a risk that patients are entered selectively. Statewide hospital discharge datasets (such as SPARCS in New York State) may offer a means for monitoring the completeness of patient population capture. Linking payment for patient care to data entry is another way to improve capture. By participating in INTERMACS, for example, clinical centers can meet CMS and Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) reporting requirements, necessary for certification, which stipulate that centers submit data to a nationally audited registry that tracks lifetime outcomes of all destination LVAD patients (INTERMACS, 2008).

Outcomes

In terms of outcome measures, mortality is a relatively unambiguous end-point, but adverse events (AEs) require standardized definitions that are not unique to a registry but are more generally accepted in the clinical community. INTERMACS, for example, offered much-needed standardization of AE definitions and facilitated comparisons of different circulatory support devices, which until recently defined important events, including stroke and major bleeding, differently. Registries can improve data quality by adjudicating adverse events and implementing a monitoring process to ensure data integrity. Functional status and quality of life are critical end-point measures but are difficult to capture and analyze longitudinally, even in randomized trials. As with randomized trials, using instruments that are self-administered or administered by phone, such as the Rankin scale in stroke, can increase feasibility. For some diseases, such as heart failure, there is a correlation between patient-derived measures of functional status and hospitalizations, which facilitates using hospitalizations as a proxy measure.

Control Group

Critical for measuring comparative effectiveness is defining the control group, which optimally will be internal to the registry being analyzed. While device registries, for example, may facilitate comparing the effectiveness of alternative devices, such registries are unlikely to provide the medical therapy control group needed for evaluating new indications for device use. Such questions are better addressed in a broader disease-based registry; for example, defining the appropriate role of LVADs in managing slightly less sick heart failure patients would require a comparison to patients receiving optimal medical management, and thus expansion of the LVAD registry into an overall heart failure registry. One weakness of observational (i.e., nonrandomized) studies is that clinical judgment is the basis for treatment assignment, so the clinical characteristics of the comparison groups may differ substantially, affecting the ability to make fair comparisons. Rigorous techniques to adjust for these differences, such as propensity score-based analyses, have become more common over time. However, building such adjustment models requires an understanding of the prognostic factors that affect treatment outcomes, and with newer forms of treatment this is not always the case. If there is very rapid technological change, evolution toward major new patient populations, or little know-how about prognostic factors, observational studies may no longer be sufficient and randomized trials may be in order.

Data Collection Burden and Cost

Improving the efficiency of data collection for registries depends crucially on advances in the use of informatics. With the growth and improvement of electronic health records, institutions gain the capability of automated transfer of patient, process-of-care, and outcome data into registries, which may address some of the data collection and cost burden. In the same manner, administrative datasets can be linked to patient records, which would improve their usefulness for clinical effectiveness studies.

The Role of Registries in Improving the Clinical Trials Process

An under-examined benefit of registries may be their potential to increase the efficiency of conducting RCTs. First, registry data can provide a prior estimate of the success distribution in the control group that gets updated by prospectively collected data in a randomized trial (through Bayesian analysis), or concurrent control data can be directly pooled with randomized data. The benefit of either approach could be to allow a higher likelihood of randomization into the experimental group, say a 3:1 or 4:1 randomization ratio (Neaton et al., 2007). This is especially important when there are strong physician and patient preferences for an experimental therapy, which is often the case with major surgical interventions for life-threatening diseases and may constitute a major deterrent to enrollment in a randomized trial.
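To illustrate the Bayesian use of registry controls, the following minimal Beta-Binomial sketch encodes registry experience as a down-weighted prior on the control success rate and updates it with a small randomized control arm. All counts and the discount weight are invented; the simple down-weighting stands in for more formal approaches to partial exchangeability between registry and trial populations.

```python
# Hypothetical registry experience with the control therapy.
registry_successes, registry_n = 120, 200
weight = 0.5  # discount on registry information (assumed, not a published value)

# Beta prior built from the down-weighted registry counts (plus a uniform base).
prior_a = 1 + weight * registry_successes
prior_b = 1 + weight * (registry_n - registry_successes)

# Hypothetical control arm of a trial randomized 4:1 toward the experimental arm,
# leaving only a small concurrent control group.
trial_successes, trial_n = 14, 25

# Conjugate Beta-Binomial update.
post_a = prior_a + trial_successes
post_b = prior_b + (trial_n - trial_successes)

posterior_mean = post_a / (post_a + post_b)
print(f"Posterior mean control success rate: {posterior_mean:.3f}")
```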

Registries also may offer a means to eliminate the need for collecting a new control group altogether, which has relevance to premarketing efficacy trials for orphan diseases and small populations. Here registries can provide an empirically derived performance goal or objective performance criterion to facilitate a single-arm study. The use of LVADs for bridge to transplantation, for instance, is a so-called orphan indication, with around 500 patients implanted in the United States annually. INTERMACS is now providing data to establish a performance goal, in terms of “survival to transplantation or being alive at 180 days and listed for transplantation,” for newer generation devices that are seeking approval for use in Bridge-to-Transplant (BTT) patients. More recently, INTERMACS has been the source for providing a matched control arm.
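A single-arm comparison against a registry-derived performance goal can be as simple as an exact binomial test, as in the sketch below. The 70 percent goal and the trial counts are hypothetical illustrations, not INTERMACS values.

```python
from scipy.stats import binomtest

performance_goal = 0.70   # assumed registry-derived 180-day success rate
successes, n = 118, 150   # hypothetical single-arm trial results

# One-sided exact test: did the observed success proportion exceed the goal?
result = binomtest(successes, n, p=performance_goal, alternative="greater")
print(f"Observed rate {successes / n:.2f}, p = {result.pvalue:.4f} "
      "for exceeding the performance goal")
```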

Finally, the existence of a robust postmarketing infrastructure can balance the acceleration of premarketing trials. This is especially important when drugs or devices are approved under the FDA's fast-track mechanism. For example, of the 60 cancer drugs approved between 1995 and 2004, a third received accelerated approval based on surrogate measures of clinical benefit (Roberts and Chabner, 2004).

Concluding Observations

In conclusion, the often underappreciated dynamics of medical innovation, in which much innovation and downstream learning takes place in actual clinical practice itself, argue for capturing the changing outcomes throughout the lifecycle of medical interventions. Registries offer the means to do so, and recently new opportunities for addressing their traditional weaknesses have emerged in the realms of informatics, analytical techniques, and new models of financing. With the expansion and enhancement of electronic health records comes the possibility of using the clinical encounter to directly populate research registries, decreasing the burden of primary data collection. Moreover, efforts to address the traditional weaknesses of observational registry-based studies have led to the increased use of propensity score techniques to adjust for baseline differences between nonrandomized comparison groups. Finally, an important issue concerns the financial support for clinical registries. Traditionally, registries have been supported either by the public and not-for-profit sectors, such as foundations, or by the private sector (especially the device, biotech, and drug industries). The information generated by registries, in many respects, can be characterized as a public good. Public–private partnerships offer an interesting new model for registry support. A case in point is the recently created INTERMACS registry for LVAD therapy, which brings together three major government agencies: the NIH, which provides funding for an independent coordinating center as well as oversight, and Medicare and the FDA, which are involved in planning and oversight as well. The participating hospitals provide in-kind support for data collection and analysis efforts. Industrial firms are heavily involved in the design and implementation of the registry, and the expectation is that over time these firms will assume financial responsibility for it.

There has been a long and often heated debate about the value of randomized versus observational, registry-based studies. In this paper, we argue that registry data are not only complementary to randomized trials but also, in some circumstances, an alternative to them. Moreover, clinical registries offer many untapped opportunities for improving the efficiency of the randomized trials enterprise itself, for premarketing trials as well as so-called pragmatic trials of diagnostic or treatment modalities. Registries, as such, are, and will remain, critical to the conduct of clinical effectiveness research, particularly if we capitalize on emerging opportunities in the informatics, analytical, and financial realms.

DISTRIBUTED DATA NETWORKS

Richard Platt, M.D., M.S.

Harvard Medical School, Harvard Pilgrim Health Care

The Case for Distributed Networks

The information created by the delivery of medical care—about individuals, their health status, the treatment they receive, and their health outcomes—also can teach us a great deal about how well treatments work, the risks they entail, and the cost of better health. It also can tell us about the health of the population and the adequacy of healthcare delivery, illuminate gaps in care, and support clinical research. Additionally, if we add information about providers and the organizations that deliver care, it will be possible to assess the quality of health care.

This information exists for a substantial fraction of the U.S. population, although it takes many forms and is held by many organizations. Examples include ambulatory practices' and hospitals' separate electronic medical records, health plans' and insurers' membership and administrative claims systems, pharmacy benefits managers' dispensing records, and, increasingly, individuals' personally controlled health records. Additional information, such as public health agencies' birth, death, and cancer registries and research organizations' special-purpose datasets, also may play an important role.

This discussion focuses on distributed data networks that allow the secondary use (i.e., use other than direct patient care) of different organizations' data. “Distributed data network” is used here to mean a collection of separate data repositories that can function as if they were linked in a single combined dataset by executing and responding to electronic queries posed in a standard format. The critical notion is that it is not necessary to create a large pooled dataset containing enough information to answer a wide range of potential questions, since nearly all of the goals described above can be accomplished by having the separate data sources provide limited amounts of information on a just-in-time basis to answer specific questions. This makes an important distinction between the data, e.g., all of a person's drug dispensings and diagnoses, and the answer to a question, such as whether a specific drug is associated with a specific adverse outcome among individuals with various characteristics. A fully developed distributed data network will be able to efficiently address essentially any question that could be answered by a pooled dataset.

Maintaining the information in a distributed network has advantages over a pooled dataset with regard to protection of confidential and proprietary information, local decision making regarding participation in specific activities, and the ongoing involvement of individuals with expertise in interpretation of the data. With regard to privacy, the distributed approach minimizes, and often eliminates, sharing of confidential personal information that is increasingly difficult to fully de-identify without compromising its utility. Because of this privacy concern, avoiding the creation of large pooled datasets conforms to widely held public views about the use of personal information (The Markle Foundation, 2006). It is also easier to satisfy the privacy requirements of the Health Insurance Portability and Accountability Act (HIPAA) (U.S. Department of Health and Human Services, 2008). For data owners, a network structure in which the data originators maintain physical control over the data reduces the barrier to participation. For private organizations, it allows them to weigh any risk to proprietary value from sharing against the public health utility of participation. Private organizations are more likely to participate in networks that allow them to decide on a case-by-case basis whether to join a specific activity, e.g., assessment of postmarketing drug safety, and are more likely to assent to such uses than to uses determined after a pooled dataset is created. Keeping the data in the possession of the data developers also preserves the ability to interpret the information: both clinical and administrative data systems are evolving rapidly in ways that may not be apparent but that can profoundly influence the interpretation of the information they contain. For example, an undocumented increase in the number of diagnoses that a clinical data system stored per encounter led to a spurious signal of an influenza outbreak (Buehler et al., 2007). There is thus a need for ready access to local expertise to interpret content. Finally, centralized data systems entail greater risks of catastrophic security breaches.

Initiatives to Build Large Distributed Research Networks

Several initiatives are currently underway to develop distributed networks that are intended eventually to have access to the health information of a substantial fraction of the U.S. population.

The Institute of Medicine stimulated current efforts to develop a network to assess postmarketing experience with drugs by recommending that the FDA develop an active postmarketing surveillance program (IOM, 2006). The Food and Drug Administration Amendments Act (FDAAA) of 2007 (FDA, 2007b) required the FDA to develop a postmarketing evidence system that can evaluate the experience of 100 million people. The FDA announced plans for a Sentinel Network (FDA, 2007a, 2008), which it describes as a distributed network rather than a single database.

The Agency for Healthcare Research and Quality is supporting development of a prototype for a scalable national network to support research on the comparative effectiveness and safety of therapeutics (Agency for Healthcare Research and Quality, 2008). This initiative is part of the Agency’s Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) program and is led by the HMO Research Network Center for Education and Research on Therapeutics (CERT) and the University of Pennsylvania. It builds on the HMO Research Network’s experience in using distributed data methods for therapeutics research (Andrade et al., 2006; Raebel et al., 2007; Wagner et al., 2006).

The Robert Wood Johnson Foundation has funded an initiative to create a distributed data capability to provide national information about the quality and cost of health care (Robert Wood Johnson Foundation, 2007). One of the components of this activity involves development of a distributed data network, led by America’s Health Insurance Plans.

Models for Organizing Distributed Networks

Existing distributed networks provide some idea about approaches that may be successful as larger networks are developed. Each of the examples described here relied on several common features: (1) the organizations that developed the data (HIPAA's “covered entities”) extracted a common set of data elements from their information systems, transformed them into a common format, and stored the data so they could access it easily for repeated queries; (2) to function as a distributed network, they executed identical computer programs that were developed by an agreed-upon process to which all participants provided input; (3) they typically shared summary data with a coordinating center, rather than person-level analysis files; and (4) they provided detailed, patient-level data, sometimes to a health department, only in the event of a specific need to know more about the individual.

The National Bioterrorism Syndromic Surveillance Demonstration Program used a distributed network approach to surveillance for bioterrorism events and clusters of naturally occurring illness in five HMO Research Network health plans (Lazarus et al., 2006; Yih et al., 2004). This demonstration program used a fully distributed automated method to identify clusters of illness. It accomplished this by having the health plans execute computer programs that created daily extracts of the preceding days' encounters, put them into a standard format, and identified new episodes of illness that met the Centers for Disease Control and Prevention's (CDC's) criteria for syndromes of interest, such as influenza-like illness or lower gastrointestinal illness. The programs assigned the new episodes to the patients' zip codes of residence, and each site then automatically communicated the daily totals of new episodes for each syndrome in each zip code to a coordinating center that used a space-and-time scan statistic to identify unusual clusters of illness. Notice of these clusters was sent from the coordinating center back to the originating site and to the relevant health department. If the health department wanted more information about the individuals who were part of the cluster, it contacted the health plan, which retained full information about the individuals and could provide identifying information as well as the full clinical detail available in the patients' electronic medical records (Figure 4-5). This program illustrates the ability of a distributed system to provide immediate information to support public health needs. Although the health plans used information from their entire populations, they shared person-level information only about individuals in whom the health department was specifically interested.

The CDC-sponsored Vaccine Safety Datalink (VSD), founded in 1991, has operated since 2000 as a distributed data network in eight health plan members of the HMO Research Network. The VSD's distributed network operates a real-time active postmarketing surveillance system for new vaccines. It relies on weekly automated submission to a coordinating center of counts of vaccine exposures and prespecified outcomes of interest in a total analyzable population of approximately 8 million individuals. It uses sequential analysis methods to identify signals of excess risk, which are validated by review of full-text medical records (Lieu et al., 2007).

FIGURE 4-5 Schematic view of data flow for the National Bioterrorism Syndromic Surveillance Demonstration Program.

This distributed method of active surveillance recently identified a signal of excess seizures associated with a quadrivalent measles-mumps-rubella-varicella vaccine, prompting a change in the Advisory Committee on Immunization Practices' recommendation for use of the vaccine (Centers for Disease Control and Prevention, 2008). The Vaccine Safety Datalink's general approach to real-time postmarketing surveillance also should be applicable to drugs, although additional development will be required (Brown et al., 2007).
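The flavor of such sequential surveillance can be conveyed with a toy example. The sketch below accumulates weekly observed and expected event counts and signals when a Poisson log-likelihood ratio crosses a fixed threshold; it is a simplification of the maximized sequential probability ratio testing actually used in this setting, and the counts and threshold are invented for illustration.

```python
import math

# Weekly event counts among the vaccinated (observed) and the counts expected
# from historical rates. Both series are hypothetical.
weekly_observed = [3, 5, 4, 7, 9, 8]
weekly_expected = [2.5, 3.0, 3.1, 3.2, 3.0, 3.1]
THRESHOLD = 3.0  # illustrative; in practice calibrated to the surveillance plan

obs = exp = 0.0
for week, (o, e) in enumerate(zip(weekly_observed, weekly_expected), start=1):
    obs, exp = obs + o, exp + e
    if obs > exp:
        # Poisson log-likelihood ratio comparing the observed rate to the
        # historical rate, evaluated at the maximum-likelihood alternative.
        llr = obs * math.log(obs / exp) - (obs - exp)
        if llr > THRESHOLD:
            print(f"Signal at week {week}: observed {obs:.0f} vs expected {exp:.1f}")
            break
else:
    print("No signal over the surveillance period")
```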

An ad hoc distributed network assembled to evaluate the risk of Guillain-Barré syndrome, a potentially life-threatening neurologic condition, following meningococcal conjugate vaccine (ClinicalTrials.gov, 2008) is notable both because of the size of the covered population and because it uses a hybrid data model that incorporates both distributed and pooled data methods. Five health plans with a combined membership exceeding 50 million people—half the number required by the FDAAA—are collaborating in this study. The health plans operate as a distributed network insofar as they create standard data files and execute shared computer programs that perform the large majority of the analyses, which are shared in tabular form and then pooled. The health plans also obtain detailed clinical information about potential cases of Guillain-Barré syndrome identified through diagnosis codes by obtaining full-text medical records. Final case status is determined by an expert panel that reviews these records after the health plans redact personal identifying information. The study includes both an analysis of the full cohort, which is performed in a fully distributed fashion, and a nested case-control study that uses multivariate methods requiring creation of a pooled dataset involving 0.2 percent of the entire cohort (12,000 individuals). To support the case-control study, the health plans create analysis-level files containing one record for each case or control. The only protected health information that the covered entity shares with the coordinating center is the month and year in which individuals were immunized.

These examples of distributed networks illustrate the potential for distributing much of the data processing as well as the data storage. Distributed processing minimizes the need to create pooled person-level datasets and thus helps minimize the amount of patient-level data that must leave the covered entities.

Organizational Models for Distributed Networks

Distributed networks can operate in several ways. Figure 4-6 shows a schematic of the network design that is planned as part of the AHRQ prototype distributed network mentioned above.

FIGURE 4-6 Distributed research network prototype using central coordinating center.

The system will accommodate different kinds of data and is planned eventually to include claims data, inpatient and outpatient EMR data, registry data, and other information that are not part of the current prototype. It also will be able to integrate information in personally controlled health records (Mandl and Kohane, 2008), to the extent that these become widespread and that both individuals and the organizations that hold the records make them available.

In this system, a common query interface will send queries to participating organizations. Queries pass through the participating sites' firewalls, as much processing as possible takes place behind those firewalls, and responses are then sent back from the participating organizations. As noted above, the network will emphasize sharing the results of analyses rather than patient-level datasets.

Another organizational model is the peer-to-peer design used by the Shared Pathology Information Network (SPIN) (Drake et al., 2007). This model has been generalized to apply to other uses, including public health surveillance and clinical research (McMurry et al., 2007). The peer-to-peer approach also underlies the planned Shared Health Research Information Network (SHRINE) (Brigham and Women’s Hospital, Harvard Medical School, 2008), developed at Harvard to support research uses of separate data warehouses maintained by different healthcare institutions. This networking capability is an extension of software created for Informatics for Integrating Biology and the Bedside (i2b2), to support clinical research using health care institutions’ clinical data warehouses (Partners Healthcare, 2008).

Governance

Developing effective governance models for distributed networks to improve population health and healthcare delivery will be a major challenge. Figure 4-7 illustrates a potential governance model for a multipurpose network that accommodates participation by multiple users. In this model the development and maintenance of infrastructure is largely separate from, though informed by, the users. Governance of infrastructure would focus on the creation of data standards and infrastructure that allow the same resources to support separate user groups and uses. In such a model, decisions about the availability of the network's information to public and private users would rest most naturally with the holders of the data, who could choose as individual organizations whether or not to participate in individual activities or categories of activities on a case-by-case basis. However, since certain types of uses are likely to recur, individual data holders or groups of data holders may develop standards that apply generally to their participation. Such standards might address issues such as the extent of data holders' participation in the development and execution of studies, ensuring confidentiality of personal information, secondary use of data, transparency regarding the specific studies being performed, and commitments to dissemination of results.

FIGURE 4-7 Potential schema for organization and governance of a multipurpose national distributed network. In this arrangement, the distributed network serves multiple users, which would include both public agencies, such as the FDA, CDC, NIH, or AHRQ, and private organizations, such as academic research organizations or industry. Different priorities and rules of access would apply depending on the use and the user.


Specific examples of activities the network might support include the following. The FDA might use relevant parts of the network to support postmarketing surveillance, CDC might use the same or other parts to support prevention initiatives, AHRQ might use it to support comparative effectiveness research, and the NIH might use it to support clinical research. Private organizations would also be logical users of the network to support a wide range of inquiries. Each activity, or category of activity, could be led by the separate user groups, usually in collaboration with the data holders, and have separate governance mechanisms and funding models.


Summary

We will need distributed networks to assess medical care and its outcomes because this is almost certainly a more realistic way to develop and maintain these data than large pooled databases. Experience to date makes clear that it is technically feasible to build and use distributed networks, although considerable investment will be needed to develop additional resources and to create more efficient methods of using the networks. Furthermore, it appears feasible to develop distributed networks so that a common infrastructure can support an array of different uses in the public interest. Creation of effective governance mechanisms will be a considerable challenge, as will development of a sustainable mechanism to fund development and maintenance of infrastructure for both technical issues and governance.

REFERENCES

Agency for Healthcare Research and Quality. 2008. Developing a Distributed Research Network to Conduct Population-based Studies and Safety Surveillance. http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=nr&ProcessID=54 (accessed March 30, 2008).

Andrade, S. E., M. A. Raebel, A. N. Morse, R. L. Davis, K. A. Chan, J. A. Finkelstein, K. K. Fortman, H. McPhillips, D. Roblin, D. H. Smith, M. U. Yood, R. Platt, and J. H. Gurwitz. 2006. Use of prescription medications with a potential for fetal harm among pregnant women. Pharmacoepidemiology and Drug Safety 15(8):546-554.

Berwick, D. M. 2008. The science of improvement. Journal of the American Medical Association 299(10):1182-1184.

Blumenthal, D., and C. M. Kilo. 1998. A report card on continuous quality improvement. Milbank Quarterly 76(4):511, 625-648.

Brigham and Women's Hospital, Harvard Medical School. 2008. Decision Systems Group, Weekly Seminars. http://www.dsg.harvard.edu/index.php/Main/Seminars2007#d51 (accessed April 19, 2008).

Brook, R. H., and K. N. Lohr. 1985. Efficacy, effectiveness, variations, and quality. Boundary-crossing research. Medical Care 23(5):710-722.

Brown, J. S., M. Kulldorff, K. A. Chan, R. L. Davis, D. Graham, P. T. Pettus, S. E. Andrade, M. A. Raebel, L. Herrinton, D. Roblin, D. Boudreau, D. Smith, J. H. Gurwitz, M. J. Gunter, and R. Platt. 2007. Early detection of adverse drug events within population-based health networks: Application of sequential testing methods. Pharmacoepidemiology and Drug Safety 16(12):1275-1284.

Buehler, J. W., D. M. Sosin, and R. Platt. 2007. Evaluation of surveillance systems for early epidemic detection. In Infectious Disease Surveillance, edited by N. M. M'ikanatha, R. Lynfield, C. A. Van Beneden, and H. de Valk. Malden, MA: Blackwell Publishing.

Centers for Disease Control and Prevention. 2008. Vaccines and Immunizations: Recommendations and Guidelines. http://www.cdc.gov/vaccines/recs/ACIP/slides-feb08.htm#mmrv (accessed March 31, 2008).

ClinicalTrials.gov. 2008. Safety Study of GBS Following Menactra Meningococcal Vaccination. http://clinicaltrials.gov/ct2/show/NCT00575653?term=NCT00575653&rank=1 (accessed March 31, 2008).


The Commonwealth Fund Commission on a High Performance Health System. 2005. Framework for a High Performance Health System for the United States. New York: The Commonwealth Fund.

de Koning, J. S., N. S. Klazinga, P. J. Koudstaal, A. Prins, G. J. Borsboom, and J. P. Mackenbach. 2005. The role of “confounding by indication” in assessing the effect of quality of care on disease outcomes in general practice: Results of a case-control study. BMC Health Services Research 5(1):10.

Drake, T. A., J. Braun, A. Marchevsky, I. S. Kohane, C. Fletcher, H. Chueh, B. Beckwith, D. Berkowicz, F. Kuo, Q. T. Zeng, U. Balis, A. Holzbach, A. McMurry, C. E. Gee, C. J. McDonald, G. Schadow, M. Davis, E. M. Hattab, L. Blevins, J. Hook, M. Becich, R. S. Crowley, S. E. Taube, and J. Berman. 2007. A system for sharing routine surgical pathology specimens across institutions: The shared pathology informatics network. Human Pathology 38(8):1212-1225.

Egorova, N., et al. 2008. National outcomes for the treatment of ruptured abdominal aortic aneurysm: Comparison of open versus endovascular repairs. Journal of Vascular Surgery 48(5):1092.e2-1100.e2.

Egorova, N., J. Giacovelli, A. Gelijns, L. Mureebe, G. Greco, N. Morrissey, R. Nowygrod, A. Moskowitz, J. McKinsey, and K. C. Kent. 2009. Defining high risk patients for endovascular aneurysm repair. Journal of Vascular Surgery 50(6):1271-1279.

Eng, P. M., J. D. Seeger, J. Loughlin, C. R. Clifford, S. Mentor, and A. M. Walker. 2008. Supplementary data collection with case-cohort analysis to address potential confounding in a cohort study of thromboembolism in oral contraceptive initiators matched on claims-based propensity scores. Pharmacoepidemiology and Drug Safety 17(3):297-305.

Epic Systems Corporation. 2008. EpicCare. http://www.epicsystems.com/ (accessed July 8, 2008).

FDA (Food and Drug Administration). 2007a. Food and Drug Administration Sentinel Network Public Meeting. http://www.fda.gov/oc/op/sentinel/transcript030707.html (accessed March 30, 2008).

———. 2007b. Law Strengthens FDA. http://www.fda.gov/oc/initiatives/advance/fdaaa.html (accessed March 30, 2008).

———. 2008. Sentinel Network. http://www.fda.gov/oc/op/sentinel/ (accessed March 30, 2008).

Flum, D. R., A. Morris, T. Koepsell, and E. P. Dellinger. 2001. Has misdiagnosis of appendicitis decreased over time? A population-based analysis. Journal of the American Medical Association 286(14):1748-1753.

Gawande, A. 2004. The bell curve: What happens when patients find out how good their doctors really are? The New Yorker. December 6, 2004.

Gelijns, A. C., N. Rosenberg, and A. J. Moskowitz. 1998. Capturing the unexpected benefits of medical research. New England Journal of Medicine 339(10):693-698.

Gelijns, A. C., L. D. Brown, C. Magnell, E. Ronchi, and A. J. Moskowitz. 2005. Evidence, politics, and technological change. Health Affairs (Millwood) 24(1):29-40.

Gliklich, R. E., and N. Dreyer. 2007. Registries for Evaluating Patient Outcomes: A User’s Guide. AHRQ Publication No. 07-EHC001-1. Rockville, MD: U.S. Department of Health and Human Services, Public Health Service, Agency for Healthcare Research and Quality.

Greco, P. J., and J. M. Eisenberg. 1993. Changing physicians’ practices. New England Journal of Medicine 329(17):1271-1273.

Hannan, E. L., M. J. Racz, G. Walford, R. H. Jones, T. J. Ryan, E. Bennett, A. T. Culliford, O. W. Isom, J. P. Gold, and E. A. Rose. 2005. Long-term outcomes of coronary-artery bypass grafting versus stent implantation. New England Journal of Medicine 352(21):2174-2183.

Hartig, J. R., and J. Allison. 2007. Physician performance improvement: An overview of methodologies. Clinical and Experimental Rheumatology 25(6 Suppl 47):50-54.

Hayward, R. S., M. C. Wilson, S. R. Tunis, E. B. Bass, and G. Guyatt. 1995. Users’ guides to the medical literature. VIII. How to use clinical practice guidelines. A. Are the recommendations valid? The evidence-based medicine working group. Journal of the American Medical Association 274(7):570-574.

Hlatky, M. A., K. L. Lee, F. E. Harrell, Jr., R. M. Califf, D. B. Pryor, D. B. Mark, and R. A. Rosati. 1984. Tying clinical research to patient care by use of an observational database. Statistics in Medicine 3(4):375-387.

Institute for Healthcare Improvement. Testing Changes. http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/HowToImprove/testingchanges.htm (accessed July 8, 2008).

INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support). 2008. http://www.intermacs.org/ (accessed July 9, 2008).

IOM (Institute of Medicine). 2006. The Future of Drug Safety. Washington, DC: The National Academies Press.

Jick, S., J. A. Kaye, L. Li, and H. Jick. 2007. Further results on the risk of nonfatal venous thromboembolism in users of the contraceptive transdermal patch compared to users of oral contraceptives containing norgestimate and 35 microg of ethinyl estradiol. Contraception 76(1):4-7.

Juran, J. M. 1995. Managerial Breakthrough: The Classic Book on Improving Management Performance. 30th anniversary ed. New York: McGraw-Hill.

Krumholz, H. M., M. J. Radford, Y. Wang, J. Chen, A. Heiat, and T. A. Marciniak. 1998. National use and effectiveness of beta-blockers for the treatment of elderly patients after acute myocardial infarction: National cooperative cardiovascular project. Journal of the American Medical Association 280(7):623-629.

Lazarus, R., K. Yih, and R. Platt. 2006. Distributed data processing for public health surveillance. BMC Health Services Research 6:235.

Lietz, K., J. W. Long, A. G. Kfoury, M. S. Slaughter, M. A. Silver, C. A. Milano, J. G. Rogers, Y. Naka, D. Mancini, and L. W. Miller. 2007. Outcomes of left ventricular assist device implantation as destination therapy in the post-REMATCH era: Implications for patient selection. Circulation 116(5):497-505.

Lieu, T. A., M. Kulldorff, R. L. Davis, E. M. Lewis, E. Weintraub, K. Yih, R. Yin, J. S. Brown, and R. Platt. 2007. Real-time vaccine safety surveillance for the early detection of adverse events. Medical Care 45(10 Suppl 2):S89-S95.

Mandl, K. D., and I. S. Kohane. 2008. Tectonic shifts in the health information economy. New England Journal of Medicine 358(16):1732-1737.

The Markle Foundation. 2006. The Common Framework: Overview and Principles. Connecting for Health. http://www.connectingforhealth.org/commonframework/docs/Overview.pdf (accessed March 19, 2008).

McMurry, A. J., C. A. Gilbert, B. Y. Reis, H. C. Chueh, I. S. Kohane, and K. D. Mandl. 2007. A self-scaling, distributed information architecture for public health, research, and clinical care. Journal of the American Medical Informatics Association 14(4):527-533.

Miller, L. W., K. E. Nelson, R. R. Bostic, K. Tong, M. S. Slaughter, and J. W. Long. 2006. Hospital costs for left ventricular assist devices for destination therapy: Lower costs for implantation in the post-REMATCH era. Journal of Heart and Lung Transplantation 25(7):778-784.

Mona Eng, P., J. D. Seeger, J. Loughlin, K. Oh, and A. M. Walker. 2007. Serum potassium monitoring for users of ethinyl estradiol/drospirenone taking medications predisposing to hyperkalemia: Physician compliance and survey of knowledge and attitudes. Contraception 75(2):101-107.

Neaton, J. D., S. L. Normand, A. Gelijns, R. C. Starling, D. L. Mann, and M. A. Konstam. 2007. Designs for mechanical circulatory support device studies. Journal of Cardiac Failure 13(1):63-74.

Oz, M. C., A. C. Gelijns, L. Miller, C. Wang, P. Nickens, R. Arons, K. Aaronson, W. Richenbacher, C. van Meter, K. Nelson, A. Weinberg, J. Watson, E. A. Rose, and A. J. Moskowitz. 2003. Left ventricular assist devices as permanent heart failure therapy: The price of progress. Annals of Surgery 238(4):577-583; discussion 583-585.

Partners Healthcare. 2008. Informatics for Integrating Biology and the Bedside. http://www.i2b2.org/ (accessed April 19, 2008).

Porter, M. E., and E. Olmsted-Teisberg. 2006. Redefining Health Care: Creating Value-Based Competition on Results. Boston, MA: Harvard Business School Press.

Poses, R. M., and M. Anthony. 1991. Availability, wishful thinking, and physicians’ diagnostic judgments for patients with suspected bacteremia. Medical Decision Making 11(3):159-168.

Raebel, M. A., D. L. McClure, S. R. Simon, K. A. Chan, A. Feldstein, S. E. Andrade, J. E. Lafata, D. Roblin, R. L. Davis, M. J. Gunter, and R. Platt. 2007. Laboratory monitoring of potassium and creatinine in ambulatory patients receiving angiotensin converting enzyme inhibitors and angiotensin receptor blockers. Pharmacoepidemiology and Drug Safety 16(1):55-64.

Robert Wood Johnson Foundation. 2007. National Effort to Measure and Report on Quality and Cost-effectiveness of Health Care Unveiled. http://www.rwjf.org/pr/product.jsp?id=22371&typeid=160 (accessed March 30, 2008).

Roberts, T. G., Jr., and B. A. Chabner. 2004. Beyond fast track for drug approvals. New England Journal of Medicine 351(5):501-505.

Rose, E. A., A. C. Gelijns, A. J. Moskowitz, D. F. Heitjan, L. W. Stevenson, W. Dembitsky, J. W. Long, D. D. Ascheim, A. R. Tierney, R. G. Levitan, J. T. Watson, P. Meier, N. S. Ronan, P. A. Shapiro, R. M. Lazar, L. W. Miller, L. Gupta, O. H. Frazier, P. Desvigne-Nickens, M. C. Oz, and V. L. Poirier. 2001. Long-term mechanical left ventricular assistance for end-stage heart failure. New England Journal of Medicine 345(20):1435-1443.

Sands, B. E., M. S. Duh, C. Cali, A. Ajene, R. L. Bohn, D. Miller, J. A. Cole, S. F. Cook, and A. M. Walker. 2006. Algorithms to identify colonic ischemia, complications of constipation and irritable bowel syndrome in medical claims data: Development and validation. Pharmacoepidemiology and Drug Safety 15(1):47-56.

Schneeweiss, S. 2007. Developments in post-marketing comparative effectiveness research. Clinical Pharmacology and Therapeutics 82(2):143-156.

Second International Conference on Improving Use of Medicines. 2004. Recommendations on Insurance Coverage. http://mednet3.who.int/icium/icium2004/Documents/Insurance%20coverage.doc (accessed July 8, 2008).

Seeger, J. D., P. L. Williams, and A. M. Walker. 2005. An application of propensity score matching using claims data. Pharmacoepidemiology and Drug Safety 14(7):465-476.

Seeger, J. D., J. Loughlin, P. M. Eng, C. R. Clifford, J. Cutone, and A. M. Walker. 2007. Risk of thromboembolism in women taking ethinylestradiol/drospirenone and other oral contraceptives. Obstetrics and Gynecology 110(3):587-593.

Shewhart, W. A. 1939. Statistical Method from the Viewpoint of Quality Control. Reprint, New York: Dover Publications, 1986.

Stewart, W. F., N. R. Shah, M. J. Selna, R. A. Paulus, and J. M. Walker. 2007. Bridging the inferential gap: The electronic health record and clinical evidence. Health Affairs (Millwood) 26(2):w181-w191.

Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.

U.S. Department of Health and Human Services. 2008. Medical Privacy—National Standards to Protect the Privacy of Personal Health Information. http://www.hhs.gov/ocr/hipaa/ (accessed March 30, 2008).

Wagner, A. K., K. A. Chan, I. Dashevsky, M. A. Raebel, S. E. Andrade, J. E. Lafata, R. L. Davis, J. H. Gurwitz, S. B. Soumerai, and R. Platt. 2006. FDA drug prescribing warnings: Is the black box half empty or half full? Pharmacoepidemiology and Drug Safety 15(6):369-386.

Walker, A. M., and R. P. Wise. 2002. Precautions for proactive surveillance. Pharmacoepidemiology and Drug Safety 11(1):17-20.

Walker, A. M., G. Schneider, J. Yeaw, B. Nordstrom, S. Robbins, and D. Pettitt. 2006. Anemia as a predictor of cardiovascular events in patients with elevated serum creatinine. Journal of the American Society of Nephrology 17(8):2293-2298.

Yih, W. K., B. Caldwell, R. Harmon, K. Kleinman, R. Lazarus, A. Nelson, J. Nordin, B. Rehm, B. Richter, D. Ritzwoller, E. Sherwood, and R. Platt. 2004. National bioterrorism syndromic surveillance demonstration program. Morbidity and Mortality Weekly Report 53(Suppl):43-49.
