5


Implementation Priorities

INTRODUCTION

Significant gains in the efficiency, effectiveness, and value of health care delivered in the United States are possible with a greater system focus on developing and applying insights on what works best for whom. The near-term needs for an expanded and broadly supported capacity for comparative effectiveness research (CER) include infrastructure for the requisite work (e.g., methods, technical support, coordinating capacities), information networks, and workforce. Identification of the highest-priority implementation needs will guide strategic and coordinated development of needed capacity. Consideration is also needed of how infrastructure development might best build upon existing capacity. Papers in this chapter focus on five key areas for work: (1) information technology (IT) platforms, (2) data resource and analysis improvement, (3) clinical research infrastructure, (4) health professions training, and (5) building the training capacity. Each paper offers suggestions for prioritization and staging of policies, as well as possible approaches to increasing the scale of activities. Also discussed are opportunities to take advantage of existing manufacturer, insurer, and public capacities through public–private partnership.

The first three papers focus on developing information acquisition and exchange tools as well as the research approaches essential to speeding evidence development. Based on his experiences developing a regional health information exchange in Tennessee (the Memphis Exchange), Mark E. Frisse of Vanderbilt University suggests several implementation priorities for the development of an IT platform that will realize significant societal benefit at a realistic marginal cost.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





With appropriate design and integration, the current collection of databases, health record systems, health information exchanges, financing, workforce, policies, and governance can be evolved into a system that addresses a range of needs in care delivery, process improvement, and research. T. Bruce Ferguson from the East Carolina Heart Institute discusses clinical database work in the field of cardiology and identifies key opportunities to apply data resource and analysis infrastructure toward the development of dynamic, real-time learning systems centered on the patient and decisions at the point of care. Finally, Daniel E. Ford of Johns Hopkins University discusses opportunities to improve the efficiency and effectiveness of clinical research by streamlining and standardizing processes and policies, increasing investments in practice-based networks, and training and retaining research support personnel.

Two papers focus on the workforce at the front lines of evidence application and development—health professionals and clinical researchers. Benjamin K. Chu from Kaiser Permanente describes changes to the healthcare delivery system that will shape the future practice environment and illustrates how training and practice environments for health professions education should seek to emulate and improve upon current models of best care. Steven A. Wartman of the Association of Academic Health Centers describes a needed expansion of medical research to a multidisciplinary approach that addresses all aspects of health. He offers some suggestions on how the training capacity might be developed to accelerate a shift to research focused on the discovery, dissemination, and optimized adoption of practices that advance the health of individuals and the public.
This chapter concludes with a discussion highlighting opportunities to take best advantage of existing infrastructure elements—such as data resources, expertise, and technology platforms. Speaking from key sector perspectives, Carmella A. Bocchino from America's Health Insurance Plans, Rachael E. Behrman from the Food and Drug Administration (FDA), and William Z. Potter from Merck Research Laboratories discuss how public–private partnerships can create needed space for cross-sector collaboration around common areas of interest and expertise.

INFORMATION TECHNOLOGY PLATFORM REQUIREMENTS

Mark E. Frisse, M.D., M.Sc., M.B.A., Professor of Biomedical Informatics, Vanderbilt University

Overview

The overarching intent of this publication is to better understand the requirements necessary to transform our fragmented healthcare infrastructure into a learning health system. This system must be structured in a way that draws on the best evidence, delivers the best value, adds to learning throughout the system of care, leads to improvements in the nation's health, and ensures that "each patient receives the right care at the right time" (IOM, 2007, 2008).

Where IT platform requirements are concerned, with thought and cautious action, it is possible to realize the aims of a learning health system through an evolution of our current collection of databases, health record systems, health information exchanges, financing, workforce, policies, and governance. Properly designed and integrated, the composite system would be able to address a wide range of needs at a manageable marginal cost for each. However, preserving the status quo without thoughtful attention to ends and means may actually impede long-term progress for the sake of short-term expedience.

A recent report by the National Research Council provides some guidance. Among the principles for change espoused in this report is the assertion that health technologies should "record available data so that today's biomedical knowledge can be used to interpret them to drive care, process improvement, and research" (NRC, 2009). All too often, the design of current systems emphasizes administrative transactions and episodic care at the expense of other priorities. Data are often embedded in specific applications and not represented in a way that clarifies their context or allows reinterpretation as both our analytic techniques and our needs change (NRC, 2009).

An Infrastructure Framework

IT platforms should be based on a clear framework that enables progress toward a wide range of scientific, clinical, and policy aims, while allowing for these aims to evolve over time. The framework should be guided by the analysis and prioritization of initiatives according to their value, difficulty, and requirements for data sharing.
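The kind of value/difficulty/data-sharing prioritization described here can be sketched as a simple scoring model. Everything below is an illustrative assumption: the field names, the 1-5 scales, and the weighting rule are hypothetical, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """One candidate initiative, scored on the three axes named in the text."""
    name: str
    value: int          # expected impact on effectiveness/quality/safety/efficiency (1-5)
    difficulty: int     # implementation difficulty (1-5; higher is harder)
    data_sharing: int   # extent of cross-institution data sharing required (1-5)

def priority_score(item: Initiative) -> float:
    # Hypothetical rule: reward value, discount difficulty and heavy
    # data-sharing requirements. Real weights would come from stakeholders.
    return item.value / (item.difficulty + item.data_sharing)

candidates = [
    Initiative("portable medication history", value=5, difficulty=2, data_sharing=3),
    Initiative("regional biosurveillance feed", value=3, difficulty=4, data_sharing=5),
]
ranked = sorted(candidates, key=priority_score, reverse=True)
```

Under this toy scoring, the medication-history initiative (5 / 5 = 1.0) outranks the biosurveillance feed (3 / 9, roughly 0.33); the point is only that the three axes named in the text are enough to support an explicit, revisable ranking.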
The framework should identify potential outcomes according to their impact on effectiveness, quality, safety, and efficiency. In practice, this framework would provide a means of assembling governance, policy, technology, and processes into a series of components that work with one another and that can evolve incrementally over time toward the primary goal of supporting and improving our ability to create and use healthcare knowledge.

Such an infrastructure focuses on components that must be assembled to realize specific outcomes. It is these components that should be the focus of activity. Instances of component collections—including various forms of electronic health records (EHRs), personal health records, and health information exchanges—should be viewed not as monolithic products but instead in terms of what their components contribute separately and collectively to meeting a specific clinical need. There are many discrete components and functions, including digital connectivity, source identification, data integrity checking, record location, data aggregation, audits, data collections, and computer–human interfaces. A system is composed of multiple instances of each component (e.g., databases and record locator services) originating in a diverse array of local and national settings and designed for different primary purposes. Each instance of a component can in theory be funded through different means and managed under different governance and operational controls. Each component's means of representing data can differ as long as two characteristics are met: (1) ways to combine data in order to achieve practice aims must be implemented, and (2) original data elements must be maintained in their original format and, to the greatest extent possible, coupled with the context in which they were obtained.

What unites the disparate instances of components and creates a true system is a clear separation of data from application, a retention of source and context, and a common minimal set of governance structures and policies that address appropriate uses, performance, financing, and responsibility. Governance, policy, and standards are coordinated only to the minimal extent necessary to achieve results, to gain trust, to demonstrate value, and to support incremental progress. System value is recognized not through successful implementation but rather through the impact the system and its components have on measurably improved outcomes.

Lessons from Memphis

The work necessary for developing a regional health information exchange in Memphis, Tennessee (the Memphis Exchange), demonstrates the feasibility of applying these principles and the practicality of this approach.
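The component view above can be made concrete with a small sketch. The class and field names below are hypothetical illustrations of one idea from the text: a "system" is a composition of independently governed component instances, with record location kept separate from data holding.

```python
class RecordLocator:
    """Answers 'which institutions hold records for this person?'
    It holds pointers only, never the clinical data itself."""
    def __init__(self, index: dict):
        self.index = index  # patient key -> list of institution names

    def locate(self, patient_key: str) -> list:
        return self.index.get(patient_key, [])


class DataSource:
    """One institution's holdings: data stay in their original format,
    under that institution's own governance, until actually requested."""
    def __init__(self, institution: str, records: dict):
        self.institution = institution
        self.records = records  # patient key -> list of raw data elements

    def fetch(self, patient_key: str) -> list:
        return self.records.get(patient_key, [])


def assemble_record(locator: RecordLocator, sources: dict, patient_key: str) -> list:
    """Combine data across sources only at the moment of use."""
    return [element
            for institution in locator.locate(patient_key)
            for element in sources[institution].fetch(patient_key)]
```

Each `DataSource` instance could be funded and operated differently; only the thin `assemble_record` step needs the shared, minimal governance the text describes.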
The Memphis Exchange is based on technologies and practices in use for over a decade at the Vanderbilt University Medical Center and described elsewhere (Stead, 2006; Stead and Starmer, 2007). This system produces short-term system-based results, supports incremental improvements, and fosters evolutionary change (Frisse et al., 2008; Johnson et al., 2008). Many lessons have been learned during its 3 years of use and operation.

First, trust and policy—not technology—are the primary barriers to realizing a desired IT platform. Developing data-sharing agreements governing use and oversight was arguably the most challenging initial task. This effort was accelerated considerably by efforts made through the Markle Foundation's Connecting for Health initiative (Connecting for Health, 2006).

Second, information from many different systems and encoded in many different acceptable standards can be combined inexpensively. These data are "liquid" and are not tied to a specific application but instead to a source, a context, and a unique individual. Each clinical or administrative data element is "wrapped" with a meta-level tag that provides a general description while the original data element—in whatever format it is received—is retained. Currently, the exchange receives data from multiple systems at over 20 major healthcare institutions. Some data elements—like laboratory results—can be presented in a uniform format using Logical Observation Identifiers Names and Codes (LOINC) (Porter et al., 2007). Such an approach can be generalized and can provide intermediate results while the long-term process of standards convergence takes place.

Third, identification and matching of data can be achieved with a high degree of precision if attention is devoted to measuring performance against a "gold standard" data set of 5,000 to 10,000 patients. Such a matching approach is not a master patient index in the traditional sense because no unique patient identifier is generated and linkages are represented as data clusters rather than as absolute mappings.

Fourth, perceptions of ownership are more important than the locality often embodied in the "centralized vs. decentralized" debate. In the Memphis Exchange, each participating institution publishes its data to its own "vault." A vault in this context is a logical database that may be housed in a central or distributed cluster of databases. What is important is that each institution providing data maintains control of its data until they are combined and used to treat an individual patient. When data are used, actual use is recorded in logs, and efforts to assure nonrepudiation are enforced. Our contention is that no system is completely centralized, and many significant queries can only be answered through a collection of loosely coupled systems.
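The "wrapped" data elements and uniform LOINC presentation described in the second lesson might look something like the sketch below. The tag fields and the one-entry code map are illustrative assumptions; real local-to-LOINC mappings cover many thousands of codes.

```python
def wrap_element(raw: bytes, source: str, context: str, kind: str) -> dict:
    """Attach a meta-level tag describing the element in general terms,
    while the original payload is retained byte for byte."""
    return {
        "meta": {"source": source, "context": context, "kind": kind},
        "original": raw,  # never rewritten, whatever format it arrived in
    }

# Hypothetical local-code-to-LOINC map (2823-3 is serum/plasma potassium).
LOCAL_TO_LOINC = {"K": "2823-3"}

def uniform_lab_view(element: dict, local_code: str) -> dict:
    """Present a uniform (LOINC-coded) view without discarding the original."""
    view = dict(element)
    view["loinc"] = LOCAL_TO_LOINC.get(local_code)
    return view
```

Because the wrapper keeps source and context alongside the untouched original, an element normalized this way can still be reinterpreted later as analytic techniques and needs change.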
Fifth, confidentiality and privacy can be achieved through a relatively absolute "opt in" or "opt out" decision made at each institution. The primary focus of our confidentiality efforts is on developing a network of trust that is heavily audited and rigorously enforced. This approach ensures that the only individuals examining data are those who have rights to do so (by law or consent). Selective restrictions on particular data elements, drugs, or disorders are not easily manageable and cannot be absolutely enforced unless all free-text documents are excluded. Unfortunately, these text documents (e.g., transcribed medical histories) often provide the most meaningful information both for patient care and for chart review.

Finally, based on the Vanderbilt experience, loosely coupled data sets from disparate resources seem capable of supporting a wide range of research efforts. Using technologies and methods similar to those of the Memphis Exchange, Vanderbilt researchers have developed a deoxyribonucleic acid (DNA) biobank linked to phenotypic data derived from the Vanderbilt EHR (Roden et al., 2008). Employing an opt-out consent model, these researchers have developed a statistically de-identified mirror image of the electronic medical record (EMR) called a "synthetic derivative." These records are linked to DNA extracted from discarded blood samples. In one test, the de-identification algorithm removed 5,378 of the 5,472 identifiers, with an error rate for complete Health Insurance Portability and Accountability Act (HIPAA) identifiers of less than 0.1 percent. The aggregate error rate—which includes any potential error, including non-HIPAA items, partial items, and items that are not inherently related to identity—was 1.7 percent. The ability of these de-identification procedures to discover and suppress identifiers was sufficient for institutional review boards to judge the research done with this system to be consistent with an Office for Human Research Protections "nonhuman subjects" designation.

It should be possible to apply such a process equally well to health information exchanges or other ways of accessing information from disparate sources. Such applications will be powerful tools in biosurveillance, public health research, quality improvement, and comparative effectiveness studies.

Applicability to Information Technology Platform Requirements

This approach is very affordable. The total operational costs for a region of 1 million people are under $3 million a year. Even with the additional expense incurred by increasing connectivity to smaller care settings and enhancing data-analytic capabilities, the overall cost will be less than $5 million (or $5 per capita per year). This expense should be compared with overall healthcare expenditures for such a region, which are estimated at $7.4 billion, or $7,400 per capita, per year. Thus the expense would amount to less than 0.07 percent of per capita healthcare expenditures.
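The arithmetic behind these figures is easy to restate. The numbers below come directly from the text; the script only checks that they are mutually consistent.

```python
# Regional exchange costs (figures from the text).
regional_population = 1_000_000
annual_cost_upper_bound = 5_000_000  # dollars, with expanded connectivity
cost_per_capita = annual_cost_upper_bound / regional_population  # $5 per person

spend_per_capita = 7_400  # regional healthcare spending per person per year
cost_share = cost_per_capita / spend_per_capita  # ~0.00068, "less than 0.07 percent"

# National extrapolation at the same $5 per capita.
national_estimate = 350_000_000 * cost_per_capita  # $1.75 billion, rounded in the text to $1.7 billion

# De-identification test on the Vanderbilt synthetic derivative.
total_identifiers = 5_472
identifiers_removed = 5_378
aggregate_error = (total_identifiers - identifiers_removed) / total_identifiers
# 94 / 5,472 is roughly 0.017, matching the 1.7 percent aggregate error rate.
```

The 94 identifiers missed out of 5,472 reproduce the stated 1.7 percent aggregate error rate exactly, and the $5-per-capita figure is what makes the "less than 0.07 percent of healthcare expenditures" claim work out.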
Because the costs are largely offset by reductions in duplicate testing, efficiencies in quality metrics, public health reporting, and other functions, the costs that could be allocated to knowledge management and the development of a learning health system are insignificant by almost any measure. Extrapolating to a population of 350 million, our cost estimates ($1.7 billion) are less than the estimates provided in Chapter 3 of this publication, but our cost models may be based on different assumptions (Miller, 2008).

The Role of Electronic Health Records

The Memphis Exchange is but one part of a larger health information technology (HIT) platform. Clearly, the choice and effectiveness of care delivery technologies (such as EHRs) are critical. Using Miller's estimates, marginal annual operating expenditures (per capita per year) would be in the range of $50 (Miller, 2008). As expected, the costs for systems to deliver the details of care exceed the cost estimates for integrating EHRs into a broad IT platform. EHR costs will likely be offset by efficiencies or driven by other practice imperatives, so the question is not so much what a system costs but the extent to which it improves practice performance and can send and receive data from other sources to achieve desired results. If systems are properly designed, the marginal benefit of a connected system is quite substantial, and the marginal cost of creating it (in the context of overall healthcare technology costs or of healthcare expenditures overall) can be very low. Thus the greatest risk to realizing great benefit at low financial and societal cost is likely to be the inclination to create monolithic systems that overengineer and promise more than they can deliver.

Additional Initiatives and Decisions

Some national investment decisions can be made that would simplify the integration of data across disparate systems. Although the Memphis Exchange demonstrates that much can be done without the monolithic standardization efforts and privacy initiatives espoused by many, much more can and must be done to make this experience more applicable.
Among the most valuable steps that could be taken are an immediate acceleration of knowledge representations that could be quickly applied to clinical use (e.g., RxNorm, the Unified Medical Language System); decisions about the extent to which payment and administration coding standards can reflect the disease states and contexts required of learning health systems (e.g., International Classification of Diseases [ICD]-9, Systematized Nomenclature of Medicine, ICD-10); enforcement of a few—and only a few—selective standards (e.g., LOINC, SCRIPT); promotion of efforts that make laboratory and medication histories more portable in a secure and affordable way; and selection of a few simple, high-quality initiatives that can guide improvement of any interventions enabled by IT (Frisse, 2006).

Focused trials with immediate findings are essential to ensure that IT expenditures are made wisely. Proposed legislation to accelerate the adoption of HIT does not assure an optimal outcome. Applying more funds to technologies that are not coupled to system improvements may help, may hurt, or may do both.[1]

[1] U.S. Senate Committee on Finance. 2009. American Recovery and Reinvestment Act of 2009.

DATA RESOURCE DEVELOPMENT AND ANALYSIS IMPROVEMENT

T. Bruce Ferguson, Jr., M.D., Chairman, Department of Cardiovascular Sciences, East Carolina Heart Institute and Brody School of Medicine at East Carolina University; and Ansar Hassan, M.D., Ph.D., Brody School of Medicine at ECU

Overview

Enormous challenges face U.S. healthcare stakeholders if the 2020 goal of the Roundtable on Value & Science-Driven Health Care—that 90 percent of clinical decisions will be supported by accurate, timely, and up-to-date clinical information that reflects the best available evidence—is to be met. Among the most complex of these challenges is the issue of the data and data analysis that will be used to drive those clinical decisions. What must be assembled from data and data analysis going forward is knowledge about the comparative effectiveness of (1) diagnostics and treatments, (2) the providers choosing and administering those diagnostics and treatments, and (3) the direct value and benefit to individual patients of (1) and (2). Within the context of CER, and using cardiovascular disease as an example, this paper addresses the data resource development and the data analysis improvement necessary for the migration of health care toward these 2020 goals.

Data as Knowledge

Despite a multiplicity of potential information resources, there is no cogent framework for selecting and using these resources. Within cardiovascular disease, each of the major stakeholder groups has independently developed, financed, and extensively used data generated from systems that are mostly perceived to be proprietary. These data types include the following:

• Data from the medical product (pharmaceutical and device) companies, which are incentivized to collect safety and efficacy data from pivotal randomized clinical trials (RCTs) for FDA approval of their technologies. The knowledge generated from these studies is critical to the regulatory process.
Because equipoise is necessary to randomize patients, particularly in noninferiority trial designs, this body of knowledge is scientifically valid but limited in its applicability to evaluating the effectiveness of overall care delivery. Controversy surrounds the application of these trial findings to patients beyond the trial design and beyond the FDA labeling for the technologies or pharmaceuticals. The lack of investment in postmarket data collection and analysis, except as required for physician and hospital reimbursement (e.g., the Centers for Medicare & Medicaid Services [CMS] Coverage with Evidence Development program), has generated an important data void in our healthcare system (Bach, 2007).

• Healthcare data available from the public domain and through federal agencies such as CMS, the Centers for Disease Control and Prevention (CDC), the Agency for Healthcare Research and Quality (AHRQ), and the Social Security Administration, which require analytical expertise and may be expensive to use. These data provide knowledge on the administrative, financial, and quality characteristics of care delivery based on claims and administrative data, which may be somewhat limited in describing actual clinical care delivery.

• Payers have developed robust administrative and claims-based proprietary systems that extend up to—but as yet do not include—whether a patient actually ingested the medication that was prescribed and filled. These systems are relatively unique in that they give a longitudinal documentation of care with data, some of which have been risk adjusted. These data provide knowledge about longitudinal care processes delivered by multiple providers but are confined to specific payer groups for defined periods of time.

• Practitioners in cardiovascular disease have developed robust clinical observational databases, such as the Society of Thoracic Surgeons' National Adult Cardiac Surgery Database (Ferguson et al., 2002), the American College of Cardiology Foundation's National Cardiovascular Data Registry (ACCF, 2008), and the American Heart Association's Get with the Guidelines (Giugliano and Braunwald, 2007).
In addition, regional databases, such as the Northern New England Cardiovascular Consortium (Malenka et al., 2005) and the New York State Cardiac Surgery and Percutaneous Coronary Intervention Registries, have been collecting data for over 15 years. These clinical registries have developed methods to describe risk-adjusted outcomes that, along with processes of care, describe care delivery specific to the procedure-based episode of care. They have independently validated the processes and outcomes of care that are linked to quality improvement. These systems provide knowledge about those care episodes that is clinically relevant but limited in its scope.

• Providers have also devoted considerable effort to the development of guidelines to direct clinical care (ACC, 2008). This is a resource-intensive effort, and much of the data available for guideline development falls short of class I data. The knowledge contained in the guidelines represents what expert consensus suggests should be done in clinical scenarios that fit into the guideline construct; however, this may limit their usefulness in comparative effectiveness analyses. More recently, the specialty societies have developed guidelines for appropriateness of care, which may become more useful (Douglas et al., 2008).

The fifth stakeholder group—the patients and their families—in part desires that this knowledge be integrated in such a way that care delivery centered on the needs and medical conditions of the patient is always available. This requires knowledge about processes and, preferably, risk-adjusted outcomes of care, as well as administrative and financial data. It cannot be accomplished by using data from just one stakeholder's system or by employing just one type of knowledge data.

Figure 5-1 illustrates the reason for this. For a patient with a medical condition for which there are two potentially applicable therapies, clinical trials data are unlikely to differentiate between the two therapies because of trial design issues (panel A). A more accurate representation of potential therapeutic effectiveness for that patient is derived from the pool of "application" data, or knowledge gained from data describing the ongoing application of health care to patients. In fact, this is the data domain in which most patients and providers reside, and it represents the real challenge regarding data resources and data analysis for comparative effectiveness.

A slightly different way of looking at this is represented in panel B of Figure 5-1. Wennberg et al. (2002, 2007) have described a recommendation for Medicare reform based upon three categories of medical services and their direct links to health care spending in the Medicare program. In fact, the majority of health care delivered is either preference- or supply-sensitive care, where the knowledge for these decisions comes from application data.
For example, based on National Adult Cardiac Surgery Database data, over 75 percent of patients currently undergoing coronary artery bypass grafts (CABGs) in the United States would not have been eligible for enrollment in the surgical arms of the major randomized trials of percutaneous coronary interventions (PCIs) vs. CABGs (Taggart, 2006), while at the same time an estimated 70 percent of drug-eluting stent (DES) use in this country is currently presumed to be "off-label" (Tung et al., 2006). In terms of the comparative effectiveness of these two therapies, a recent systematic review of PCI vs. CABG by an AHRQ-sponsored evidence-based practice center excluded observational analyses from the principal meta-analysis of trials, which concluded that survival at 10 years was similar between the two therapies (Bravata et al., 2007). Until recently, data from RCTs of PCIs with or without DESs vs. CABGs have not demonstrated any difference in outcomes at 1- or 5-year follow-up (Daemen et

FIGURE 5-1 Panel A shows the hypothetical relationship between information generated from RCT data and application data (data generated through the application of health care to patients) on two different therapeutic interventions. As a result of trial design and equipoise for randomization, an outcome such as mortality is unlikely to be measured as discernibly different. Over time, however, application data may highlight differences in that outcome. Some controversy exists as to whether data from RCTs are appropriate for making decisions in the application data space, and vice versa. Panel B relates this construct to the utilization of medical services as described by Wennberg et al. (2002). The majority of service utilization is in the preference- and supply-sensitive categories; these activities fall under the application data categorization and constitute the primary target area for comparative effectiveness research going forward. NOTE: RCT = randomized controlled trial.

For example, research methods and data analysis tools will need to be developed to ensure the production and validation of timely, reliable, and secure information. The distributed network approach of the Sentinel Initiative addresses some concerns about patient privacy, but other challenges remain. It is imperative to engage parties that collect, aggregate, and market data and to illustrate the critical need and business case for a sharper focus on outcomes research to improve the nation's health. For these issues, developing the appropriate governance structures and policies will be critical. Another challenge will be to ensure that the infrastructure developed considers and meets the needs of all parties while putting appropriate safeguards into place. Questions related to data access, use, and stewardship will have to be resolved.

These activities highlight many issues that will also be of central importance in the development of infrastructure for comparative effectiveness. Priority setting is critical in order to provide a common focus for all stakeholders as well as to identify key opportunities to develop smart and small pilot projects. Financing is a continual challenge, particularly given that infrastructure development is a long-term and expensive proposition. Continued attention is also needed to the governance of collaborations. A fourth and crucial area for work is data transparency; progress here is needed to ensure that analyses are conducted and reported responsibly and to avoid the development of unvetted, low-quality information. Finally, issues about how to handle proprietary data and patentable tools or processes will remain key areas of importance for all potential participants.
Public–Private Partnerships and Comparative Effectiveness Infrastructure Development

Public–private partnerships will be critical for the successful development of a national infrastructure for expanded CER as part of the IOM EBM effort. As with the Sentinel Initiative, the government alone cannot lead us to where we need and could be as a nation with respect to health. The FDA has focused on partnering with others because collaboration provides the best opportunity for substantial engagement by key stakeholders on issues of common interest and, therefore, a greater likelihood of success. The IOM effort is a large and complex project, and no one entity has the expertise, the resources, or the energy to carry it out alone. In addition, it will be important to create a nimble infrastructure to respond to dynamic and evolving research needs. Such an effort will require the engagement and participation of all sectors across the healthcare system. A government approach, possibly relying on legislation, may only slow progress.

Lessons learned from the Sentinel Initiative may be very useful for the IOM effort. Of particular benefit might be small collaborative pilots, similar to those under way as part of the Sentinel Initiative, that are making use of existing large databases to identify and test the tools and processes that will be needed to perform postmarket monitoring. Similar tools will be needed for comparative evidence analyses. Additional considerations that may be useful as the CER project evolves (or as pilots are identified that could inform the project) include the following:

• What specific tools need to be developed?
• What are the specific goals of a particular collaboration?
• How should specific projects or tasks be prioritized? And who should be tasked with setting priorities?
• Which stakeholders would be most beneficial to and interested in a particular collaborative project?
• Which organization or organizations can best take the lead on a specific project?
• How can needed short- and long-term resources be obtained?
• How can research results be made available to the community without undermining proprietary or patent interests?
• How do specific collaborations contribute to the larger effort?
• What time frames can realistically be set for short- and long-term goals?

Partnership formations will require careful vetting by all parties so that everyone involved has confidence in the successful operation of the partnership. Each completed partnership adds confidence about what it takes to make a partnership succeed; even so, every partnership will be different, raising new questions and unique hurdles.

Health Product Developers

William Z. Potter, M.D., Ph.D., Vice President, Franchise Integrator Neuroscience, Merck Research Laboratories

Two examples of public–private partnerships that have productively linked industry, government, academia, and other stakeholders to address issues of common concern in health care are the Biomarkers Consortium (BC) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI).
This paper briefly describes the processes of developing and sustaining these partnerships, as well as some of the key lessons learned that can inform the development of infrastructure for expanded CER. Some suggestions for priority areas for work and opportunities for greater engagement by the health product developer sector are also discussed.
Biomarkers Consortium

The BC, founded in 2006, was established to advance the discovery, development, and approval of biological markers to support new drug development, preventive medicine, and medical diagnostics. The consortium is a major public–private biomedical research partnership with broad participation from stakeholders across the health enterprise, including government, industry, academia, and patient advocacy and other nonprofit private-sector organizations. In addition to the Foundation for the NIH, founding members include the NIH, the FDA, and the Pharmaceutical Research and Manufacturers of America. Other partners in the consortium include CMS and the Biotechnology Industry Organization.

Imperative to a successful partnership is the careful delineation of specific areas of research focus that protect the individual interests of consortium members. After some discussion, consortium organizations agreed to work together to accelerate the identification, development, and regulatory acceptance of biomarkers in four areas: cancer, inflammation and immunity, metabolic disorders, and neuroscience. Additional goals of the consortium include conducting joint research in "precompetitive" areas with partners that share a common interest in advancing human health and improving patient care; speeding the development of medicines and therapies for detection, prevention, diagnosis, and treatment of disease; and making project results broadly available to the entire research community.

An example from neuroscience illustrates another key to the consortium’s success. As an initial focus, the group looked at the placebo response, a fundamental issue of common concern to all stakeholders. An important question for the field is the relative efficacy of antidepressants, but even the efficacy of antidepressants vs. placebo is often unclear.
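One reason the placebo question is hard is simple sampling arithmetic: with a single fixed underlying response probability and an arm of roughly 150 patients, the observed response rate varies from trial to trial, but only by a few percentage points. The sketch below computes this binomial standard error; the 35 percent response probability is an assumed illustration, not a figure from any actual trial.

```python
import math

# Binomial standard error of an observed response rate: if each patient in
# an arm responds independently with probability p, the observed rate over
# n patients has standard error sqrt(p * (1 - p) / n).
# p = 0.35 is an assumed illustration, not a figure from any trial.
def response_rate_se(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

se = response_rate_se(p=0.35, n=150)
low, high = 0.35 - 1.96 * se, 0.35 + 1.96 * se
print(round(se, 3))                   # ≈ 0.039
print(round(low, 2), round(high, 2))  # ≈ 0.27 0.43
```

Under this assumption, chance alone would keep observed placebo response rates within roughly 27 to 43 percent; spreads much wider than that across trials would point to differences in study design, population, or diagnosis rather than sampling variation.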
Consider the physician, or any other caretaker, who diagnoses and would like to treat a patient for depression. Trial results demonstrate that the placebo response is often enormously variable, ranging from some 20 percent up to as much as 60 percent in very large trials with up to 100 to 150 patients per arm. These findings raise significant questions about the validity of these data, the study design, or the diagnosis. Healthcare providers are interested in developing and using their data to clarify the quality of treatment and to determine the best possible course of care, but health product manufacturers also have an intense competitive interest in such data—particularly in improving the quality of data in this space and the analyses needed to inform critical healthcare decisions.

The BC addressed this set of issues by creating a metadata set. As outlined in the Foundation for the NIH’s Consortium Placebo Data Sharing proposal, ideal characteristics for implementation include identical study design; extensive characterization of each subject (e.g., more than
FDA requirements); data elements stored in standard, easily shared data systems; and appropriate informed consent. Such ideals are of course difficult to realize regularly on a macro level, and the consortium decided to focus initially on an area in which common public and private study designs were likely: antidepressant trials conducted since the introduction of selective serotonin reuptake inhibitors. Around this focus, the consortium has initiated several collaborative efforts, including a Depression Rating Scale Standardization Team (DRSST), a Placebo Response Collaborative Study Group, the National Institute of Mental Health Placebo Database Workshop, and placebo databases from Alzheimer’s disease trials.

Discussions leading to the development of these projects began in 2000, and the group is beginning to put the needed infrastructure in place through the Foundation for the NIH. Many of the lessons learned from these discussions will help to accelerate the development of infrastructure for CER work. Key barriers include the need for an internal champion within each company to work a proposal; the costs of full-time-equivalent staff and data management; skepticism by industry, NIH, and academic leadership that learnings of value can be gained; and variable legal opinion as to intellectual property and medicolegal risks.

Alzheimer’s Disease Neuroimaging Initiative

Another noteworthy public–private partnership is the ADNI. Started in 2004, this large research project seeks to define the rate of progression of mild cognitive impairment and Alzheimer’s disease in order to develop improved methods for clinical trials in this area and also to provide a large database that will improve the design of treatment trials. It is hoped that the project will provide information and methods that will help lead to effective treatments and prevention efforts for Alzheimer’s disease.
The project has funding from the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Pharmaceutical Research and Manufacturers of America, and several foundations.

ADNI brings together organizations from the public and private sectors, including government agencies, corporations, consumer groups, and other stakeholders, to work collaboratively to determine the right tools to understand the efficacy and effectiveness of drugs for Alzheimer’s disease. Participants in the initiative collaborate via an infrastructure that, while complex, enables cross-sector communication and work and has produced promising initial results. For example, both complex clinical data and intricate brain imaging data are now readily accessible using the Web, in close to real time. Anyone who is interested in developing ways to look at complex data can mine these data, and this approach has begun to return remarkable findings. Underlying the success of this partnership is how
0 LEARNING WHAT WORKS it addresses the important issue of data transparency. Through different portals, the data are available both to researchers and, with unprecedented access, to the general public. Given that researchers can manage this complexity of data with existing tools in the realm of Alzheimer’s disease, these results imply that similar applications are likely for other sets of data. An important lesson from this work is that real data can be made accessible, in real time, in the public domain, and yield useful results. More broadly, the success of this project underscores and justifies the benefits of a consortium approach, particularly when the scientific methodologies employed are adequately rigorous and the questions are sufficiently important. Public–Private Partnerships and Comparative Effectiveness Infrastructure Development As a national infrastructure for CER is being developed, leadership will be needed from the federal government to develop the incentive structures, through legislation and regulation, that are important to advance issues related to data standards and data sharing; however, despite the important “pull” provided by legislation, there are opportunities for immediate work that do not require legislation. Some possible focus areas include • engagement of industry leadership (e.g., identifying and encourag- ing industry champions; fostering collaboration of industry, NIH, and academic leadership around common issues and concerns), • making the case for broad stakeholder participation around key questions and issues, • developing national research priorities, and • establishing methods and collaborative agreements for data collec- tion and use. REFERENCES AAHC (Association of Academic Health Centers). 2008a. The academic health center: Evolv- ing organizational models. http://www.aahcdc.org/policy/reddot/AAHC_Evolving_ Organizational_Models.pdf (accessed September 13, 2008). ———. 2008b. 
HIPAA creating barriers to research and discovery. http://www.aahcdc.org/ policy/reddot/AAHC_HIPAA_Creating_Barriers.pdf (accessed September 15, 2008). ———. 2008c. Out of order, out of time: The state of the nation’s health workforce. Washing- ton, DC: AAHC. http://www.aahcdc.org/policy/AAHC_OutofTime_4WEB.pdf (accessed September 5, 2008). AAP/ACP/AOA (American Academy of Pediatrics and American College of Physicians and American Osteopathic Association). 2007. Joint principles of the patient-centered medi- cal home. http://www.medicalhomeinfo.org/Joint%20Statement.pdf (accessed August 5, 2010).
ACC (American College of Cardiology). 2008. Clinical statements/guidelines. http://www.acc.org/qualityandscience/clinical/statements.html (accessed September 15, 2008).
ACCF (American College of Cardiology Foundation). 2008. NCDR: National Cardiovascular Data Registry. http://www.ncdr.com/WebNCDR/ANNOUNCEMENTS.ASPX (accessed September 15, 2008).
AHA (American Hospital Association). 2007. Continued progress: Hospital use of information technology. http://www.aha.org/aha/content/2007/pdf/070227-continuedprogress.pdf (accessed September 10, 2008).
AHRQ (Agency for Healthcare Research and Quality). 2009. Horizon scan: To what extent do changes in third-party payment affect clinical trials and the evidence base? http://www.cms.hhs.gov/determinationprocess/downloads/id67aTA.pdf (accessed September 20, 2008).
Arvantes, T. 2007. North Carolina seeks expansion of primary care program. http://www.aafp.org/online/en/home/publications/news-now/government-medicine/2 (accessed September 15, 2008).
Bach, P. B. 2007. Coverage with evidence development. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 39-45.
Bawa, K. S., G. Balachander, and P. Raven. 2008. A case for new institutions. Science 319(5860):136.
Bodenheimer, T. 2008. Coordinating care—A perilous journey through the health care system. New England Journal of Medicine 358(10):1064-1071.
Bravata, D. M., A. L. Gienger, K. M. McDonald, V. Sundaram, M. V. Perez, R. Varghese, J. R. Kapoor, R. Ardehali, D. K. Owens, and M. A. Hlatky. 2007. Systematic review: The comparative effectiveness of percutaneous coronary interventions and coronary artery bypass graft surgery. Annals of Internal Medicine 147(10):703-716.
Califf, R. M., R. A. Harrington, L. K. Madre, E. D. Peterson, D. Roth, and K. A. Schulman. 2007. Curbing the cardiovascular disease epidemic: Aligning industry, government, payers, and academics. Health Affairs 26(1):62-74.
Coleman, E. A. 2006. The care transitions intervention: Results of a randomized controlled trial. Archives of Internal Medicine 166:1822-1828.
The Commonwealth Fund. 2009. Commission on a high-performance health system. http://www.commonwealthfund.org/About-Us/Commission-on-a-High-Performance-Health-System.aspx (accessed August 5, 2010).
Connecting for Health. 2006. The Connecting for Health common framework. http://www.connectingforhealth.org (accessed February 23, 2008).
Crosson, F. J. 2005. The delivery system matters. Health Affairs 24(6):1543-1548.
CTSA (Clinical and Translational Science Awards). 2008. Clinical and Translational Science Awards. http://www.ctsaweb.org/ (accessed September 13, 2008).
Daemen, J., E. Boersma, M. Flather, J. Booth, R. Stables, A. Rodriguez, G. Rodriguez-Granillo, W. A. Hueb, P. A. Lemos, and P. W. Serruys. 2008. Long-term safety and efficacy of percutaneous coronary intervention with stenting and coronary artery bypass surgery for multivessel coronary artery disease: A meta-analysis with 5-year patient-level data from the ARTS, ERACI-II, MASS-II, and SOS trials. Circulation 118(11):1146-1154.
Davis, K., and S. Schoenbaum. 2007. Medical homes could improve care for all. http://www.commonwealthfund.org/Content/From-the-President/2007/Medical-Homes-Could-Improve-Care-for-All.aspx (accessed August 5, 2010).
de Brantes, F., and A. Rastogi. 2008. Evidence-informed case rates: Paying for safer, more reliable care. New York: The Commonwealth Fund.
Dilts, D. M., A. Sandler, S. Cheng, J. Crites, L. Ferranti, A. Wu, R. Gray, J. MacDonald, D. Marinucci, and R. Comis. 2008. Development of clinical trials in a cooperative group setting: The Eastern Cooperative Oncology Group. Clinical Cancer Research 14(11):3427-3433.
Dougherty, D., and P. H. Conway. 2008. The "3T’s" road map to transform U.S. health care: The "how" of high-quality care. Journal of the American Medical Association 299(19):2319-2321.
Douglas, P. S., B. Khandheria, R. F. Stainback, N. J. Weissman, E. D. Peterson, R. C. Hendel, M. Blaivas, R. D. Des Prez, L. D. Gillam, T. Golash, L. F. Hiratzka, W. G. Kussmaul, A. J. Labovitz, J. Lindenfeld, F. A. Masoudi, P. H. Mayo, D. Porembka, J. A. Spertus, L. S. Wann, S. E. Wiegers, R. G. Brindis, M. R. Patel, M. J. Wolk, and J. M. Allen. 2008. ACCF/ASE/ACEP/AHA/ASNC/SCAI/SCCT/SCMR 2008 appropriateness criteria for stress echocardiography: A report of the American College of Cardiology Foundation Appropriateness Criteria Task Force, American Society of Echocardiography, American College of Emergency Physicians, American Heart Association, American Society of Nuclear Cardiology, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, and Society for Cardiovascular Magnetic Resonance endorsed by the Heart Rhythm Society and the Society of Critical Care Medicine. Journal of the American College of Cardiology 51(11):1127-1147.
Draycott, T., T. Sibanda, L. Owen, V. Akande, C. Winter, S. Reading, and A. Whitelaw. 2006. Does training in obstetric emergencies improve neonatal outcome? BJOG: An International Journal of Obstetrics and Gynaecology 113(2):177-182.
FDA (Food and Drug Administration). 2004. Innovation/stagnation: Challenge and opportunity on the critical path to new medical products. Washington, DC: Department of Health and Human Services.
———. 2006. Critical path opportunities list. Washington, DC: Department of Health and Human Services.
Ferguson, T. B., Jr. 2008. On the evaluation of intervention outcome risks for patients with ischemic heart disease. Circulation 117(3):333-335.
Ferguson, T. B., Jr., B. G. Hammill, E. D. Peterson, E. R. DeLong, and F. L. Grover. 2002. A decade of change—Risk profiles and outcomes for isolated coronary artery bypass grafting procedures, 1990-1999: A report from the STS National Database Committee and the Duke Clinical Research Institute. Society of Thoracic Surgeons. Annals of Thoracic Surgery 73(2):480-489; discussion, 489-490.
Ferguson, T. B., Jr., E. D. Peterson, L. P. Coombs, M. C. Eiken, M. L. Carey, F. L. Grover, and E. R. DeLong. 2003. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. Journal of the American Medical Association 290(1):49-56.
Frisse, M. E. 2006. Comments on return on investment (ROI) as it applies to clinical systems. Journal of the American Medical Informatics Association 13(3):365-367.
Frisse, M. E., J. K. King, W. B. Rice, L. Tang, J. P. Porter, T. A. Coffman, M. Assink, K. Yang, M. Wesley, R. L. Holmes, C. Gadd, K. B. Johnson, and V. Y. Estrin. 2008. A regional health information exchange: Architecture and implementation. American Medical Informatics Association Annual Symposium Proceedings 212-216.
Giugliano, R. P., and E. Braunwald. 2007. The year in non-ST-segment elevation acute coronary syndrome. Journal of the American College of Cardiology 50(14):1386-1395.
Goroll, A. H., R. A. Berenson, S. C. Schoenbaum, and L. B. Gardner. 2007. Fundamental reform of payment for adult primary care: Comprehensive payment for comprehensive care. Journal of General Internal Medicine 22(3):410-415.
Grover, F. L. 2008. The bright future of cardiothoracic surgery in the era of changing health care delivery: An update. Annals of Thoracic Surgery 85(1):8-24.
Hall, B. L., M. Hirbe, B. Waterman, S. Boslaugh, and W. C. Dunagan. 2007. Comparison of mortality risk adjustment using a clinical data algorithm (American College of Surgeons National Surgical Quality Improvement Program) and an administrative data algorithm (Solucient) at the case level within a single institution. Journal of the American College of Surgeons 205(6):767-777.
HHMI (Howard Hughes Medical Institute). 2009. Med into Grad Initiative: Integrating Medical Knowledge into Graduate Education. http://www.hhmi.org/grants/institutions/medintograd.html (accessed September 13, 2008).
Hlatky, M. A., D. B. Boothroyd, K. A. Melsop, M. M. Brooks, D. B. Mark, B. Pitt, G. S. Reeder, W. J. Rogers, T. J. Ryan, P. L. Whitlow, and R. D. Wiens. 2004. Medical costs and quality of life 10 to 12 years after randomization to angioplasty or bypass surgery for multivessel coronary artery disease. Circulation 110(14):1960-1966.
IOM (Institute of Medicine). 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
———. 2006. Performance measurement: Accelerating improvement. Washington, DC: The National Academies Press.
———. 2007. The learning healthcare system: Workshop summary. Washington, DC: The National Academies Press.
———. 2008. Learning healthcare system concepts v.200: Annual report of the Roundtable on Evidence-Based Medicine. Washington, DC: The National Academies Press.
James, B. 2007. Feedback loops to expedite study timeliness and relevance. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 152-162.
Johnson, K. B., C. S. Gadd, D. Aronsky, K. Yang, L. Tang, V. Estrin, J. K. King, and M. Frisse. 2008. The Midsouth eHealth Alliance: Use and impact in the first year. American Medical Informatics Association Annual Symposium Proceedings 333-337.
Jones, L., and K. Wells. 2007. Strategies for academic and clinician engagement in community-participatory partnered research. Journal of the American Medical Association 297(4):407-410.
King, J. S., and B. W. Moulton. 2006. Rethinking informed consent: The case for shared medical decision-making. American Journal of Law and Medicine 32:429-501.
Krumholz, H. M., Y. Wang, J. A. Mattera, L. F. Han, M. J. Ingber, S. Roman, and S. L. Normand. 2006. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation 113(13):1683-1692.
Landon, B. E., L. S. Hicks, A. J. O’Malley, T. A. Lieu, T. Keegan, B. J. McNeil, and E. Guadagnoli. 2007. Improving the management of chronic disease at community health centers. New England Journal of Medicine 356(9):921-934.
Malenka, D. J., B. J. Leavitt, M. J. Hearne, J. F. Robb, Y. R. Baribeau, T. J. Ryan, R. E. Helm, M. A. Kellett, H. L. Dauerman, L. J. Dacey, M. T. Silver, P. N. VerLee, P. W. Weldner, B. D. Hettleman, E. M. Olmstead, W. D. Piper, and G. T. O’Connor. 2005. Comparing long-term survival of patients with multivessel coronary disease after CABG or PCI: Analysis of BARI-like patients in northern New England. Circulation 112(9 Suppl.):I371-I376.
Marmot, M., and R. G. Wilkinson. 2006. Social determinants of health, 2nd ed. Oxford, UK: Oxford University Press.
McCannon, C. J. 2007. Miles to go: An introduction to the 5 million lives campaign. Joint Commission Journal on Quality and Patient Safety 33(8):447-484.
McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635-2645.
MedPAC (Medicare Payment Advisory Commission). 2008. Report to the Congress: Reforming the delivery system. http://www.medpac.gov/documents/Jun08_EntireReport.pdf (accessed September 14, 2008).
Meier, B. 2008. A call for a warning system on artificial joints. The New York Times, July 29. http://www.nytimes.com/2008/07/29/health/29iht-29hip.14863139.html (accessed September 15, 2008).
Miller, R. 2008. The information networks required: Information technology requirements. Presented at the Institute of Medicine workshop Learning What Works: Infrastructure Required for Comparative Effectiveness Research, Washington, DC, July 30, 2008.
National Health Policy Forum. 2008. Health information technology adoption among health centers: A digital divide in the making. http://209.85.173.104/search?q=cache:obn8UTmh9Y4J:www.nhpf.org/pdfs_bp/BPHealth (accessed September 13, 2008).
Nesbitt, T. S., S. L. Cole, L. Pellegrino, and P. Keast. 2006. Rural outreach in home telehealth: Assessing challenges and reviewing successes. Telemedicine Journal and eHealth 12(2):107-113.
NRC (National Research Council). 2009. Computational technology for effective health care: Immediate steps and strategic directions. Washington, DC: The National Academies Press.
Paulus, R. A., K. Davis, and G. D. Steele. 2008. Continuous innovation in health care: Implications of the Geisinger experience. Health Affairs 27(5):1235-1245.
Porter, J. P., J. Starmer, J. King, and M. E. Frisse. 2007. Mapping laboratory test codes to LOINC for a regional health information exchange. American Medical Informatics Association Annual Symposium Proceedings 1081.
Porter, M. E., and E. O. Teisberg. 2006. Redefining health care: Creating value-based competition on results. Boston, MA: Harvard Business School Press.
———. 2007. How physicians can change the future of health care. Journal of the American Medical Association 297(10):1103-1111.
Premier and CMS (Centers for Medicare & Medicaid Services). 2007. Centers for Medicare & Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration project: Project findings from year two. http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/hqi-whitepaper-year2.pdf (accessed September 13, 2008).
Rittenhouse, D. R., L. P. Casalino, R. R. Gillies, S. M. Shortell, and B. Lau. 2008. Measuring the medical home infrastructure in large medical groups. Health Affairs 27(5):1246-1258.
Roden, D. M., J. M. Pulley, M. A. Basford, G. R. Bernard, E. W. Clayton, J. R. Balser, and D. R. Masys. 2008. Development of a large-scale de-identified DNA biobank to enable personalized medicine. Clinical Pharmacology & Therapeutics 84(3):362-369.
Rubin, R. 2008. CDC launches campaign to make the USA a healthier nation. USA Today 4D.
Schoen, C., K. Davis, S. K. How, and S. C. Schoenbaum. 2006. U.S. health system performance: A national scorecard. Health Affairs 25(6):w457-w475.
Singh, M., B. J. Gersh, S. Li, J. S. Rumsfeld, J. A. Spertus, S. M. O’Brien, R. M. Suri, and E. D. Peterson. 2008. Mayo Clinic risk score for percutaneous coronary intervention predicts in-hospital mortality in patients undergoing coronary artery bypass graft surgery. Circulation 117(3):356-362.
Smith, P. K., R. M. Califf, R. H. Tuttle, L. K. Shaw, K. L. Lee, E. R. DeLong, R. E. Lilly, M. H. Sketch, Jr., E. D. Peterson, and R. H. Jones. 2006. Selection of surgical or percutaneous coronary intervention provides differential longevity benefit. Annals of Thoracic Surgery 82(4):1420-1428; discussion, 1428-1429.
Stead, W. 2006. Providers and EHR as a learning tool. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 268-275.
Stead, W., and J. M. Starmer. 2007. Beyond expert-based practices. In Evidence-based medicine and the changing nature of health care, edited by M. B. McClellan, J. M. McGinnis, E. G. Nabel, and L. M. Olsen. Washington, DC: The National Academies Press. Pp. 95-105.
Taggart, D. P. 2006. Thomas B. Ferguson lecture: Coronary artery bypass grafting is still the best treatment for multivessel and left main disease, but patients need to know. Annals of Thoracic Surgery 82(6):1966-1975.
Tarlov, A. R., and R. F. St. Peter. 2000. The society and population health reader: A state and community perspective (Vol. 2). New York: New Press.
Tung, R., S. Kaul, G. A. Diamond, and P. K. Shah. 2006. Narrative review: Drug-eluting stents for the management of restenosis: A critical appraisal of the evidence. Annals of Internal Medicine 144(12):913-919.
Wartman, S. A. 2008. Toward a virtuous cycle: The changing face of academic health centers. Academic Medicine 83(9):797-799.
Weinstein, J. N., K. Clay, and T. S. Morgan. 2007. Informed patient choice: Patient-centered valuing of surgical risks and benefits. Health Affairs 26(3):726-730.
Wennberg, J. E., E. S. Fisher, and J. S. Skinner. 2002. Geography and the debate over Medicare reform. Health Affairs (Suppl. Web Exclusive):w96-w114.
Wennberg, J. E., E. S. Fisher, J. S. Skinner, and K. K. Bronner. 2007. Extending the P4P agenda, part 2: How Medicare can reduce waste and improve the care of the chronically ill. Health Affairs 26(6):1575-1585.
WHO (World Health Organization). 2008. Social determinants of health. http://www.who.int/social_determinants/en/ (accessed September 13, 2008).
Wilper, A. P., S. Woolhandler, K. E. Lasser, D. McCormick, D. H. Bor, and D. U. Himmelstein. 2008. A national study of chronic disease prevalence and access to care in uninsured U.S. adults. Annals of Internal Medicine 149(3):170-176.
Woolf, S. H. 2008. The power of prevention and what it requires. Journal of the American Medical Association 299(20):2437-2439.