3 Circumstances Accelerating the Need

INTRODUCTION

The current pace of development in science and technology is unprecedented. In health care, these innovations have produced a variety of new therapies to treat everything from heart disease to joint injury, and product development is increasingly informed by advances in areas such as genetics that are providing a wealth of new information on how genes influence disease. These developments have the potential to dramatically improve care and treatment options for many patients, but they also introduce a level of complexity and cost in medical interventions that will impact the healthcare system. Healthcare providers are increasingly pressed to find new, reliable, and rapid ways to evaluate the effectiveness of new medical technologies.

To illustrate the challenges posed by emerging technologies to the current established evidence paradigm, Molly J. Coye explores the substantial diversity and complexity of new medical products. Biopharmaceuticals, information technology (IT), and hybrid devices are often not suited to the evaluation approaches geared toward more traditional technologies such as pharmaceuticals, devices, and imaging modalities. In addition, transformative technologies, which are often information-based and have the potential to improve healthcare processes and delivery, are often not recognized or supported, resulting in waste and inefficiency as well as missed opportunities to significantly transform medical care. Acutely needed are new approaches to the generation of evidence regarding the benefits, cost-effectiveness, and appropriate application of new technologies.
The need for more and better evidence is echoed in David M. Altshuler's paper on the implications for healthcare providers of the increasing amount of new knowledge about human genetics. Past studies of genetic contributions to diseases focused on rare Mendelian diseases in which identified genes were highly predictive of disease development. However, in recent years, hundreds of genes have been identified as "risk factors" for more common diseases such as diabetes and heart disease. These findings offer important insights into pathophysiology but are currently of limited clinical utility. In addition, very little is known about how people interpret and use information about genetic risks. While there is enormous potential for personalized medicine, there are also great risks for unintended consequences. As genetic tests are developed and made available for clinical use, better evidence is needed on how their application impacts the healthcare system.

NEW HEALTHCARE PRODUCT INTRODUCTION
Molly J. Coye, HealthTech

Over the past three decades, the central preoccupation of health technology policy has been to cope with the rising tide of new pharmaceutical and imaging products introduced each year (McGivney and Hendee, 1990; Willems and Banta, 1982). The impact of these new technologies on cost and total expenditures has been an important and continuing challenge to national goals of cost containment and to the efficient use of health resources (Eisenberg and Zarin, 2002). In the last decade, a new understanding of the role and value of medical technology has evolved. Influential studies by Cutler, McClellan, and others have defined the benefits of medical technologies (Cutler and McClellan, 2001) and urged consideration of how best to encourage innovation.
Somewhat surprisingly, however, the actual number of new products presented for regulatory approval has not increased significantly over the past decade (Figures 3-1 and 3-2).

The principal challenge for health technology policy is not the quantity of new products presented for assessment. Instead, the complexity and diversity of emerging technologies, including new biologics and hybrid pharmaceutical and device products, is increasingly straining the ability of developers and researchers to generate evidence adequate for regulatory and coverage decisions. National efforts to improve and accelerate the evaluation of new technologies by federal agencies may well be overwhelmed without the restructuring and investments in technology assessment proposed recently by Wilensky, Reinhardt, and others (Reinhardt, 2001; Wilensky, 2006).

[FIGURE 3-1 Number of FDA approvals per year, 2000-2006: medical device approvals, biological device approvals, drug approvals (NMEs and BLAs), and combined FDA approvals. SOURCE: U.S. Food and Drug Administration, Center for Biologics Evaluation and Research. Data derived from Owens, 2006.]

[FIGURE 3-2 Health IT expenditure versus healthcare venture capital investment, 2000-2006. SOURCE: National Science Foundation, Division of Science Resources Statistics. Data from http://www.altassets.com/casefor/sectors/2005/nz6545.php, and PwC/NVCA MoneyTree™ Report, Thomson Financial, http://dorenfest.com/doc/pressrelease_Feb2004.pdf.]

The first part of this review considers patterns among emerging technologies that will complicate efforts to develop evidence for policy decisions about their benefits, cost effectiveness, and appropriate application.

In the second part of this review, we consider two consequences of complexity that are less well understood. The first is the waste and inefficiency that prevails because we have not defined critical research targets for the developers of technology before they initiate clinical studies. For all but the largest and most sophisticated firms, the understanding of what needs to be demonstrated in order to win coverage, reimbursement, and even support from payers and providers varies widely. Developers need coordinated public and private efforts that will provide early identification of important research targets, including priority populations, side-by-side comparisons of effectiveness with competing technologies, cost effectiveness, and other aspects; and purchasers and providers, in turn, need the improved information for policy decisions and clinical guidance.

The second consequence of complexity is the public and private failure to recognize and act on the potential of certain technologies to significantly transform medical care. The new concept of transformative technologies is proposed to distinguish technologies that enable a wide range of disruptive and positive changes in clinical care and administrative processes, reducing net expenditures and improving the value of health care. They constitute important opportunities for progress toward national goals of improved quality and efficiency in health care, and, in contrast to biologics and hybrid devices, they present only modest challenges to our capacities for evaluation. As Newhouse has pointed out, health care may be the most inefficient of all sectors in its ability to extract value from resources consumed (Newhouse, 2002).
To meet Newhouse's challenge (to extract the full value of emerging technologies), it will be necessary to understand the barriers to evidence generation, evaluation, and policy formulation particular to transformative technologies.

Despite a widespread impression that the pace of new technology introduction is escalating (Kessler et al., 2004), the most common measure of it, Food and Drug Administration (FDA) approvals of new products, finds the pace steady and relatively even (FDA, 2006) (Figure 3-1). This "steady state" is reflected in the equally modest trends in total venture investments in health products and services (Figure 3-2). On the other hand, the FDA approves medium- and low-risk products each year in far greater numbers than novel products, and off-label extensions of drugs and devices proliferate without systematic evaluation (Gelijns et al., 2005). In the last half-decade, providers have also been contending with the rapid expansion of IT products, many of which do not require FDA approval. Unfortunately, no data are available on the number of new IT products introduced, but expenditures in this area have increased steadily over the past 5 years. In 2004, the Healthcare Financial Management Association
survey report estimated that IT spending would grow by nearly 9 percent per year to reach $30.5 billion in 2006, a pattern that appears to have held (Figure 3-2).

Substantial progress is under way in two broad areas that cut across the overlapping domains of emerging pharmaceuticals, biotechnology, devices, and IT. First, policy makers now agree that comparative effectiveness research is urgently needed to support coverage and reimbursement decisions (IOM, 2007b). Second, requirements for formal post-market surveillance have been developed as Coverage with Evidence Development (Tunis and Pearson, 2006; Tunis et al., 2007).

Diversity, Complexity, and Cost

The challenges for evidence generation and technology evaluation lie in the diversity, complexity, and cost of emerging technologies, rather than in the pace of new product introduction. The requirements for evidence generation and evaluation processes in effect today were largely designed to assess pharmaceuticals, devices, and imaging modalities. As biotech drugs, devices, and IT expand and converge, however, the difficulties these classes of technologies will pose for the established evidence paradigm become clearer (Keenan et al., 2006). These include the shorter development period for devices and IT (often one to three years, rather than ten years or more for chemical or biotech drugs) and the additional sciences necessary to evaluate devices and IT, adding engineering and software to biology and chemistry. The cost of biotech drugs is generally much greater than that of pharmaceuticals because they are often targeted to rare or life-threatening diseases and inspire urgent patient demand, making assessment more urgent but also making it difficult to induce patients to participate in randomized trials.

Pharmaceutical and Biotechnology Drugs

The decline in the introduction of new pharmaceutical drugs over the past decade is well recognized (Burns, 2005).
More important for our consideration here, the processes for research and regulatory approval are relatively well understood. In comparison with biotechnology drugs, hybrid devices, and nanotechnology, the pipeline for traditional pharmaceutical products offers relatively few novel problems.

The field of biotechnology drugs ("biotech drugs"), including genetic diagnostics and therapies, is growing at twice the rate of traditional (chemical) pharmaceuticals (Pauly, 2006), and the cost of the treatments, as well as their efficacy, has made evidence generation increasingly urgent. Recent approvals have permitted market entry of biotech drugs that cost as much as $300,000 per year for the life of the patient (Myozyme) or $100,000 per treatment episode
(Acthar). In 2007, the Biotechnology Industry Organization (BIO) reported that more than 400 biotech drug products and vaccines were in clinical trials (BIO, 2007).

Biologically targeted drugs often carry very high prices because they target rare, severe, or fatal diseases with high physician and patient demand, and because there is no established pathway to generics (the FDA concept of bioequivalence for generics has yet to be developed for biotech drugs; see the discussion of "follow-on biologics" below). This raises the stakes for evidence development, regulatory review, and patent protection significantly, because the off-label uses of biotech drugs often target quite unrelated diseases (Calfee and Dupre, 2006) and may quickly become accepted in practice with limited opportunity for clinical trials. Avastin, as a case in point, was originally approved for treatment of colon cancer. It was subsequently found to be effective in treatment of age-related macular degeneration (AMD), and physicians began to use it off-label for this purpose. The company then developed a new formulation of Avastin, named Lucentis, approved and marketed for AMD at a much higher price. More recently, physicians have extended the use of Avastin to breast and lung cancer, and the company reportedly plans to develop new formulations for each of these diseases as well. As with most other biotech drugs, once physicians believed that Avastin was effective for off-label uses, it became very difficult to enroll patients in randomized trials. All of these factors complicate efforts to generate appropriate evidence on the clinical and cost effectiveness of biotech drugs.

Biomarkers are measured as indicators of normal biological or pathogenic processes, or of pharmacological response to therapeutic interventions (IOM, 2007a). Nucleic acid testing is the most rapidly growing segment of the in vitro diagnostic laboratory business.
Spending for DNA testing exceeded $1 billion in 2001 and is estimated to be growing at 30-35 percent a year (Goldsmith, 2004). Combining knowledge of the human genome with developments in genomics, proteomics, and bioinformatics, research on biomarkers represents an important beachhead for personalized medicine. The number and complexity of biomarkers are expanding rapidly, and a coalition of pharmaceutical companies, the FDA, and other regulatory bodies has been established to study the role of biomarkers in predicting patient susceptibility to adverse drug events.

In March 2007, the Institute of Medicine (IOM) issued a report, Cancer Biomarkers: The Promises and Challenges of Improving Detection and Treatment, recommending a coordinated federal process to identify and evaluate biomarkers. While the IOM strongly endorsed the importance of biomarkers as a means of improving clinical effectiveness and protecting patients from the adverse consequences of unnecessary or inappropriate treatment, the report also described the many important uncertainties surrounding the development of new test systems and new methodologies for their assessment. The report called for new processes for modeling the effects of, evaluating, coding, and pricing biomarkers, and for new approaches to pricing and to conditional coverage that would require more data collection (IOM, 2007a). The intense interest in biomarkers from developers, regulators, and scientists reflects not only their clinical importance, but also the difficulty of designing appropriate research approaches and regulatory criteria (Burns, 2005). Most physicians are unfamiliar with gene expression profiles, of course, and will need more than the statistics on diagnostic accuracy used for regulatory decision making to interpret test results for their patients.

Follow-on biologics are biotech products that are developed as similar versions of already approved protein products (also known as follow-on protein products, biogenerics, or biosimilars). Approval of biosimilars has proceeded in Europe, but developers in the United States strenuously oppose these products because follow-on products may be able to enter the market early, well before the patent period for the original biologic has expired, without risk of patent infringement. While Congress, the Department of Health and Human Services, and the FDA consider the complex scientific and legal issues involved in developing a regulatory framework for follow-on biologics, it is likely that the evidentiary basis for regulatory decisions will be fraught with controversy and legal actions.

Devices

The evidence requirements for most types of medical devices, as for pharmaceuticals, are reasonably well established. The links between initial concepts, early investments, clinical results, regulatory approval, and reimbursement and dissemination are much clearer than for biologics and for the many hybrid technologies that combine devices with drugs, biologics, and IT.
Venture investors play an important role in financing the development of evidence to support emerging devices, and the smaller companies they back have been described by Burns as the "farm teams" feeding larger companies such as Medtronic and Stryker (Burns, 2005; Coye, 2006; Iglehart, 2005). As larger device companies acquire smaller firms, greater investment in evidence generation becomes possible.

Device manufacturers are also following the path cleared by pharmaceutical firms in launching direct-to-consumer advertising, most recently for Stryker's "Jack Nicklaus Hips" and "Triathlon Knees" designed for women. As for pharmaceuticals and biologics, moreover, the overriding need is for assessments of the comparative effectiveness of devices, including cost effectiveness, and for broader and more accountable post-market
surveillance programs. Post-market surveillance will also address the rapid extension of products to populations not covered in initial approvals.

In a few cases the science underlying the development of medical technologies appears to outstrip or even obviate the need for clinical evidence. Proton beam therapy has been approved, and favorable reimbursement assigned, in the absence of clinical data for effectiveness, based instead on the physics of the equipment and expected clinical benefit. For the first time, the FDA based approval of a therapy on the physical properties of the product. Proton therapy facilities can cost up to $200 million to construct and equip and have a "footprint" equivalent to two football fields. Proton beam therapy has been reimbursed at a premium over other forms of radiation therapy since 2001.

A growing proportion of devices are implants, including cardiac, orthopedic, and neurologic devices, and many of these are hybrids of pharmaceuticals, biologics, and IT (chips). Each implant has novel characteristics specific to the placement of the implant, the durability of the materials and mechanical processes, the reliability of electrical and chemical processes, and the clinical effects of the implant. Ample evidence of these complexities can be seen in the Coverage with Evidence Development model developed by the Centers for Medicare and Medicaid Services (CMS) for coverage of implantable cardiac defibrillators (Tunis and Pearson, 2006), or in concerns about the durability of neural stimulation devices implanted in the brain to treat benign essential tremor (approved by the FDA in 1997) and Parkinson's disease (2002).

Information and Communication Technologies

After many years of slow development and market penetration, investments in information and communication technologies and provider adoption are finally accelerating.
Federal leadership has played a unique role in driving adoption since the 2004 presidential executive order that declared a national goal of a paperless health system within a decade (Bush, 2004). Consumer demand for access to personal health information has even fueled novel direct-to-consumer advertising of personal health records by health plans and technology vendors such as Microsoft. Spending on IT, including business processing and IT services, is forecast to reach 23 percent of hospital budgets by 2011 (Modern Healthcare, no date).

At the same time, relatively few studies have evaluated the effects of investments in information and communication technologies on healthcare quality, patient experience, and resource utilization. Because information and communication technologies are largely unregulated, there is little reason for the developers of these technologies to conduct formal studies. What research is conducted tends to consist of small, uncontrolled time
series: evidence useful for marketing but inadequate for policy decisions. Even in the case of telemedicine (a highly effective series of innovations and set of technologies that support provider-to-provider systems of remote consultation and medical management), most studies have been small and, although favorable as a whole, have not driven significant adoption over the past two decades. This research has also been inadequate to the task of influencing public policy; despite strong evidence that telemedicine could play a critical role in solving national problems of access to care and the scarcity of health professionals, including beneficial effects on quality, patient and provider experiences, and cost, it was not until 2000 that federal and state policies began to invest in the necessary infrastructure, provide reimbursement, and lift barriers to the deployment of telemedicine (Gutierrez, 2001).

IT is not a product in the same sense that a pill or a device is a product. Instead, it enables processes. By collecting, storing, retrieving, and displaying data, IT enables a wide range of important clinical and support or administrative processes. In many cases, therefore, the development of evidence regarding the impact of IT on clinical care, cost, patient and provider satisfaction, and other endpoints must measure not just whether a particular IT application is employed, but how the process has been adopted and adapted. Extraneous issues, such as physician or nurse resistance, the degree of fragmentation of community providers, or the existence of other systems such as hospitalists, may influence results as much as the characteristics of the technology itself. Although these challenges are daunting, the need for evidence development is urgent because of the potential for systemic improvements in quality and cost (Middleton, 2005).
Wireless technologies are an excellent example of the challenges inherent in the evaluation of IT systems that are designed for clinical or for administrative purposes but result in improvements for both. Over the next half-decade, for example, wireless technologies will increasingly support critical clinical care processes and administrative tasks. Radio-frequency identification (RFID) was initially marketed for tracking equipment in hospitals and is now being extended to clinical targets such as medication management, the tracking of surgical sponges, and, in combination with computerized white-board systems, patient flow through emergency departments. In research designed to improve operating room productivity, RFID has also reduced the time required for surgeries and increased the proportion of staff time available for direct patient care in surgery and recovery. Other wireless communication devices enable improved response times and enhanced coordination of care. Most promisingly, new developments suggest that RFID in combination with electronic medical records can provide real-time assessments of compliance with clinical protocols and generate prompts to guide clinical practice. RFID is even useful in monitoring elderly patients
with cognitive disorders and will be marketed to caregivers for home use as well as to institutions.

Yet published, peer-reviewed evaluations of the impact of RFID-enabled systems are almost nonexistent. The companies that develop and sell RFID systems (and many other emerging technologies) are generally small, with limited resources, and the slow penetration of these systems means that revenue expectations do not justify investment in larger studies. As Garber has pointed out, the expense of high-quality clinical trials is an "insuperable barrier" for many of these small companies, deterring them from entering the market despite superior products and creating a growing domination by large firms (Garber, 2006). Because more comprehensive studies are not available, furthermore, it is difficult to convince health plans and providers that the results will justify the costs and the challenges of managing disruptive change. From a health policy perspective this is a regrettable and self-reinforcing pattern of lost opportunities to improve the safety, quality, and timeliness of care, as well as patient satisfaction and efficiency.

As RFID applications are developed for clinical processes such as medication compliance, biosensor monitoring, and behavioral tracking, the complexity of assessments in a field of rapidly evolving and multiple functionalities will challenge policy makers. This is exacerbated by the continued reengineering and improvement of products, particularly of IT and devices, while clinical trials are proceeding. When a product does not "hold still," evidence regarding the effect of an earlier version of the product is of limited utility.

Transformative Technologies

RFID is only one of a growing number of products that enable, and depend for their effect upon, the redesign of important clinical and support processes.
Some developers of emerging technologies, such as those that combine IT and devices for the monitoring and coaching of chronic disease patients, assert that their products should not be described as "technologies" at all, but rather as "services," because they require the ongoing participation of vendor-deployed or customer staff in reorganized care processes in order to achieve optimal impact. It is easy to understand how such technology-enabled services differ from traditional pharmaceuticals or devices, which are expected to perform in a defined manner, independent of ongoing provider care processes.

This also explains why the results of such technology-enabled services often vary quite dramatically, even when the technology itself is consistently installed across a number of provider sites. The variation results from differences in the capabilities of the organizations using the technologies to restructure
their care processes and implement the necessary changes, which may include new roles for professionals and other workers, remotely operated systems, decision support, changes in care settings, and many other aspects of care.

Telemedicine, RFID, remote monitoring of chronic disease, the tele-ICU (remote monitoring of acute care), pharmacogenomics, hemofiltration for congestive heart failure, and remote video interpretation are just a few of the emerging transformative technologies that, if fostered, promise to advance the quality and efficiency of health care in America. While these technologies disrupt care processes and business models, they are not "disruptive" as Christensen has used this term (Christensen, 2006), because they may be more expensive than the work processes or products they replace (while reducing net expenditures on health care) and are higher performing, because they frequently improve patient experience and clinical outcomes as well as economic performance.

Remote in-home monitoring of patients with chronic disease has yielded remarkable improvements in measures of self-management, in patient satisfaction, and in decreases in emergency services and hospitalizations. In studies of these devices used for uninsured, high-risk people with diabetes, for asthma in high-risk pediatric patients, and for adults with chronic diseases including hypertension, heart failure, chronic obstructive pulmonary disease, and diabetes, emergency room visits and hospitalization rates have dropped by one-third to one-half. In a recent report, Partners HealthCare found cost reductions of 25 percent.
The New England Healthcare Institute estimates that utilization of remote monitoring by just 25 percent of appropriate patients with congestive heart failure would result in national savings of $500 million, and the Veterans Health Administration has deployed remote monitoring for more than 25,000 patients with multiple chronic diseases and is continuing to expand the program (Pare et al., 2007).

Remote monitoring of chronic disease patients fundamentally enables providers to redesign care processes in order to comply more closely with evidence-based medicine. Like the "e-ICU" (tele-ICU) system for remote management of intensive care units (ICUs), RFID tracking of care, and many other transformative technologies, its successful use depends greatly on organizational capacities for change management, clinician support, and reimbursement systems that can protect providers and provider organizations against financial losses.

[Footnote: Christensen uses "disruptive" for technologies that are newer, less expensive, and lower performing than established products. The disruptive technology slowly encroaches upon and eventually replaces the established vendor through product enhancements. In contrast, transformative technologies are new, result in lower net health expenditures per episode or patient-year (although the new technology itself may be more expensive than established products), and are higher performing; that is, they produce similar or superior quality (including the possibility of greater convenience or satisfaction for patients) while reducing net expenditures. A similar revision of the concept of disruptive technology was introduced in Nanotechnology Now in an ongoing discussion of the concept on Internet magazines and blogs: "Any new technology that is significantly cheaper than current, and/or is much higher performing, and/or has greater functionality, and/or is more convenient to use." See http://www.nanotech-now.com/disruptive-technology.htm.]

For pharmaceuticals, biotechnology, and devices, the FDA and CMS have markedly improved the clarity of evidence requirements and thresholds for approval and coverage. In the case of most transformative technologies, however, there is no concomitant delineation of research targets and criteria for public and private coverage and reimbursement. Many companies developing transformative technologies are small, and the limited resources they can devote to trials of their products are frequently wasted on studies that provide convincing evidence of quality and cost improvements but fail to address key issues of concern to employers, health plans, hospitals, or physicians. There is a critical need, therefore, for new processes that will link purchasers, payers, clinicians, and the developers of technology in efforts to define research targets and to pursue an iterative agenda of evidence generation, product and process improvement, and coverage and reimbursement.

In defining research targets, such a collaborative process will also stimulate even greater innovation to improve the performance of technologies against those targets, spurring the proliferation of products within each category and the rate at which each individual technology evolves. Of course, technologies within a single category often vary enough to make studies of one product only suggestive of the effect that others in the same category might have.
Green has an interesting perspective on this, based on evolutionary theory: "Evolution passes, at times, through innovative cycles of progress—when diversification of design leads to perfections of form—with the concomitant production of many unsuccessful models." In the evolution of device designs for total knee replacements that began in the 1960s, for example, "some of these implants are, by modern standards, bizarre-looking." Yet, "not surprisingly, all total knee replacement implants now resemble the normal knee and consequently are difficult to distinguish from each other" (Green, 2001).

This has useful implications for research and regulatory policy, particularly with regard to the evaluation of transformative technologies. Our current processes of research, regulatory approval, and coverage and reimbursement decisions are highly linear. With an evolutionary approach to evidence generation and assessment, these processes would be supplemented in certain cases by iterative, coordinated, and proactive efforts to evaluate and foster technologies that will advance national goals of quality, efficiency, and improved patient experience.

An early demonstration of this approach has been piloted as the Fast
CIRCUMSTANCES ACCELERATING THE NEED 83 Adoption of Significant Technologies (FAST) by the New England Health- care Institute, in collaboration with the Health Technology Center and the Massachusetts Technology Collaborative (New England Healthcare Institute, 2007). FAST was established to create and test methods by which healthcare payers and providers can actively accelerate the adoption of advanced technologies that lower costs and improve quality. Stakeholders, including federal and private policy makers, purchasers, payers, and us- ers, have been convened to investigate the potential of classes of emerging technologies to enable major improvements in the quality and efficiency of health care. Because technologies and services typically evolve rapidly in early stages, the initiative focuses on classes of technology rather than single products, seeking technologies that address a substantial patient population; significantly improve patient outcomes; reduce the overall costs of care; manifest low market penetration for high-value uses; and are con- strained by barriers to broader dissemination that can be addressed by the stakeholder group. In the first two rounds of screening for candidate technologies, more than 30 technologies were reviewed; five have been selected for further analysis to determine whether the evidence regarding clinical benefit and cost reduction is sufficient to cause the stakeholders to engage in efforts to reduce barriers to its adoption. In the case of one such technology, the tele- ICU, stakeholder advocacy has resulted in a statewide initiative to imple- ment e-ICU networks across Massachusetts. 
In another, the New England Healthcare Institute has joined with the Partners HealthCare Telemedicine Department and public and private payers in a demonstration of home-based remote patient management to provide additional quality and cost outcomes data and to illustrate how to effectively structure reimbursement to providers who offer the service. As progressively more rigorous tests of these evolving technologies demonstrate their contributions to net savings, as well as quality enhancements, the strategies to support coverage and reimbursement can become more focused and intensive.

Stakeholders should expect multiple products to emerge in each category of transformative technologies and individual products to evolve rapidly as research suggests opportunities for redesign. As in earlier proposals for post-market surveillance, research will have to be iterative in order to test the potential of emerging technologies across a variety of care settings and applications. Active involvement by stakeholders, their clear definition of targets that correspond to value, and their commitment to support the coverage and reimbursement of technologies that reach target thresholds will create a market for those technologies and stimulate further investment in research and development.

Transformative technologies have often been "orphan technologies," because they are disruptive, developed by small companies with limited resources for research and marketing, and ignored by purchasers. In the case of orphan pharmaceuticals, the public realized that technology evaluations designed to protect them from harm did not reliably advance valuable technologies through testing and deployment, largely because of weak business incentives for investment in research and development. Few transformative technologies rise to the level of urgency felt by individual patients who believe that they are prevented from accessing lifesaving treatments, despite the fact that some transformative technologies actually do save lives (tele-ICU reductions in mortality rates) and lessen the burden of illness (reductions in hospitalization and improvements in functionality and satisfaction from remote monitoring of chronic disease).

It falls to purchasers, payers, providers, and policy makers, then, to craft new approaches to evaluate and simultaneously foster the development of transformative technologies. The critical role that technologies will play in enabling the delivery of evidence-based medicine was anticipated by the IOM in Crossing the Quality Chasm: "Carefully designed, evidence-based care processes, supported by automated clinical information and decision support systems, offer the greatest promise of achieving the best outcomes from care for chronic conditions" (IOM, 2001).

In the decade ahead, the most notable contributions to improvements in healthcare quality and efficiency will be these: innovations in IT, or combinations of IT with portable or implanted devices such as sensors or drug-dispensing systems. Pharmacogenomics and biomarkers will be the leading exception to this pattern, and they presage the emergence of biotechnology as the dominant source of enhanced value in health care in the next decade.
In both cases, the challenge remains: to generate the evidence needed for critical evaluations of emerging technologies without failing to identify and foster technologies that may be significant sources of benefit.

RAPIDLY DEVELOPING INSIGHTS INTO GENETIC VARIATION

David M. Altshuler, Broad Institute, Massachusetts Institute of Technology

The underlying biological processes responsible for individual risk of disease remain poorly understood. It was recognized early in the 20th century that inheritance contributes substantially to the risk of diseases in the population. In the 1980s, tools and methods made it possible to identify genes for rare disorders caused by mutation of a single gene. These studies taught us that an unbiased genetic approach most often identifies causal processes that were unexpected based on other methods of investigation. Understanding the genetic basis of common diseases remained intractable, however, until recent approaches from genomics and genetics made possible the systematic study of genetic variation across large patient samples. In
2007 alone, more than 75 genetic risk factors for common diseases were found, tripling the number known going into that year.

Progress in identifying the genetic causes of common diseases holds great promise for catalyzing a new understanding of pathophysiology: the information is unbiased with regard to prior hypotheses, is based on human rather than model systems, and reflects a causal rather than a reactive process (genotypes are assigned during the randomization event of meiosis and are unchanged by lifestyle and disease). In contrast, genetic testing for such variants is of uncertain value for the individual patient and the healthcare system, because in vivo biological relevance does not necessarily imply clinical utility.

The role of genetics in common diseases is going to be quite different from the traditional experience with Mendelian diseases such as Huntington's disease. In the case of single-gene disorders, the mainstay of genetic medicine for many years, prediction is powerful. Mendelian diseases were selected for study precisely because mutation of a single gene was necessary and sufficient to cause the disease. They were studied not for their genetic similarity to common diseases (it has long been clear that most diseases are polygenic and environmentally mediated), but because they were of great importance to the rare families that carried them and were amenable to research. Thousands of Mendelian discoveries were made before general progress in common diseases, leading physicians and the public to attribute great explanatory power to genetic information. Moreover, there is a sense in the population that genetic causes of disease are hard to modify with lifestyle or treatment. This, too, is circular reasoning: the diseases were targeted for study precisely because they are Mendelian, that is, because modifying genes, environment, and behavior has at most modest ability to alter the outcome.
These two factors have led to widespread overestimation of both the power of genetic prediction and its intractability to therapy.

The Role of Genetics in Common Diseases

In contrast, common diseases are not solely determined by genes, but are modifiable by other factors. Moreover, it has long been clear that the inherited contribution to disease is itself divided across multiple genes. Initially, researchers used the same methods so successfully applied to Mendelian diseases, hoping that only a few genes would explain a sizable percentage of the risk for diseases such as diabetes, schizophrenia, and heart attack. The failure of this method indicated that there must be many individual genetic contributors that together explain the fraction of risk attributable to genotype. With many genes each contributing, it was clear
that the genetic causes of common diseases would never be predictive in the manner of Mendelian diseases.

The polygenic nature of common diseases made it difficult to identify any single genetic variant that was reproducibly contributing to risk. Initially, investigators tried to stack the deck in their favor, focusing on studies of candidate genes, those previously hypothesized to play a role in pathophysiology. While there were many claims of success, few proved reproducible. This was due to some combination of underpowered studies, overly loose statistical standards for claiming a positive result, and (as it has turned out) the presence of few true genetic risk factors in previously identified candidate genes.

Based on the failure of these earlier approaches, a variety of tools were developed to enable a simple but comprehensive association study approach for the role of common genetic variants in common disease. The approach was built on the Human Genome Project and efforts to characterize and catalog human genetic variation. Using these methods, it is possible to compare the genetic makeup of patients with a particular disease to that of people who lack the disease. Samples can be drawn from populations or from families, and researchers can try to determine particular genetic variants that track with the disease. These approaches were first proposed in the mid-1990s, but a decade passed before they could be tested in their generality.

In the last 2 years, as these tools have come online, more than 75 bona fide genetic contributors to common disease were reported in the literature. These were spread across more than 20 common diseases, including Type I and Type II diabetes; cholesterol levels; heart attack; rheumatoid arthritis; lupus; age-related macular degeneration; prostate, breast, and colorectal cancer; and many others. In multiple diseases, 5, 10, or even more individual genetic variations have been found.
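The case-control comparison described above reduces, at a single variant, to testing whether the risk-allele frequency differs between patients and unaffected people. A minimal sketch in Python, with made-up allele counts rather than data from any of the studies discussed:

```python
def allelic_association(case_risk, case_other, ctrl_risk, ctrl_other):
    """Pearson chi-square statistic (1 df) and odds ratio for a 2x2 table
    of allele counts (each person contributes two alleles)."""
    a, b, c, d = case_risk, case_other, ctrl_risk, ctrl_other
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    odds_ratio = (a * d) / (b * c)
    return chi2, odds_ratio

# Hypothetical counts: risk-allele frequency of 60% in cases vs. 50% in controls.
chi2, odds_ratio = allelic_association(1200, 800, 1000, 1000)
print(f"chi2 = {chi2:.1f}, OR = {odds_ratio:.2f}")  # chi2 = 40.4, OR = 1.50
```

Genome-wide scans repeat this test at hundreds of thousands of variants, which is why stringent significance thresholds are needed to avoid the kind of irreproducible results that plagued the candidate-gene era.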
Despite the fact that only the most common genetic variants have been tested, many new clues have been identified in terms of genetic risk factors for common diseases. In some cases, the discoveries have instantly yielded biological clues. For example, age-related macular degeneration is a common disease, affecting perhaps 5 percent of the population. Previous approaches of human genetics research had not yielded any results for specific genes or mutations that were robust and reproducible. In the last few years, however, at least four or five common genetic variants have been identified that have twofold or greater effects on population risk. Everybody carries some combination of these genetic variations. In the aggregate, more than half of the risk for age-related macular degeneration is inherited. These genetic variations explain a difference in risk among individuals of more than a hundredfold (Figure 3-3). The importance of these findings is deeper than simply a predictive test: four out of the five genetic risk factors found are in fact complement factors, providing a new insight into the pathophysiology of the disease.

[FIGURE 3-3 Age-related macular degeneration and common variants in complement factors. The original figure plots relative risk (log scale, 1 to 1,000) against the number of risk alleles carried at CFH (Tyr402His), LOC387715 (Ala69Ser), C2-CFB, and rs1410996. SOURCE: Maller et al., 2006.]

Many thousands of papers have been published on age-related macular degeneration and on complement factors. Yet prior to these unbiased genetic studies, none of those papers had proposed that complement factors were the underlying biological cause of age-related macular degeneration. Common single-letter changes in one complement factor influence risk by fivefold or more. This biological insight suggests that targeting the complement pathway might be valuable in arresting the underlying pathophysiology and even preventing the disease in the first place.

Similarly, in Crohn's disease, more than a dozen genetic risk factors have been identified. Two themes have emerged: defects in innate immunity and in autophagy. Autophagy is a well-studied process, but its relevance to inflammatory bowel disease was not appreciated. With unbiased, genome-wide scans highlighting the role of autophagy, investigators immediately had a target pathway with in vivo human relevance. Colleagues at Massachusetts General Hospital, the Broad Institute, and the Dana-Farber Cancer Institute were already working on autophagy drug development for myeloma when this discovery was made in Crohn's disease. The new link provided by genetics allowed investigators to very rapidly bring their forces together in a way that otherwise might never have occurred.
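The hundredfold spread in AMD risk arises because per-allele effects combine across variants. Under a simple multiplicative model, a handful of common variants with twofold-or-greater effects is enough; the per-allele relative risks below are illustrative placeholders, not the published estimates from Maller et al. (2006):

```python
# Illustrative per-allele relative risks for five common risk variants
# (hypothetical values, chosen only to be "twofold or greater" in spirit).
PER_ALLELE_RR = [2.5, 2.0, 2.0, 1.8, 1.6]

def genotype_relative_risk(risk_allele_copies, per_allele_rr=PER_ALLELE_RR):
    """Multiplicative model: each copy of a risk allele (0, 1, or 2 per
    variant) multiplies baseline risk by that variant's per-allele RR."""
    risk = 1.0
    for copies, rr in zip(risk_allele_copies, per_allele_rr):
        risk *= rr ** copies
    return risk

lowest = genotype_relative_risk([0, 0, 0, 0, 0])   # no risk alleles carried
highest = genotype_relative_risk([2, 2, 2, 2, 2])  # homozygous at all five
print(f"{highest / lowest:.0f}-fold difference in risk")  # 829-fold difference in risk
```

Even with each variant contributing modestly, the product across genotypes easily exceeds a hundredfold difference between the lowest- and highest-risk individuals.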
Genetic research surrounding another common disease, Type II diabetes, suggests a primary role for beta-cell dysfunction. Both the Mendelian and the common forms of Type II diabetes are characterized by defects in insulin secretion, but to date not in insulin resistance. While this in no way argues against the role of insulin resistance, it has highlighted for many investigators the central role of beta-cell function in the pathophysiology of Type II diabetes.

Determining the Value of Genetic Information

Each of the newly localized genetic variants is common, so they are present in a substantial proportion of patients. Moreover, it is simple to test a patient to determine whether or not he or she carries these genetic risk factors. Whether this information is useful is much less clear. First, the risk attributable to the newly found genetic variations is typically very modest. For the most part, the individual risk factors found have effects of between a 10 and 50 percent increase in risk per copy. Because there are multiple genetic variants, the aggregate risk is greater than any individual one, but it is nonetheless a far cry from the hundredfold or thousandfold risks of Mendelian mutations.

A key challenge will be to determine whether and how clinical testing for such genetic variations can be used to improve patient care. For Type II diabetes, we set out to evaluate this question with colleagues in the Diabetes Prevention Program (DPP) and Massachusetts General Hospital. The DPP is a landmark study of diabetes prevention, involving 5,000 people with impaired glucose tolerance. This randomized trial showed that intensive lifestyle change, or treatment with metformin, could substantially reduce the rate at which patients developed Type II diabetes over the course of the trial.
The study examined in the DPP a gene called TCF7L2, originally identified by deCODE Genetics, which has the largest effect of any single common variant yet described in Type II diabetes. In the DPP we found that patients with the high-risk homozygous genotype (about 5 to 10 percent of participants) had about double the risk of developing Type II diabetes compared with otherwise identical patients who did not have the high-risk genotype.

Because the DPP was both a multiethnic sampling of the U.S. population and prospective, this result validates that measurement of the TCF7L2 genotype does convey predictive information above and beyond standard clinical measures. However, an even more interesting finding was that the lifestyle intervention was as effective in the high-risk genotype group as in the population as a whole (Figure 3-4). These results offer a hopeful message: genes are not destiny, and a patient should not give up on lifestyle changes because he or she thinks the genes will defeat them.

[FIGURE 3-4 TCF7L2 and risk of T2D in the Diabetes Prevention Program. The original figure plots cases per 100 person-years (0 to 20) for the CC, CT, and TT genotypes in the placebo, metformin, and lifestyle arms. SOURCE: Data derived from Florez et al., 2006.]

In addition, the value of testing for this gene becomes questionable, since the vast majority of patients with Type II diabetes do not have the high-risk genotype, and they would benefit from the lifestyle intervention as well. This past summer, however, the discoverers of TCF7L2 began advertising it to physicians as a tool for patients. They argued, based in part on the DPP study, that patients who got the TCF7L2 test were at greater genetic risk for developing Type II diabetes and would change their lifestyle once they learned of this risk. They further assumed that those who tested negative for TCF7L2 would not change their behavior; thus, only good would come of the test.

Of course, these assumptions apply to some people. If told they are at higher risk, they will work harder to prevent the disease. If they are told they are not at higher risk, they will understand that they are not fully protected and will maintain their lifestyle as well. Yet others are likely to respond in an unpredictable manner. For example, some people may overinterpret a positive result, believing that genes are destiny, and perhaps feel discouraged from working hard to prevent the disease. Others who have a negative test may overestimate how protected they are. There is simply no way to know how people will react without more research.

Clearly, if evidence-based use of genetic tests is desired, clinical research is needed, in particular to determine how exposure to such information impacts individual behavior, health outcomes, and healthcare utilization. The most rigorous design would be clinical trials in which study participants are randomized to receive genetic information or standard care and outcomes are then compared.
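The outcome behind the DPP comparison in Figure 3-4 is an incidence rate, cases per 100 person-years, computed within each genotype-by-treatment group; comparing rates across genotypes within one arm yields the roughly twofold risk described for the high-risk homozygotes. A sketch with invented counts, not the trial's actual data (see Florez et al., 2006, for those):

```python
def rate_per_100_person_years(cases, person_years):
    """Incidence rate: events per 100 person-years of follow-up."""
    return 100.0 * cases / person_years

# Hypothetical placebo-arm counts by TCF7L2 genotype, for illustration only.
tt_rate = rate_per_100_person_years(cases=55, person_years=500)    # high-risk TT
cc_rate = rate_per_100_person_years(cases=110, person_years=2000)  # low-risk CC
print(f"TT: {tt_rate:.1f}, CC: {cc_rate:.1f} per 100 person-yr; "
      f"relative risk = {tt_rate / cc_rate:.1f}")  # relative risk = 2.0
```

Repeating the same computation in the lifestyle arm, where rates fall in every genotype group, is what supports the "genes are not destiny" message above.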
Performing such trials is going to be extremely difficult, particularly because of the lack of incentives and the rapidly changing nature of the tests. Traditionally, clinical trials are driven by incentives such as being able
to sell a drug. It is not clear that the financial rewards for selling genetic tests could support expensive clinical trials. Moreover, genetic information is changing rapidly, and any given clinical trial of genetic information will likely be out of date (superseded by a more informative version of the test) before it is complete. These are difficult challenges, and it is hard to think of solutions that do not carry their own risk of slowing both progress and the availability of information that the public may want.

In conclusion, there has been rapid progress in identifying genes and DNA variations that influence common human diseases. The long-term value of this research will, I believe, be in its unique ability to take an unbiased look at the causes of common diseases in humans. The biological understanding gained will be a basis for progress, and my hope is that it will lead to improved prevention and therapy. In the meantime, however, the typing and hyping of genetic information and so-called personalized medicine have already begun. In my view, this is a much more uncertain enterprise, and if we are not careful we may never know its real value, because it will become a routine part of health care before we know whether it actually helps to improve people's lives.

REFERENCES

BIO (Biotechnology Industry Organization). 2007. Biotechnology industry facts. http://www.bio.org/speeches/pubs/er/statistics.asp (accessed January 22, 2008).
Burns, L. R. 2005. The business of healthcare innovation. Cambridge, UK: Cambridge University Press.
Bush, G. W. 2004. Executive order: Incentives for the use of health information technology and establishing the position of the national health information technology coordinator. http://whitehouse.gov/news/releases/2004/04/20040427-4.html (accessed January 22, 2008).
Calfee, J. E., and E. Dupre. 2006. The emerging market dynamics of targeted therapeutics. Health Affairs 25(5):1302-1308.
Christensen, C.
2006. The innovator's dilemma. New York: HarperCollins.
Coye, M. J. 2006. Confessions of a serial entrepreneur: A conversation with Alfred E. Mann. Health Affairs 25(3):w104-w113.
Cutler, D. M., and M. McClellan. 2001. Is technological change in medicine worth it? Health Affairs 20(5):11-29.
Eisenberg, J. M., and D. Zarin. 2002. Health technology assessment in the United States: Past, present, and future. International Journal of Technology Assessment in Health Care 18(2):192-198.
FDA (Food and Drug Administration). 2006. 2006 accomplishments. http://www.fda.gov/oc/accomplishments/healthcare.html (accessed January 22, 2008).
Florez, J. C., K. A. Jablonski, N. Bayley, T. I. Pollin, P. I. de Bakker, A. R. Shuldiner, W. C. Knowler, D. M. Nathan, and D. Altshuler. 2006. TCF7L2 polymorphisms and progression to diabetes in the Diabetes Prevention Program. New England Journal of Medicine 355:241-250.
Garber, A. M. 2006. The price of growth in the medical-device industry. New England Journal of Medicine 355(4):337-339.
Gelijns, A. C., L. D. Brown, C. Magnell, E. Ronchi, and A. J. Moskowitz. 2005. Evidence, politics, and technological change. Health Affairs 24(1):29-40.
Goldsmith, J. 2004. Technology and the boundaries of the hospital: Three emerging technologies. Health Affairs 23(6):149-156.
Green, S. A. 2001. The evolution of medical technology: Lessons from the Burgess Shale. Clinical Orthopaedics and Related Research (385):260-266.
Gutierrez, G. 2001. Medicare, the Internet, and the future of telemedicine. Critical Care Medicine 29(8 Suppl):N144-N150.
Iglehart, J. 2005. Grasping the role of technology: A conversation with Ron Dollens. Health Affairs w5.296-w5.313.
IOM (Institute of Medicine). 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
———. 2007a. Cancer biomarkers: The promises and challenges of improving detection and treatment. Washington, DC: The National Academies Press.
———. 2007b. Learning what works best: The nation's need for evidence on comparative effectiveness in health care. http://www.iom.edu/ebm-effectiveness (accessed January 22, 2008).
Keenan, P. S., P. J. Neumann, and K. A. Phillips. 2006. Biotechnology and Medicare's new technology policy: Lessons from three case studies. Health Affairs 25(5):1260-1269.
Kessler, L., S. D. Ramsey, S. Tunis, and S. D. Sullivan. 2004. Clinical use of medical devices in the "Bermuda Triangle." Health Affairs 23(1):200-207.
Maller, J., S. George, S. Purcell, J. Fagerness, D. Altshuler, M. J. Daly, and J. M. Seddon. 2006. Common variation in three genes, including a novel non-coding variant in CFH, dramatically influences risk of age-related macular degeneration. Nature Genetics 38:1055-1059.
McGivney, W. T., and W. R. Hendee. 1990. Regulation, coverage, and reimbursement of medical technologies.
International Journal of Radiation Oncology, Biology, Physics 18(3):697-700.
Middleton, B. 2005. Achieving U.S. health information technology adoption: The need for a third hand. Health Affairs 24(5):1269-1272.
Modern Healthcare. no date. http://www.modernhealthcare.com/apps/pbcs.dll/article?AID=/20071002/FREE/31002001/1029/FREE (accessed February 7, 2008).
New England Healthcare Institute. 2007. Fast adoption of significant technologies: The FAST initiative briefing. http://nehi.net/CMS/viewPage.cfm?pageId=7 (accessed January 22, 2008).
Newhouse, J. P. 2002. Why is there a quality chasm? Health Affairs 21(4):13-25.
Owens, J. 2007. Drug approvals: Finding the niche. Nature Reviews Drug Discovery 6(2):99-101. http://www.nature.com/nrd/journal/v6/n2/fig_tab/nrd2247_F1.html.
Pare, G., M. Jaana, and C. Sicotte. 2007. Systematic review of home telemonitoring for chronic diseases: The evidence base. Journal of the American Medical Informatics Association 14(3):269-277.
Pauly, M. V. 2006. Managing the discovery factory: Can innovation be organized? Health Affairs 25(2):564-565.
Reinhardt, U. E. 2001. Perspectives on the pharmaceutical industry. Health Affairs 20(5):136-149.
Tunis, S. R., and S. D. Pearson. 2006. Coverage options for promising technologies: Medicare's "coverage with evidence development." Health Affairs 25(5):1218-1230.
Tunis, S. R., T. V. Carino, R. D. Williams, 2nd, and P. B. Bach. 2007. Federal initiatives to support rapid learning about new technologies. Health Affairs 26(2):w140-w149.
Wilensky, G. R. 2006. Developing a center for comparative effectiveness information. Health Affairs 25(6):w572-w585.
Willems, J. S., and H. D. Banta. 1982. Improving the use of medical technology. Health Affairs 1(2):86-102.