In the final session of the workshop, an emerging model for accelerating evidence generation for genomic technologies in a learning health care system was discussed. Several of the workshop panelists considered the benefits, costs, and harms of such models and addressed the policies and infrastructure needed to enable the sharing of genomic data across institutions. Individual speakers shared their thoughts on actionable next steps that could support the implementation of genomics-based programs in health care systems, and co-chairs Feero and Veenstra captured and summarized key themes that were discussed during the day on topics including evidence generation, data sharing, and genomics-based program design.
A MODEL FOR ACCELERATING EVIDENCE GENERATION FOR GENOMIC TECHNOLOGIES IN A LEARNING HEALTH CARE SYSTEM
There is limited evidence available on the clinical utility of most genomic tests (Phillips et al., 2017). To date, there have been very few randomized controlled trials looking at the clinical utility of genomic technologies, said Christine Lu, an associate professor in the Department of Population Medicine at Harvard Medical School. Randomized controlled trials are costly and lengthy and are often not suitable for the study of precision medicine. To address this problem, Lu described a proposed model for generating evidence of clinical utility. The model includes the assumption that the genetic tests under assessment for utility have proven analytical and clinical validity, she said. Although the model is designed to generate clinical
utility evidence, Lu said, some of the data generated by the model could also be relevant for demonstrating economic utility. The model is focused on the Tier 2 genomic applications in the classification system devised by the Centers for Disease Control and Prevention (Dotson et al., 2014). Tier 1 applications, Lu explained, already have sufficient evidence of clinical utility to support adoption in clinical practice. By contrast, for Tier 2 applications there is early evidence of potential utility, but the evidence is not sufficient. Embedded in the model is the concept of a learning health care system, in which new data will inform continuous improvement of clinical practice and the larger health care system (Chambers et al., 2016; IOM, 2013, 2015).
Building Blocks for Rapid Evidence Generation
The model for rapid generation of evidence of clinical utility is based on three building blocks: temporary coverage, leveraging data networks, and stakeholder engagement and endorsement (Lu et al., 2017). Temporary coverage, enabled by risk-sharing agreements and value-based contracts between manufacturers and payers, encourages the use of genomic tests. Clinical genomic test orders and results are captured by claims and electronic health record (EHR) data systems, and the proposed model calls for the cost of evidence generation to be shared by manufacturers and payers. Lu brought up the Biologics and Biosimilars Collective Intelligence Consortium (BBCIC) as an example of a comparable model that could provide information and lessons learned.1 The BBCIC is a nonprofit, collaborative, scientific public service initiative intended to address post-market evidence generation needs for novel biologics, biosimilars, and related products. The proposed model for rapid generation of clinical utility evidence for genomic tests is not the same as the Centers for Medicare & Medicaid Services' (CMS) Coverage with Evidence Development program, Lu noted, in that the CMS program requires that patients participate in a registry or trial, which slows recruitment and subsequent data collection.2 Because the proposed model is based on a risk-sharing contract between payers and manufacturers, data collection would happen in real time during clinical practice rather than through a dedicated study or trial, as in the CMS model.
Stakeholder engagement and endorsement is also an important aspect
of the two other components of the model. For example, reaching a temporary coverage agreement requires collaboration and engagement among stakeholders, including manufacturers, diagnostic companies, clinical laboratories, payers, and employers. Leveraging data networks will require engagement and endorsement by the many data stakeholders, including manufacturers, payers, health care systems, EHR vendors, providers, patients, researchers, and government agencies, Lu said.
2 For more information about the Centers for Medicare & Medicaid Services' Coverage with Evidence Development program, see https://www.cms.gov/Medicare/Coverage/Coverage-with-Evidence-Development (accessed January 19, 2018).
Leveraging large existing data networks and analytical toolboxes, such as the U.S. Food and Drug Administration’s Sentinel Initiative, which includes 223 million individuals in its dataset, or the National Patient-Centered Clinical Research Network (PCORnet), which includes 10 million individuals in its dataset, would help avoid major limitations of multisite research (time and resources), Lu said. Networks can share infrastructure, data curation, analytics, lessons, software development, and other elements. Each organization could participate in multiple data networks. At the same time, each network would still control the governance and coordination of its data (i.e., they would not be “giving data away”).
The structure of a data network—PCORnet, for instance—offers a unique opportunity to create a rapid evidence generation program. PCORnet is an initiative that uses large amounts of health data and patient partnerships to make it faster, easier, and less costly to conduct multi-site clinical research. It is a collaboration consisting of a coordinating center and 35 networks: 13 clinical data research networks, 20 patient-powered research networks, and 2 health plan research networks. An evidence-generation program based on a PCORnet-like model might have a coordinating center that would produce a computer or statistical program designed to address a particular research query and send that program to each health system within the participating network, Lu said (see Figure 6-1). The participating systems of the network could then run the program against their own data and return the results to the coordinating center to be aggregated (individual patient information is not shared). While many health care systems have records showing that genetic testing was done, many currently lack data on which genetic test was administered and what the results were, Lu said. Consortia (e.g., the IGNITE network described by Orlando in Chapter 4) are working to address this and other challenges, such as the lack of interoperability between EHR systems and other data networks and test results that are not in a readily accessible format.3 Through leveraging
such data networks, Lu concluded, the model aims to capture the missing pieces of genetic test and results data and to rapidly measure associated patterns of care, clinical outcomes, adverse events, and costs of care, generating the clinical and economic utility evidence needed to inform clinical practice and policy development.
3 For more information on efforts to capture genetic test results in a structured format in the EHR, see DIGITizE, an action collaborative of the Roundtable on Genomics and Precision Health. More information on DIGITizE can be found at http://www.nationalacademies.org/hmd/Activities/Research/GenomicBasedResearch/Innovation-Collaboratives/EHR.aspx (accessed January 23, 2018).
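The distributed-query pattern Lu describes—a coordinating center distributes an analysis program, each site runs it against its own records, and only aggregate results are returned for pooling—can be sketched in a few lines. This is a minimal illustrative sketch, not PCORnet's actual software; the function names and record fields are assumptions made for the example.

```python
# Hypothetical sketch of a distributed (federated) query: each site runs
# the analysis locally and returns only aggregate counts, so no
# patient-level data ever leaves the site. Record fields ("test",
# "result") are illustrative assumptions, not a real common data model.

def site_query(records):
    """Runs at each participating site against its own data:
    counts patients tested with a hypothetical panel and how many
    had a positive result."""
    tested = [r for r in records if r.get("test") == "PANEL_A"]
    positives = sum(1 for r in tested if r.get("result") == "positive")
    return {"tested": len(tested), "positive": positives}

def aggregate(site_results):
    """Runs at the coordinating center: pools the site-level summaries."""
    totals = {"tested": 0, "positive": 0}
    for res in site_results:
        totals["tested"] += res["tested"]
        totals["positive"] += res["positive"]
    return totals

# Each site holds its own records; only the summary dictionaries travel.
site_a = [{"test": "PANEL_A", "result": "positive"},
          {"test": "PANEL_A", "result": "negative"}]
site_b = [{"test": "PANEL_A", "result": "negative"},
          {"test": "OTHER", "result": "positive"}]

pooled = aggregate([site_query(site_a), site_query(site_b)])
```

The design choice this illustrates is the one Lu emphasized: sites retain governance of their data, and the network shares only the query logic and the aggregated answers.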
The policies and infrastructure needed to enable the sharing of genomic data across institutions were discussed by workshop speakers Goddard, Isham, Kho, Leonard, Lu, Murray, and Peterson. Feero also asked them
to suggest how the Roundtable could help advance the process of genomic data sharing and contribute to the broader implementation of genomics-based programs in health care systems (see Box 6-1).
As genomic screening programs are implemented across the United States, people will have more enthusiasm for certain kinds of testing, Peterson said. It should be possible to learn from ongoing clinical genetic testing in the community, he said. However, the genetic test results being entered into EHRs are not discrete (i.e., not in a structured format). The data are presumably in a discrete format on the laboratory testing side, but they are not reaching the health system side in that same format. The Roundtable may want to consider how to create incentives for the entry of discrete data from routine testing into health systems and repositories, Peterson said. Such data could be mined for new information about how genomic testing is taking place across the United States and for new genotype–phenotype relationships. One barrier, he suggested, is that institutions are reluctant to invest in ways to receive those data. There are small pilot programs, particularly in academic centers, that integrate data from testing into their health systems and data repositories; the question is how to scale up these models so that this data entry becomes commonplace. Goddard agreed about the need for structured data on whether a test happened and what the result was; these two pieces of information are missing from many existing networks. Work being done by ClinGen on assessing the actionability of gene variants is impeded by a lack of consensus in the field on what is meant by actionability, Goddard noted.
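The distinction the speakers draw between free-text and discrete results can be made concrete with a small sketch. This is an illustrative contrast only; the field names are assumptions for the example, not a standard schema such as those being developed by the consortia mentioned above.

```python
# Illustrative contrast between a free-text genetic test result (as it
# often arrives in the EHR) and a structured ("discrete") record of the
# kind the speakers call for. Field names are assumptions, not a standard.

free_text_note = ("BRCA1 sequencing performed 2017-05-02; "
                  "pathogenic variant detected.")

structured_result = {
    "test_performed": True,           # captures WHETHER the test happened
    "test_name": "BRCA1 sequencing",  # captures WHICH test was ordered
    "result": "pathogenic",           # captures WHAT the result was
    "date": "2017-05-02",
}

def is_positive(record):
    """Trivial to answer on structured data; unreliable on free text,
    where wording varies from laboratory to laboratory."""
    return record.get("result") == "pathogenic"
```

The point is that a query like `is_positive` is a one-line lookup against discrete data, whereas the same question asked of `free_text_note` would require fragile text parsing that breaks whenever a laboratory phrases its report differently.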
There are many challenges involved in capturing information from a genomic test, including the test results, and use of subsequent health care services, Lu said. Further complicating the situation is the fact that there are many different products in use. For example, there are a variety of different panels and sequencing approaches that include the BRCA genes, making comparisons challenging. She also said that gathering clinical utility data from a global payment system could be challenging because many items are lumped into one billing code. It might not be possible to discern what kind of test was done or what services were utilized, because the tests and services would be consolidated into a code for cancer care, for example. In response, Leonard said that in a global payment system like the one at the University of Vermont, payment codes would still be tracked, but they would not be submitted to payers.
Leonard proposed several potential activities for the Roundtable to explore. The information needed for analyzing health outcomes and cost effectiveness is not well defined, and Leonard suggested convening a workgroup to define the data and metrics needed for the assessment of health outcomes, cost effectiveness, and other relevant outcomes such as personal utility and family utility. Once the needed data are defined, another workgroup could discuss how best to aggregate the data. Leonard suggested that this process could be informed by groups that are already aggregating data (e.g., PCORnet, HealthConnect, ClinGen, GenomeConnect, Vizient). Finally, she suggested that the Roundtable explore the laws and regulations that might limit the value of genomic medicine outcomes. The Health Insurance Portability and Accountability Act (HIPAA) of 1996, for example, could potentially limit cascade communications with, and testing of, at-risk family members. In addition, there are many individuals who fear potential repercussions from genomic testing, and the Genetic Information Nondiscrimination Act (GINA) of 2008 does not provide full protection from discrimination under all scenarios (e.g., discrimination related to schools, mortgage lending, housing, or life insurance).
The Roundtable could focus on defining what the highest value types of data are, Kho said. Another potential issue for the Roundtable to explore is examining what is needed for a more nuanced consent process, including addressing privacy concerns, and considering the technical structures that would need to be in place to enable individuals to have a more nuanced consent. Finally, Kho said, data sharing is in many ways an economic or value issue. There are opportunities to conduct natural experiments in places where genomic testing is already taking place and to collect information on what people value and are willing to pay for.
Regarding data harmonization and evidence sharing, Murray emphasized the need to motivate the for-profit genomic medicine industry to share its data as those data are developed. He suggested that the Roundtable consider how the industry could be incentivized to do so. Another issue for consideration is the need for data standards around penetrance. Currently, Murray observed, if someone has an incidental finding for a monogenic disease, different practitioners around the country will provide different follow-up evaluations. There is also a need to better understand the performance characteristics of phenotyping, he noted. EHR phenotyping has an uncertain negative predictive value, he said, particularly for some genomic conditions: just because something is not in the EHR does not mean it is not a medical problem for the patient. Self-reporting of data varies according to patients' perspectives, and the data quality and performance characteristics of expert evaluation are also unknown.
The issue of total health care costs was raised as Isham commented on the temporary coverage provision in the model discussed by Lu. There is tremendous chaos and pressure in the larger health system, Isham said, and the total cost of care is driving a lack of investment in other elements critical to health, such as education and economic development. The Roundtable may want to explore some of the tensions between the public–population health perspective and the genomic research perspective, he suggested. He also noted concern about training and the consistency of process outside of the research setting, and he highlighted the opportunity to discuss point-of-care algorithms and tools for helping patients understand the available treatments and courses of action. Another potential topic for Roundtable discussion, Isham suggested, would be practical financing, taking into account the real-world issues that health care systems are struggling with in the current mixed payment environment (i.e., fee-for-service, aggregate payment). More discussion on patient experience and attitudes would also be beneficial, he added.
Drawing from the presentations and panel discussions, workshop co-chairs Feero and Veenstra summarized the key messages that individual speakers delivered on the topics of evidence generation, genomic screening programs, and data sharing, and they highlighted some considerations for organizations that are thinking about implementing genomic screening programs. The field of genomics has come a long way in terms of understanding the systematic clinical integration of genomic information, Feero said, and a decade ago much of what has been achieved would have been incomprehensible; however, there are many evidence gaps that still need to be filled.
Considerations from Individual Speakers, Presented in Summary
The genomics field is still very much in the evidence-generation stage, Feero said, as opposed to being at the stage of broad implementation of applications with proven benefit. Clinical utility data will be important for facilitating the broader adoption of genomic medicine and the incorporation of genomic data as a routine component of care. Collecting data on personal utility (the amount of usefulness or benefit one can derive from a particular activity) or disutility (harmful or adverse effects associated with a particular activity) will also be very important, Feero said. Without this type of information, the field risks medical misadventures that may be very difficult to recover from, he said. As was emphasized in the discussions, it is important that any population screening program make a clear distinction between research and clinically proven interventions.
Engaging with Populations for Screening
Identifying and meaningfully engaging with typically under-included populations when developing genomics programs is important, Veenstra said, and there is an opportunity to disseminate tools and best practices for researchers interested in engaging diverse populations. Engagement is an ongoing activity throughout the process of genomic screening, and better engagement would help to determine the utility that meets the needs of a given population. Active management of inclusiveness is also important, he said. Enlisting participants from diverse backgrounds (racial/ethnic, socioeconomic) will help ensure that accurate knowledge is gained from genomic
screening programs, such as data on the clinical utility of genomic tests for all segments of the population.
Facilitating Data Sharing
It is critical that data sharing be advanced, Feero said, as most systems will not have sufficient sample sizes to answer the questions posed. Significant infrastructure, including common data models, needs to be developed in order to fully realize effective data sharing, Veenstra said. There are existing models for data sharing that might be adaptable for genomics-based programs. Some examples discussed were from genomics discovery science, and perhaps these models could be extended and leveraged to help answer questions concerning the integration of genomic screening into health care. It is not yet clear what data should be shared, or how, so in the near term efforts are needed to establish an agreement about what data (outcomes, metrics) need to be collected and shared, Veenstra said. Leonard suggested that the Roundtable could play a role in facilitating the discussion around defining data needs. For the longer term, Veenstra said, there will be a need to engage key decision makers to understand their evidence needs related to the value of genomics-based programs and to create incentives for participation in data sharing.
Designing the Approach
Care is needed in considering what technologies to adopt, what to test for, how to report that information, and for how long, Feero said. Smaller, high-yield panels targeting conditions with higher population prevalence may be of more benefit than larger panels that include many conditions with much lower prevalence. Managing expectations is also very important. This includes making sure people understand that a negative result does not necessarily mean they do not have a pathogenic variant, especially if there is a strong family history. A multidisciplinary approach is needed, Feero said, and discipline-specific resources and additional support should be given to non-genetics providers to help them improve the care of the patients they see who carry potentially harmful genetic variants (as opposed to getting non-genetics providers to adopt the geneticist perspective on the topic). Research teams should be integrated with programs that are more clinically oriented, Feero said. Early modifications to a study or program design could allow for the ability to answer more questions.
Developing Outcomes to Measure
When developing outcomes for a genomics-based program, Feero said, one should purposefully select outcomes and metrics and evaluate longitudinally and at different intervals in order to inform decisions about whether or not to continue. The outcomes being measured should not be limited to traditional trial metrics (e.g., efficacy), but should include important health-related outcomes more broadly defined, such as personal utility or financial aspects, he said, summarizing concepts discussed by Peterson (see Chapter 3). Electronic infrastructure (particularly the EHR) is lagging and needs attention, he said. When planning the implementation of a genomics program, one should carefully consider the vendor community and how amenable that vendor community will be to genomic testing and genomics data. When a genetic result is returned and an action is taken, it does not necessarily mean that the outcome is related, Feero said. It is important to understand the intermediate steps in order to effect change in the system and potentially improve outcomes.
Improving the Sustainability of Programs
There are multiple pathways by which programs can fund their activities, but long-term financial sustainability is still a work in progress, Feero said. The examples discussed at the workshop included a state-funded program, an industry-funded program, a health system–funded program, and federally funded research. Several speakers noted that organizational leadership buy-in is essential and that organizations should consider evaluating the range of possible ways they could leverage existing systems and resources.
One of the major challenges facing the field of genomic medicine, Veenstra said, is how to integrate all of the efforts to collect genomic data that are happening across the United States. Developing a mechanism to bring all the stakeholders together and compile data in a single place to share and learn from will be an ongoing effort. Since its inception, the Roundtable on Genomics and Precision Health has made a great deal of progress in terms of understanding the systematic clinical integration of genomic information; however, Feero said, as the field continues to evolve, there will be new issues to address. Solving these new challenges will take a village, he said, and the Roundtable should continue to bring together all of the relevant stakeholders to identify ways to develop new collaborations, share information, and move the field forward.