This highlight was revised after the prepublication release.
Various models for integrating genomic testing into health care systems were discussed at the workshop, but there is no one-size-fits-all approach, said session moderator Marc Grodman, an assistant professor of clinical medicine at Columbia University. The genomics-based programs under discussion at this workshop test for different genetic variants, are performed by different people, and are being paid for through different mechanisms. These programs are generating large volumes of data, and there are challenges to sharing those data both within and across institutions and systems. Furthermore, Grodman said, sharing goes beyond the sharing of data to the sharing of experiences, methods, and approaches. Panelists in this session discussed approaches to information sharing across systems and organizations. Rex Chisholm, the vice dean for scientific affairs and graduate education at Northwestern University, described the Electronic Medical Records and Genomics (eMERGE) network as an example of sharing data across a consortium and linking genotypic information to the electronic health record (EHR). Eric Boerwinkle, the dean and M. David Low Chair in Public Health at the University of Texas Health Science Center at Houston, discussed optimizing the sharing of data, results, experiences, and resources. Richard Turner, a clinical research fellow in clinical pharmacology and therapeutics at the Royal Liverpool University Hospital and the University of Liverpool, described some of the incentives and challenges of data sharing in three implementation projects in Europe. Lori Orlando, an associate professor of medicine at Duke University School of Medicine, discussed the importance of applying implementation science when launching genomics-based programs, including defining and developing measures for genomic medicine implementation studies.
The eMERGE network1 is a National Human Genome Research Institute (NHGRI)-funded consortium consisting of nine active clinical sites, two sequencing centers, and a coordinating center, explained Chisholm, who is one of the principal investigators of eMERGE. The goal of the eMERGE network is to combine DNA repositories with EHR systems for large-scale, high-throughput genetic research that supports the implementation of genomic medicine. Peterson is the principal investigator of the coordinating center at Vanderbilt University, Chisholm said, and what has been done across the sites in the eMERGE network is a microcosm of what will need to be done in rolling out genomic medicine across the country.
eMERGE is a rich resource, Chisholm said, with genome-wide association studies data from over 100,000 participants to date. Genetic data from individuals at eMERGE sites are merged with their EHRs and used for genomic research. This linking of genotypic information to the EHR allows for very efficient use of the data, Chisholm said, and as part of the group’s efforts 84 important genes for drug metabolism have been sequenced in more than 9,000 participants. The commitment to enter that information in the EHRs and to use it to inform clinical decision support enables the assessment of the value of pharmacogenomics in a clinical setting, he said. eMERGE is currently recruiting 25,000 additional participants who will have a gene panel of 109 genes sequenced. These will include the 59 genes identified as medically actionable by the American College of Medical Genetics and Genomics, other genes of interest to the research project, and numerous single nucleotide polymorphisms, many of which are relevant to pharmacogenomics (Kalia et al., 2017). The clinical sequencing centers will then return actionable data to the EHRs.
In this system, data are transferred from the clinical sequencing centers to the clinical sites. Previously the sequencing center associated with eMERGE would send genomic data to the clinical site in a portable document format (i.e., as PDF files); however, Chisholm said, now the network is able to get an XML feed of the data, which allows for greater interoperability. Building on that accomplishment and continuing to establish data exchange standards would help facilitate the exchange of data, he said.
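The interoperability gain from a structured feed over a PDF report can be illustrated with a short sketch: structured results can be parsed programmatically. The element names and values below are hypothetical, not the actual eMERGE feed schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical structured genomic result; a PDF version of the same
# report could only be read by a human or brittle text extraction.
xml_feed = """
<geneticResult>
  <patientId>P-001</patientId>
  <variant gene="CYP2C19" allele="*2" zygosity="heterozygous"/>
  <variant gene="VKORC1" allele="-1639G&gt;A" zygosity="homozygous"/>
</geneticResult>
"""

root = ET.fromstring(xml_feed)
# Each variant is machine-readable and can flow into decision support.
variants = [(v.get("gene"), v.get("allele"), v.get("zygosity"))
            for v in root.findall("variant")]
print(variants)
```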
Sharing Data to Support Value Measurement
Data sharing can enable more robust and meaningful results, Chisholm said. The data from any one clinical site alone are unlikely to provide the statistical power to draw meaningful conclusions regarding value. However, a cohort of more than 100,000 people like those in the eMERGE network offers increased statistical power to study many common diseases.
1 For more information on the eMERGE network, see https://www.genome.gov/27540473/electronic-medical-records-and-genomics-emerge-network (accessed January 10, 2018).
The use of data standards is important for data sharing, Chisholm continued. The meaningfulness of combining data together in one place is limited if systems are using different languages to collect those data. By using common data models and common standards, eMERGE has been able to facilitate data sharing. eMERGE participants are now working to convert all of the data to the Observational Medical Outcomes Partnership (OMOP) common data model,2 which will be instrumental in improving data sharing going forward, Chisholm said. The OMOP common data model allows for systematic analyses of disparate observational databases.
The sharing of phenotypic data from EHRs also presents unique challenges, and eMERGE has adopted a hybrid model with data standards to address this issue. eMERGE shares a collection of phenotypic data (mostly coded data) with the coordinating center, which makes it available through the eRecordCounter tool. This tool allows researchers to ask a specific question of the records, such as, How many people are there with type II diabetes and a body mass index over 40 who are not taking insulin? Exploratory data figures are then shared with researchers to help them with project planning and feasibility assessment.
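A count query of the kind Chisholm described can be sketched against a de-identified phenotype extract; the column names and data below are illustrative, not the actual eRecordCounter schema.

```python
import pandas as pd

# Hypothetical de-identified phenotype extract (coded data only).
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "has_t2_diabetes": [True, True, False, True, True],
    "bmi": [42.1, 38.0, 45.5, 41.3, 44.0],
    "on_insulin": [False, True, False, False, False],
})

# Count patients with type II diabetes, BMI over 40, not taking insulin,
# without exposing individual records to the requesting researcher.
cohort = records[
    records["has_t2_diabetes"]
    & (records["bmi"] > 40)
    & ~records["on_insulin"]
]
print(len(cohort))  # cohort size returned for feasibility assessment
```

Returning only the count (rather than the rows) mirrors the tool's purpose of supporting project planning and feasibility assessment.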
Overcoming Obstacles to Data Sharing
When the eMERGE network began, one of the first processes was to develop a data use agreement outlining the principles for data sharing. The first attempt at drafting a data use agreement included legal language from the five initial sites in the consortium, and the result was a massive document that was not helpful to most stakeholders, Chisholm said. The process started over with a simple draft and a focus on what the consortium was trying to accomplish. eMERGE leadership worked with each site’s principal investigators and lawyers and explained the process and the need for a simple agreement. What developed was a standardized data use agreement that did not have a lot of extra language, and when additional sites joined the consortium, the agreement could be signed without the need for any changes.
As mentioned, one of the technological barriers to data sharing is a lack of data standards. When eMERGE began, Chisholm said, concerns were raised about using clinical data for research purposes. It was found, however, that there is value in using clinical data in a repeated, regular way and that doing so can actually improve the quality of data for clinical care. As an example, Chisholm said, an initial analysis of birthweight in an obstetrics hospital with 13,000 deliveries each year revealed an odd bimodal distribution, which led to the realization that some entries were in grams, while others were in kilograms. Simply constraining the numbers that could be entered immediately led to an improvement in the data quality in the EHRs. Deploying standards that are shared across a variety of organizations is beneficial across health care and certainly for precision medicine and genomic medicine approaches, he said.
2 For more information about the OMOP common data model, see https://www.ohdsi.org/data-standardization/the-common-data-model (accessed January 10, 2018).
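The grams-versus-kilograms problem Chisholm described can be sketched as a simple plausibility check; the values and cutoff below are illustrative, and a real EHR would constrain values at entry time rather than cleaning them afterward.

```python
# Mixed-unit birthweight entries: kilogram values cluster near 3,
# gram values near 3000, producing a bimodal distribution.
weights = [3.2, 3400, 2.9, 3150, 3.5, 2800, 4.1, 3600]

def to_grams(w, cutoff=100):
    """Normalize a birthweight to grams using a plausibility cutoff:
    no newborn weighs under 100 g or over 100 kg, so small values
    must be kilograms. The cutoff is illustrative only."""
    return w * 1000 if w < cutoff else w

normalized = [to_grams(w) for w in weights]
# After normalization, all values fall in a plausible newborn range.
assert all(300 <= w <= 7000 for w in normalized)
print(sorted(normalized))
```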
Common Data Elements
There are many data elements, such as the reactions of participants and providers, that it would be helpful to collect to inform the implementation of genomics-based programs. In implementing pharmacogenomics at Northwestern, Chisholm said, a lot of communication and training was required in order to demonstrate to primary care providers that there is value in putting pharmacogenomic information into the EHR.
Information is also needed about health care use to inform economic discussions, Chisholm said. One barrier to implementing precision medicine broadly is the fear that it will overwhelm the health care system with additional work that brings little value. It is important to capture the type of usage data discussed by Goddard (see Chapter 2) and to share it broadly across organizations, Chisholm said.
Health care is an integral and growing part of the U.S. economy, Boerwinkle said. However, sharing information is often thought of as counter to profitability because many chief financial officers in large health care systems view sharing as an avenue to lose patients from their system, he said. It is important to consider sharing more broadly and take advantage of new business models emerging around the “sharing economy.” Boerwinkle suggested that patients drive the demand for data sharing as they take on the role of being gatekeepers for their medical record data.
As an example of sharing in a large, complex environment, Boerwinkle discussed optimizing the sharing of data, resources, and results and experience at the Texas Medical Center. The center includes 59 member institutions collectively logging 10 million patient visits each year. If the Texas Medical Center were to incorporate, he said, it would be the eighth largest economic zone in the country. As such, it is an ideal test bed for sharing data across health care systems.
Data Sharing: HealthConnect
Ideally a data sharing system would connect all health care providers. There is often a discussion about placing all health care data in the cloud so everyone can access it, Boerwinkle said, but this would not be the most efficient approach for health care or for research. There is also interest in health information exchanges (HIEs), which are quite effective in some parts of the country, he said. In an ideal HIE, all of the health care entities in the exchange share data in semi-real time, based on queries from any of the nodes. The entities include, for example, hospitals, radiology centers, pharmacies, clinics, laboratories, primary care providers, and specialists.
HealthConnect, a community master patient index, is used by the Texas Medical Center, Boerwinkle explained. The index receives real-time information about all patient visits and activity. Every individual in the health care system has a set of identifiers and can be mapped independently of the place where he or she is having a medical encounter. In practice, any one of the participating organizations (e.g., hospitals, health care systems) can make a data request to the HealthConnect system. In a matter of seconds, HealthConnect can confirm that a particular patient has consented to sharing his or her information, can locate information about that unique patient across the different organizations in the HealthConnect community, and can query a target organization if needed. Then, within hours, the target organization responds to the data query with additional information about the patient’s medical care. In this way, competing health care systems are sharing data for the benefit of the patient, Boerwinkle said, without fear of losing patients to a competing system.
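The locate-and-route flow Boerwinkle described can be sketched with a toy master patient index; the identifiers, consent flag, and data structure here are assumptions for illustration, not HealthConnect's actual design.

```python
# Toy master patient index: a community-wide ID maps to consent status
# and per-organization record pointers (identifiers are hypothetical).
mpi = {
    "community-001": {
        "consented": True,
        "records": {"hospital_a": "A-771", "clinic_b": "B-203"},
    },
    "community-002": {"consented": False, "records": {"hospital_a": "A-902"}},
}

def locate(community_id, requesting_org):
    """Return record pointers held by *other* organizations, which the
    requester would then query directly for clinical detail."""
    entry = mpi.get(community_id)
    if entry is None or not entry["consented"]:
        return None  # unknown patient, or no consent to share
    return {org: rid for org, rid in entry["records"].items()
            if org != requesting_org}

print(locate("community-001", "hospital_a"))  # {'clinic_b': 'B-203'}
print(locate("community-002", "clinic_b"))    # None: consent not given
```

Checking consent before revealing even the existence of records elsewhere reflects the sequence described above: confirm consent first, locate records second, then route the clinical query to the target organization.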
Optimizing Results and Experience Sharing: Standards of Evidence
The ClinGen project is a venue for sharing genomic information that includes the vetting of information by experts, developing standards, and moving toward actionability (Rehm et al., 2015). However, Boerwinkle said, scaling the approach of ClinGen will be challenging because of the need to establish the clinical validity of the variants with groups of experts. The ability to scale the sharing of genomic information is essential for genomics to become integrated into the routine health care setting. Developing semi-automated clinical reporting platforms and machine-learning algorithms to help with establishing the clinical validity of variants and matching variant characteristics to phenotypic characteristics may be one useful approach. Another possible approach, Boerwinkle said, is crowd-sourcing the curation of genomic data and the interpretation of variants in order to tap into the tremendous expertise in the health community.
Resource Sharing: Developing an Analysis Commons
The successful academic health care centers will be those that move their discoveries into the translational space and make those data and their translational experiences available to researchers for further discovery, Boerwinkle said. This will create a cycle of clinical care and research, leading to a learning health care system.
In terms of a resource for sharing research, he said that researchers are not generally going to the EHR for health data. Rather, health data are moved to a data warehouse, outside of the EHR, where researchers can mine the information. This calls for the creation of an environment that brings the data (genomic and phenotypic) and the analytical tools in proximity, in a secure analysis commons. The data in the commons would be made available to authorized users, after appropriate vetting. DNAnexus is one example of an analysis platform that can be accessed by the research community, Boerwinkle said.
Ubiquitous Pharmacogenomics Consortium
Three ongoing initiatives in Europe can provide examples of the incentives and challenges of genomic data sharing, Turner said. The Ubiquitous Pharmacogenomics (U-PGx) consortium is a pan-European endeavor involving 16 beneficiaries across 10 European Union (EU) countries.3 U-PGx is funded for 5 years by a Horizon 2020 grant through the European Commission. The centerpiece of the project, Turner said, is a 3-year pharmacogenomic implementation study occurring at one or more sites in seven EU countries, including the Royal Liverpool Hospital in the United Kingdom. The study will evaluate implementation metrics, patient outcomes, and cost effectiveness. Over the 3-year study period, 8,000 participants will be recruited to either a standard-of-care arm or a pharmacogenomic-tailored care arm in which they will be preemptively genotyped for 50 variants in 13 pharmacogenes. A guideline will be given to their care practitioners (who may or may not follow the dosing recommendation). Participants will be followed for a minimum of 12 weeks in order to identify adverse drug reactions. Turner noted that, due to the nature of the funding call, this is not a randomized controlled trial but rather an implementation study. The 3-year study period is divided into two 18-month blocks. For each participating country, one 18-month block will be designated to standard of care, and
the other block will be designated to pharmacogenomic-tailored care. The order of these two arms has been randomized across the seven countries.
Turner described a number of operational factors involved in getting the study up and running. In obtaining ethical approvals, the Netherlands and the United Kingdom were instrumental in first going to their organizing regulatory bodies to demonstrate that this is an implementation study and not a randomized controlled trial. This helped facilitate ethical approvals on a similar basis for most of the other partners, he said. Another factor was that the Dutch Pharmacogenetics Working Group’s guidelines needed to be translated into the language of each participating country.4 It is not just the language, but the cultural acceptability that must be considered in translation, Turner said. For example, in trying to capture quality-of-life information, time trade-off questions were not acceptable to patients in Italy or the United Kingdom, and the questionnaire needed to be revised accordingly. Interestingly, he said, time trade-off questions were more acceptable to people in Austria and the Netherlands.
One of the benefits of working together is economies of scale, Turner said. Genotyping is being performed locally at the seven sites, on the same platform. All of the information is then sent to bio.logis (a genetic information management firm in Frankfurt) to carry out standardized genotype interpretation. As new evidence is accrued, the bio.logis site is updated, and standardized information is automatically returned to the sites. In the United Kingdom, for example, the university is responsible for the study, patients are recruited from the Royal Liverpool Hospital, genotyping will be carried out in a clinically accredited laboratory, and data will be submitted to bio.logis and then fed back to both the hospital and the study case report form. One challenge, Turner said, is the wide spectrum of current standards within health care systems across the EU. Greece is generally paper based, while the Royal Liverpool Hospital is paperless. As such, it has been necessary to allow the sites the flexibility to develop ways to make the genetic information available to their practitioners. In the Netherlands and the United Kingdom, the plan is to have an interruptive clinical decision support system. At other sites, practitioners may simply receive a PDF document. In an auxiliary approach, the genetic information will be associated with a quick response (QR) code on a credit card–sized “Safety-Code” card held by the patient, and primary care practitioners can easily access the information by scanning the QR code using a smartphone.5
Turner listed several outcomes that should be measured in pharmacogenomic studies. For example, there is no mandate to follow the pharmacogenomic recommendations as part of this study, so it is important to look at guideline adherence by practitioners. Unfortunately, current systems are not designed to record this information. In addition, for pharmacogenomic studies it is important to take drug adherence into account. If a patient is not taking a prescribed drug, then the reasons for the non-adherence should be sought. Other outcomes to consider include surrogate markers, health care use and associated costs, prescription changes, clinical utility, and, ultimately, quality-of-life information.
Warfarin Pharmacogenomics Implementation
There are several factors that can affect the determination of clinical utility for genomics-based programs, including patient ethnicity, the baseline characteristics of the health care service, specific drug indications, and implementation knowledge and attitudes. Turner elaborated on these factors in the context of warfarin pharmacogenomics implementation. Warfarin remains the most commonly used anticoagulant in the United Kingdom. It is the third most common cause of adverse drug reactions leading to hospitalization, and approximately 40 percent of the variation in dose among patients is ascribed to two genes, VKORC1 and CYP2C9, Turner said.
He summarized the main findings of the three pivotal, randomized controlled trials of genotype-guided warfarin dosing. The EU-PACT study found a statistically significant benefit with a genotyping strategy versus a standard loading strategy (Pirmohamed et al., 2013). The simultaneously published Clarification of Optimal Anticoagulation through Genetics (COAG) study, however, did not find genotype-guided dosing to have greater benefit than clinically guided dosing (Kimmel et al., 2013). The more recent GIFT trial, which Turner noted was powered for clinical endpoints, found the genotype strategy to have a statistically significant reduction in the primary clinical composite endpoint versus standard dosing (Gage et al., 2017). Together, the balance of evidence is in favor of warfarin pharmacogenomics, he said.
One potential reason why the COAG trial did not show a benefit was that there was more racial heterogeneity among the COAG trial participants, Turner postulated. More than 97 percent of the EU-PACT participants were Caucasian. In contrast, the COAG trial participants were 67 percent Caucasian, 27 percent African American, and 6 percent Hispanic. African American participants actually fared worse in the genotype arm compared to the clinical dosing algorithm, he said, which might be due to the fact that the COAG study did not take into account genotype variants that are specific to African Americans (Cavallari and Perera, 2012). This
demonstrates the need to be mindful of evaluating genotypes that are pertinent to the specific patient population being treated, he said.
Another point to note is that the EU-PACT trial was carried out in Sweden and the United Kingdom, and genotyping was found to be likely more cost effective in the United Kingdom than in Sweden (Verhoef et al., 2016). It is plausible that this is due to Sweden being better at managing warfarin than the United Kingdom, in which case the incremental benefit of a pharmacogenomic strategy would probably be less in Sweden than in the United Kingdom, Turner suggested.
On this foundation, a small warfarin pharmacogenomics implementation initiative was launched in the northwest area of England. The initiative employed point-of-care testing to inform warfarin prescribing at three different hospitals. One of the sites was not as effective at recruiting participants as the other two sites, Turner said. Feedback from the research nurses indicated that the staff at that site felt they were too busy to take part in and learn the process. They felt that direct-acting oral anticoagulants were already better, and they did not seem to have much belief in pharmacogenomics, he said. This experience shows the need to become more inclusive and ensure that knowledge is being shared and education is being provided to practitioners up front to help overcome institutional cultural barriers.
100,000 Genomes Project
The last implementation initiative Turner described was the U.K.-wide 100,000 Genomes Project, which is conducting whole-genome sequencing of approximately 75,000 individuals to obtain 100,000 genomes: 75,000 germline genomes and 25,000 somatic genomes.6 Participants are being recruited through 13 genomic medicine centers throughout the United Kingdom, which are hubs for a total of more than 80 different health care trusts. The genomic information is being entered into a data storage center and is being supplemented with clinical information from both the hospital and the primary care environment, when available. Researchers can access this information by joining Genomics England Clinical Interpretation Partnerships. Information is accessible through a virtual private network, but individual-level data cannot be downloaded. All activities are monitored, which, Turner said, ensures that access to the data is provided on an equitable basis, while assuring patients that their data are being appropriately handled.
6 For more information on the 100,000 Genomes Project, see https://www.genomicsengland.co.uk/the-100000-genomes-project (accessed January 3, 2018).
Incentives to Collaborate and Share Data
One main incentive for collaboration and data sharing, Turner said, is funding, as was the case for the Ubiquitous Pharmacogenomics consortium. The ability to increase statistical power, enhance recruitment, and create economies of scale are other incentives for collaboration and data sharing. Working in collaboration can also offer risk mitigation and shared solutions (e.g., the ability to streamline ethical approvals by working together). Finally, as sample sizes increase, there is the potential for greater impact and greater ability to show clinical utility and cost effectiveness.
Understanding implementation is critical for moving genomics from research into clinical care, Orlando said. Genomics researchers generally have a project and corresponding funding, and they figure out how to make the project work, handling challenges as they come along. The downside of this approach, Orlando said, is that no one learns from these one-off solutions. Implementation without structure provides no guidance on implementation in other settings. The solutions are not generalizable and provide no model for the development of sustainability, she said.
Implementation scientists focus on creating generalizable approaches. As an example, Orlando mentioned the work of Peter Pronovost and colleagues on reducing central line infections. Applying an implementation science approach, they used a checklist-based intervention to significantly reduce infections. The key to success, Orlando explained, was not what was on the checklist, but the process of creating the checklist at each clinical site. Each institution tailored the intervention to its own site based on its issues and workflow.
Applying an implementation science approach could help advance the field of genomics, Orlando said. Clinical trials use traditional measures to assess the outcomes of various interventions. However, clinical trials exist within a larger framework, and elements of that framework affect how those trials are conducted and how effective they are. Those implementation elements (including clinician behavior) are not frequently measured. Standardized implementation measures are needed to assess implementation outcomes that in turn will affect traditional clinical utility outcomes, Orlando said.
The IGNITE Network
The Implementing Genomics in Practice (IGNITE) network is currently funding six different genomic intervention projects.7 Each of the six research sites is implementing a different genomic intervention alongside a community partner. The goal is to create shared knowledge about the implementation experience and to facilitate knowledge transfer to others interested in implementing genomic interventions in their own health care settings.
The research sites in the IGNITE network have agreed to use an implementation science–based approach to their studies, and the Consolidated Framework for Implementation Research (CFIR) was used as the guiding framework for the network. The difference between a framework and a model, Orlando said, is that a framework essentially lists constructs, while a model describes relationships, such as how particular constructs inform an outcome. The CFIR compiled all of the existing models and data pertaining to implementation and presented them as a series of constructs. Overall there are 25 constructs and 13 sub-constructs, organized into five domains, outlined in Box 4-1 (Damschroder et al., 2009).
Using the CFIR constructs as a starting point, the IGNITE network’s Common Measures Working Group identified constructs that were particularly important for genomic medicine, Orlando said. The resulting list was used to help develop new measures and create a common dataset across all of the projects. The list has been revisited several times as new sites and affiliates have become involved. The CFIR constructs and sub-constructs that ranked the highest for relevance to genomic medicine included costs, evidence strength and quality, available resources, leadership engagement, and champions, she said. Constructs that were ranked second highest included relative advantage, adaptability, complexity, patient needs and resources, implementation climate, relative priority, internal implementation leaders, planning, and executing. These are the aspects that people conducting implementation projects should consider measuring, Orlando said. Not all of the constructs have established measures. Although the Common Measures Working Group has developed several measures, additional measures are still needed, she said.
Because the characteristics of the patient are not currently part of the CFIR, a list of non-CFIR constructs was also developed. Non-CFIR patient measures identified thus far include demographics, self-reported health, health care activation, the social determinants of health, information sharing, health literacy, family and community assessments, attitude toward
genomic intervention, and preference for who returns results. Additional patient measures will be added over time, Orlando said.
The working group also drafted and recently published a genomic medicine implementation research model, incorporating the constructs identified, how they interact, and how they might affect interventions (Orlando et al., 2017). Using an implementation science framework to guide genomic intervention implementations provides several additional benefits, Orlando said. First, it provides a broader frame for assessing health disparities. It also increases the reach of the intervention and allows for more generalizable interventions. Finally, it can increase the effectiveness of the intervention.
In summary, Orlando said that including system measures along with traditional measures and outcomes will help create sustainable interventions. The IGNITE network is a test bed for implementation research. A draft genomic medicine implementation research model is available, Orlando said, adding that her group is looking for opportunities to refine it. A method for identifying high-priority CFIR constructs has been developed
for others to use, and a list of non-CFIR high-priority constructs is in progress and should be updated with work that others are doing in this area.
The Role of EHRs
Several of the projects that were discussed earlier, such as eMERGE and IGNITE, rely on EHRs, Grodman noted, and he asked panelists to comment on the role of EHRs in the implementation of genomics-based programs and, specifically, on whether the incorporation of the EHR is necessary for implementation, or whether there are alternatives.
Association with the EHR is necessary, Chisholm said, and it is unlikely that an alternative would be developed at this point. There will be opportunities to rethink how EHR systems are constructed (e.g., cloud based, smartphone accessible, etc.) and how to improve the quality of the data being captured (e.g., standards). The rate-limiting step, he said, is that most people who enter data into EHRs have a very limited amount of time for the patient encounter and EHR data entry. A key question for genomic medicine implementation is how best to get the data out of the EHR, Chisholm continued. He acknowledged that most genomics researchers do not use the EHR, instead working with some sort of data extraction or data mining approach that captures the EHR data and reconfigures it to be more amenable to searching. An essential element for functionality is the ability to use natural language processing and other approaches to capture data that have been entered in the EHR as free text.
Another important aspect to consider is how best to enter genomic test results into the EHR, Chisholm said. Clinical decision support has been mentioned multiple times throughout the workshop, he noted. It is important to monitor how often the pharmacogenomic decision support tool is triggered and how often the physician overrides the recommendation (which, he added, is a significant percentage of the time). With regard to genomic results, it is unlikely that whole-genome sequences would be entered into the EHR, he said, as it would be an overwhelming amount of data. There is some precedent for not entering medical data into the EHR, he said. For example, medical imaging is not entered into the EHR, but instead is accessible through a separate picture archiving and communication system (PACS). The eMERGE network has been considering ancillary genomic systems (analogous to a PACS) and the rules that would be applied to move information from the ancillary system to the EHR. As ClinVar and ClinGen evolve, they might provide some of the rules that can be used to move those data.
The EHR is a necessary part of modern health care, Boerwinkle said,
but there are many demands being placed on this relatively new technology, both in health care and in research. EHR tools are continuously evolving to become more useful, primarily for improving the quality of health care. There has also been a change in the attitude of EHR vendors, who are moving beyond using the EHR for billing to using it to improve the quality of care, Boerwinkle said, noting that the vendors seem much more engaged in trying to incorporate new information, including genomics.
Using the EHR on a daily basis is part of a clinician’s job, and it represents a significant improvement compared to prior approaches to managing patient data, Orlando said. However, it can be burdensome for a clinician to have to enter the numerous diagnosis codes requested by researchers. Natural language processing, common data models, and data standards may help both clinicians and researchers improve data collection, Orlando said. Her research project for IGNITE has used SMART on FHIR (Substitutable Medical Applications, Reusable Technologies on Fast Healthcare Interoperability Resources) to integrate a family history tool into the EHR, helping both clinicians and researchers.
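To illustrate the kind of integration Orlando described, the sketch below parses a hand-made, abbreviated FHIR search bundle of FamilyMemberHistory resources. A real SMART on FHIR app would retrieve such a bundle with an authorized request (e.g., GET [base]/FamilyMemberHistory?patient=<id>) and would also handle authorization and paging, none of which is shown here.

```python
def summarize_family_history(bundle: dict) -> list[tuple[str, str]]:
    """Return (relationship, condition) pairs from a FHIR search bundle."""
    pairs = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "FamilyMemberHistory":
            continue
        rel = res.get("relationship", {}).get("text", "unknown")
        for cond in res.get("condition", []):
            pairs.append((rel, cond.get("code", {}).get("text", "unknown")))
    return pairs

# Abbreviated, hand-made example bundle (a real server response carries
# many more fields, such as ids, status, and coded terminology).
bundle = {
    "resourceType": "Bundle",
    "entry": [{
        "resource": {
            "resourceType": "FamilyMemberHistory",
            "relationship": {"text": "mother"},
            "condition": [{"code": {"text": "Breast cancer"}}],
        }
    }],
}
print(summarize_family_history(bundle))  # -> [('mother', 'Breast cancer')]
```

Because FHIR defines a standard resource for family history, the same parsing logic can serve any EHR that exposes a conformant interface, which is precisely what makes SMART on FHIR apps substitutable.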
To ensure equity and inclusivity for patients and to engage as many clinicians as possible, involvement in genomic or pharmacogenomic implementation endeavors should not be limited to hospitals that already have EHR systems, Turner said. Some necessary information is still not routinely collected (e.g., quality of life, drug adherence), which can be frustrating for researchers, he said. Natural language processing might help, but there is also a need to educate clinicians to collect this information.
Implementation Science in Practice
Less than 2 percent of the National Institutes of Health genomics research portfolio currently includes implementation science–based approaches (Roberts et al., 2017). Implementation science frameworks may represent an opportunity to design genomics-based screening programs within health care systems so that the data obtained would indicate whether the routine use of genomics in clinical practice is appropriate (NASEM, 2016).
Within the IGNITE network, there is currently only one implementation scientist, so there is an opportunity to bring in additional expertise in this space. There are also opportunities for bringing eMERGE and the Clinical Sequencing Evidence-Generating Research (CSER) consortium together to consider using the implementation science–based framework for the return of results and to address multiple other questions specific to genomics that an implementation science approach could help to answer, Orlando said.
The time may be right to bring implementation science tools and
approaches to the return of results, Chisholm agreed. It will be important to conduct experiments to determine the appropriate way to proceed, rather than settling on a common framework up front, he said. Those involved with the All of Us research program plan to adopt a centralized model for storing data and providing access to those data. The data will be held at a central location, and the researchers will be brought to the data, rather than the data being taken to the researchers. This means that the data will be stored in a standardized format and common tools for analysis will be developed, Chisholm said. Still, there is space for experimentation and implementation science to better define approaches to querying data and returning results.
A traditional implementation science approach may not be working for genomic medicine, and it is not clear why, Boerwinkle said. There is a need to step back and ask why integrating genomics into routine health care is not happening, despite successful implementation science studies. One possible reason for the lack of widespread adoption is that not enough evidence has yet accumulated on the clinical utility of such an integration. For a small part of the genome, such as variants found in the diseases designated as Tier 1 by the Centers for Disease Control and Prevention (e.g., hereditary breast and ovarian cancer, Lynch syndrome, and familial hypercholesterolemia), it may be time for implementation. Within that implementation space, experimentation is important, Boerwinkle said, because it will be helpful to determine the best way to implement an evidence-based recommendation for cascade screening. Most pharmacogenomics, however, is in the Tier 2 space, where there is information about clinical validity but limited evidence about clinical utility.
Differences in Quality Among Genetic Testing Laboratories
There are more than 700 different laboratories across the United States doing genetic testing, and it is difficult to determine if the products coming out of these laboratories are equivalent in quality, said a workshop participant. When data are not shared, there is a risk of the testing being duplicated—for example, when a patient changes insurers. This can be wasteful, assuming that the quality of the product from different laboratories is the same, the participant added.
As with clinical laboratories generally, genetic testing laboratories vary in quality, Chisholm said. Data sharing may actually feed back into the system and improve quality over time. ClinGen has conducted analyses of different laboratories, including analyses of their annotation processes and of the curation of the variants that they have labeled as pathogenic, likely pathogenic, or benign. Where discrepancies were found, ClinGen helped to adjudicate them and built tools to help resolve them, Chisholm noted. Some of the discrepancies were simply due to addition errors in the score used to determine whether a variant is pathogenic or likely pathogenic. Some individuals have suggested that payers should cover testing only for those laboratories willing to have their data entered into ClinVar, so that the data can be evaluated, Chisholm said, which would have a huge impact on data quality from the laboratories.
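The addition errors Chisholm mentioned are easy to picture with a small sketch. The evidence categories, weights, and thresholds below are invented for illustration and are only loosely in the spirit of ACMG/AMP-style points-based evidence tallies, not any real laboratory's rubric.

```python
# Hypothetical points-based variant classification. A simple addition
# error when summing the evidence points would misclassify the variant,
# which is the kind of discrepancy ClinGen found and helped adjudicate.
def classify(evidence_points: dict[str, int]) -> str:
    """Sum evidence points and map the total to a classification."""
    total = sum(evidence_points.values())
    if total >= 10:
        return "pathogenic"
    if total >= 6:
        return "likely pathogenic"
    return "uncertain significance"

print(classify({"PS1": 8, "PM2": 2}))  # -> 'pathogenic'
```

Encoding the tally in software rather than computing it by hand is one way such arithmetic discrepancies can be eliminated at the source.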
In his experience as founder of a genetic testing laboratory and as a former chair of the American Clinical Laboratory Association, Grodman said, most clinical laboratories operate under the strict standards of the Clinical Laboratory Improvement Amendments and the College of American Pathologists, as well as stringent state requirements. Genetic testing takes place both in academic centers and in clinical reference laboratories, and in both cases the goal is to provide a quality result. However, it is important to be aware that knowledge about the pathogenicity of variants can change over time, and such a change does not mean that a laboratory did the test wrong or that the test did not work, Grodman said.
Data Sharing Incentives for the Long Term
NHGRI has funded innovative research programs such as eMERGE, CSER, and IGNITE, which facilitate information sharing among researchers and health care systems. It is important to identify the incentives for health care systems to participate in massive data sharing networks and to share data across systems, in the event that the research programs are no longer supported by government funding, Ginsburg said. Some forward-thinking health care systems are building capability, which clearly advances their own research agenda and perhaps their clinical agenda (to be competitive in their local environments), he said, but what happens to data sharing when IGNITE, CSER, and eMERGE cease to exist?
Demonstrating the value of data sharing is important, Chisholm said. For example, the Chicago Area Patient-Centered Outcomes Research Network is a clinical data research network that tracks the movement of participants among different health care systems in the Chicago area (discussed by Kho in Chapter 5). There is value in understanding how porous a health care system is and how frequently people move between health care systems, he said. From both quality-of-care and cost-management perspectives, there is value in knowing that a patient who frequently presents at one emergency department is also presenting at emergency departments elsewhere in the area.
At the most basic level, institutions will be driven to promote data sharing when sharing becomes an integral part of quality, management, and reimbursement metrics, Boerwinkle said. Data from the HealthConnect experience show that the number of frequent users of health services
is much higher than previously thought. Frequent users had previously been defined as those who were repeatedly using the same health care system; when systems are connected, it becomes clear that people are moving around among them. Patients are going to demand data sharing and really drive it through programs such as Sync for Science, Boerwinkle said. Because the funding situation is different in the United Kingdom, Turner said, data sharing there is being driven by the pursuit of clinical utility and cost effectiveness. Evidence of cost effectiveness (not just in the United Kingdom, but worldwide) would drive governments to support data sharing because it would save money for the system overall.
Data Sharing by Individuals
A workshop participant revisited the concept of allowing individuals to share their own data as a potential solution. Although there are advantages to this mechanism, there are also many practical barriers. For example, the infrastructure, data tools, and money are given to institutions that have their own intellectual property rights and programmatic goals, and there are significant and costly privacy and security concerns. How could individual data sharing be implemented practically? The barriers are not insurmountable if individuals are empowered, Boerwinkle said, and they could be empowered by a clear policy or court decision establishing that individuals have authority over their own data. Once the first steps are taken, the barriers will begin to wither. A first step of creating a data marketplace, or some other incentive for people to focus on individual data sharing, could move the concept forward. The Genetic Alliance is one organization trying to do this, a workshop participant said, but there are practical and infrastructure challenges. It is not useful for individuals to have access to their data if they have nowhere to share the data or no easy mechanism for doing so.