3


Perspectives from Patients, Providers, and Laboratory Representatives

Important Points Highlighted by Individual Speakers

•   Implementing a learning health care system would allow for sharing of test data, save time and resources, and add value by facilitating a focus on patient outcomes.

•   A flexible regulatory process for utilizing next-generation sequencing for routine screenings will allow for a hypothesis-generating approach to diagnosis and treatment of patients.

•   Requiring that LDTs demonstrate their equivalence to IVDs through rigorous proficiency testing could establish uniformly high standards for companion diagnostics.

•   Next-generation sequencing will require a new approach to thinking about clinical trial designs, because every patient will in essence have to be treated as unique.

A variety of individuals use companion diagnostics and the results of these tests, including patients, health care providers, and clinical laboratory employees. Representatives of each of these three end-user groups described the value and problems with co-developed companion diagnostics along with the changes that can be expected as next-generation sequencing becomes a more prominent part of clinical practice.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





Focus on Patients

Thinking about patients at the beginning of the process rather than at the end focuses the discussion on outcomes, said Sharon Terry, president and chief executive officer of Genetic Alliance. Patients should not be considered just the end users of genetic tests. From the patient perspective, health care providers do not always have a clear sense of what is most useful to patients. Providers may overtreat, undertreat, inappropriately treat, or not treat at all, based on the available information. What insurance will cover is often unclear, which can lead to disagreements over what should and should not be prescribed or performed. Patients also may make demands, some of which are appropriate and others of which may be inappropriate. “None of these are clear-cut,” Terry said.

A fundamental problem, Terry said, is that the incentives to understand disease are low. Medicine is focused on trying treatment after treatment, but what is not captured during that process are data that could be used to determine what is effective and what is not effective. The key problem is finding an incentive to have a greater understanding of the biology of the disease, Terry said. What group will enforce assessments of value based on outcomes? In other industries, the consumer is empowered to do this, Terry noted, but in medicine, “all the stakeholders, including patients, make decisions that are disconnected from the consequences.” Developing companion diagnostics may even be an interim solution toward what is actually needed for understanding disease. There may be no need for a companion diagnostic after acquiring this information, she said.

Learning Health Care System as a Potential Solution

To address the difficulty of thinking on a systems level about patient care, the nation needs a learning health care system, Terry said. This will enable people to “understand the disease, the progression of the disease, the treatment, and the reaction to the treatment, adverse or not.” In short, she said, a learning health care system “will help us to understand the outcomes that we seek.” Similarly, transparency in the performance of tests, the data generated by those tests, and the consequences of those tests can lead to best practices that can be shared within the system, which can save time and money. This new way of thinking about the health care system involves all stakeholders and provides an opportunity to create new metrics and new value chains tied to outcomes. People have to be willing to risk what those who are sick risk every day, which is changing the model, finding a new solution, and possibly destroying a current business model, she said. That will be difficult in medicine, particularly given the lack of empowerment among the people who receive care.

Terry read a quote from the book How We Do Harm: A Doctor Breaks Ranks About Being Sick in America: “Proponents of science as a foundation for health care have not come together to form a grassroots movement, and until this happens, all of us will have to live with a system built on pseudoscience, greed, myths, lies, fraud, and looking the other way” (Brawley and Goldberg, 2012, p. 27). “That’s pretty harsh, but I think it’s real and true,” Terry said. Health care is going to have to be like a civil rights issue, she said. People have a right to demand not just tests and treatments, but also solutions to the health problems they confront.

Use of Tests in Oncology

In oncology, biomarkers are used for a variety of purposes, including diagnosis, prognosis, and predictions of response, toxicity, risk of secondary cancers, and familial risk, noted Mark Robson, clinic director of the clinical genetics service in the Department of Human Genetics at Memorial Sloan-Kettering Cancer Center. Biomarkers also take a variety of forms and use different technologies, including imaging, immunohistochemistry, and somatic or germline DNA sequencing. Companion diagnostic tests have been used for testing hypotheses rather than generating hypotheses, and this has significant implications for study design when trying to evaluate the benefit of the tests, Robson said.

The main clinical problem, Robson said, is that no matter how good a biomarker is, the concordance between the biomarker and drug response is often incomplete. The state of a biomarker is just one piece of information in a much broader assemblage which includes such factors as the extent of disease and the results of prior therapies. This complex picture makes it difficult to assess the clinical utility of a test, which essentially becomes “a value judgment,” Robson said, and a matter of defining what makes the test worth using.
“There doesn’t seem to be a consensus about the kinds of metrics that we should use to establish sufficient clinical validity or sufficient clinical utility to allow us to progress forward,” he said. Is overall survival more important than progression-free survival? What endpoints should be used? Are thresholds important? Those working on drug development have had these conversations for a long time, Robson said, and these discussions should be explored as they pertain to companion diagnostics as well.

It is also not clear which trial designs are optimal for answering the sorts of questions that Robson raised about clinical validity and utility. Requiring randomized controlled trials may set too high a bar for several reasons, Robson said. In small subsets of patients, it is challenging to “design statistically robust studies without screening thousands and thousands of patients, which then becomes fiscally impossible,” he said. Furthermore, the randomized controlled trial population may not reflect clinical reality. From a clinician’s standpoint, Robson concluded, the major challenge is delineating clinical validity and the utility of a companion diagnostic in a way that informs clinical decision making in real-world circumstances. Some of the stakeholder solutions address that challenge, but they do not fix it.

In silico methods that are used to predict functional responses to therapies can be useful, but the data are not always publicly available, Robson said. If a laboratory collects and analyzes data but does not publicly release those data, other groups may waste time and resources studying a question that has already been answered. Also, the use of in silico methods to predict response may be application specific. This method may work where treatment response is dependent upon loss of function—for instance, with BRCA mutations and PARP inhibitors—but it will not necessarily be useful in more complicated settings.

In oncology, sample composition offers additional diagnostic and treatment challenges, including the sample consisting of an admixture of tumor tissue with normal tissue, tumor heterogeneity, and the evolution from primary tumor to metastasis. As Robson said, “Different tests from different sites and different points in a patient’s journey may have different meaning, and that needs to be accounted for.”

Today, the clinical approach to treatment is fairly linear and based on hypothesis testing, Robson said. For example, after reviewing the clinical data for a patient, an oncologist may order a test for the BRAF V600E mutation to determine whether the patient is a candidate for Zelboraf, and then a decision is made concerning the appropriate treatment on the basis of that test. This model could evolve to account for NGS by using the technique to gather data from the patient to generate hypotheses, not test them.
The extra information that is obtained is part of the patient evaluation and should therefore be reimbursed, Robson said. The regulatory process that is established for companion diagnostics should be flexible so as to allow for the future accommodation of routine screening using next-generation sequencing.

Challenges for Clinical Laboratories

Patient care occurs in a world that is far from ideal, said John Pfeifer, vice chair for clinical affairs, pathology, and immunology and professor at Washington University School of Medicine. For example, patients present with advanced rather than early disease, and sometimes they do not adhere to treatment protocols. Health care providers and clinical laboratories have to deal with many such issues every day, Pfeifer said.

Clinical laboratories have a different set of issues than health care providers, Pfeifer continued. First, the laboratories typically face limitations in the quantity of tissue available for testing. Pfeifer’s laboratory is a “full-service genetics laboratory” which performs tests from conventional cytogenetics to next-generation sequencing. A typical PCR-based test, whether a uniplex or a multiplex test, requires in the range of 25 to 50 nanograms of DNA, Pfeifer said. Many times in routine clinical practice, only small samples are available from biopsies or fine-needle aspirates. About 7 percent of cases have less than 100 nanograms of DNA, 12 percent have less than 200 nanograms, and another 30 percent have between 200 and 750 nanograms, Pfeifer said. As a result, laboratories generally need to make decisions about which tests they are going to perform.

Laboratories also face demands for testing of numerous loci from the same specimen. For example, a patient who presents with non-small-cell lung cancer needs a number of loci tested for first-line therapy, including those for ALK, BRAF, EGFR, PTEN, and RAS. Limits on the amount of test substrate can have a major impact on the testing that is actually performed. Similarly, requirements for slide-based assays, such as interphase fluorescence in situ hybridization (FISH), further constrain testing, Pfeifer said, because producing samples for such assays reduces the amount of tissue available for other tests.

Another challenge with tissue samples is that the type of sample and the preparation needed for a companion diagnostic test may not always align with the sample type that is least invasive to obtain and that makes the most sense from the patient care perspective. If the sample was preserved in ethanol or methanol instead of formalin, it may be less amenable to companion diagnostic tests. Cost considerations also factor into testing decisions, Pfeifer said.
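The tissue constraint Pfeifer describes amounts to a triage problem: given a finite DNA yield, the laboratory must decide which assays fit. A minimal sketch of that arithmetic follows; the per-assay DNA requirements and the priority ordering are illustrative assumptions, not figures from the workshop, apart from Pfeifer’s roughly 25 to 50 nanograms per PCR-based test.

```python
# Hedged sketch: triaging a test menu against a limited DNA yield.
# Per-assay amounts and priorities are assumptions for illustration.

def triage(tests, available_ng):
    """Run tests in priority order until the DNA budget is exhausted.

    tests: list of (name, required_ng, priority); lower priority = more urgent.
    Returns (performed, deferred) lists of test names.
    """
    performed, deferred = [], []
    remaining = available_ng
    for name, required_ng, _ in sorted(tests, key=lambda t: t[2]):
        if required_ng <= remaining:
            performed.append(name)
            remaining -= required_ng
        else:
            deferred.append(name)
    return performed, deferred

# Hypothetical first-line non-small-cell lung cancer menu
# (loci from the text; DNA amounts and ordering assumed).
menu = [
    ("EGFR", 50, 1),
    ("ALK FISH", 150, 2),  # slide-based assays consume disproportionate tissue
    ("RAS", 50, 3),
    ("BRAF", 50, 4),
    ("PTEN", 50, 5),
]

done, skipped = triage(menu, available_ng=200)
print(done)     # tests that fit within a 200 ng sample
print(skipped)  # tests deferred for lack of material
```

With a 200-nanogram sample, a sample size that Pfeifer noted roughly 12 percent of cases fall below, only the two highest-priority assays fit, which is exactly the forced prioritization he describes.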
For example, one way to reduce costs is to bring a patient into a cytology clinic and perform a fine-needle aspiration sampling of a lymph node rather than to operate on a patient and perform an excisional biopsy. Rapid advancements in technology and improved understanding of disease also affect testing. For example, new evidence may indicate that a mutation involved in one disease appears to be involved in a different disease as well, whether for diagnosis or treatment. This raises the question, In what circumstances should that mutation be tested?

Formal Regulation of LDTs

“Companion diagnostics have put the clinical laboratory in a very difficult position,” Pfeifer said. “We’re in a catch-22. On one hand, there are companion diagnostics that have been approved for testing specific patient populations, but by definition, they are not applicable to a lot of the clinical testing that laboratories are asked to do, based on current paradigms. . . . So we are forced to use that companion diagnostic as an off-label use or an LDT.” But in some cases, it is advised to pursue a companion diagnostic model because the LDT is not appropriate for the testing. “Which model is it?” Pfeifer asked.

FDA does not have the resources to regulate LDTs, he said, but LDTs need to be regulated. LDTs exist in part because the companion diagnostic model is not comprehensive. Thus, Pfeifer said, one option would be to change the paradigm by requiring demonstration of the equivalence of LDTs with FDA-approved IVD companion tests through rigorous proficiency testing. This proficiency testing requirement would cut both ways, Pfeifer said. An LDT will need to meet the standards of the approved companion diagnostic test, although Pfeifer stated that “plenty of laboratories hold themselves to a higher standard and will only use an LDT if it meets the standard of the companion diagnostic.” At the same time, a companion diagnostic is only as good as the laboratory using it. Regulatory oversight of LDTs should be formalized, Pfeifer said, adding that using CLIA, which is an established paradigm, would be the fastest and easiest way to do so because the pathway is already defined.

Response to Potential Solutions

Pfeifer responded to the stakeholders’ suggested solutions for improving the current companion diagnostic model as outlined in Box 1-2. The American Clinical Laboratory Association proposal recognizes that LDTs are part of the genetic testing landscape and proposes a regulatory framework consistent with the precedent for other types of laboratory testing. A disadvantage to this proposal is that it would apply to the direct-to-consumer market, in which health care providers are not involved in the testing for inherited mutations, Pfeifer said. What does this model look like, Pfeifer asked, when the patient-doctor relationship is removed?

Today, most companion diagnostics separate patients into two categories—responders and non-responders.
But when genetic tests are used to examine hundreds or thousands of genes, patients will be grouped into smaller and smaller categories. Eventually, every patient will have to be treated as unique. This will require a radically different approach to clinical trial designs, along with bioinformatics solutions and statistical tests to validate tests that apply to more than one disease, Pfeifer said.

Implications of Next-Generation Sequencing

The presenters and workshop participants continued to explore the possible effects that the application of next-generation sequencing for whole genome or whole exome sequencing could have on genetic testing and on disease classification. “Patient-centeredness is at the heart and soul of the modern era of stakeholder engagement,” said Muin Khoury, director of the Office of Public Health Genomics at the Centers for Disease Control and Prevention, in agreement with Terry. He said that he hoped that stratifying patients by disease into treatment groups would be more like “n of a few” as opposed to a single-subject “n of 1” classification. The n of 1 situation is a potentially intractable problem, he said, because it is not clear how the principles of evidence-based medicine would apply to this situation. Khoury said that he preferred to think about stratified medicine rather than personalized medicine so that there are a manageable number of subgroups for each disease. If that is not the case and each person is unique, then the rare disease model of regulation would apply.

There is also a need for a general methodology, or at least an intellectual framework, to delineate which sequence alterations potentially predict response, and because directly evaluating every single sequence variant in small n of 1 clinical trials is not feasible, Robson said, “we need some creative thinking about designs.” Evaluation of Genomic Applications in Practice and Prevention, along with other groups such as the Clinical Sequencing Exploratory Research consortium, the Blue Cross and Blue Shield Association Technology Evaluation Center, and Kaiser Permanente, serves as a reviewer of the evidence base.

Pfeifer called attention to the need to anticipate the coming era of next-generation sequencing. Next-generation sequencing technology will solve some of the current problems associated with regulation and reimbursement. It will be able to identify all four major categories of genetic mutations: single-nucleotide variants, small insertions and deletions, copy number variants, and large-scale structural variants such as translocations and inversions.
The technology will allow for the testing of hundreds or thousands of genes with the same amount of analyte, and it will be markedly less expensive than running many individual tests. Pfeifer’s laboratory already has started doing next-generation sequencing because some clinical settings and specimen types call for the use of that technology.

The idea that a single platform is going to replace mutation-specific tests is currently no more than a hypothesis, Robson said. “A lot of people are deeply invested in this idea,” he said, “but whether or not it’s actually going to turn out to be the case remains to be seen.”

Pfeifer said he was unsure about when the science is proven in the context of follow-on drugs. A drug can be shown to be safe and effective in a specific patient population, but that is not the complete answer—it is just what is known at the moment, he said, and people tend to overgeneralize what they know, as they do with tests. When tests are first approved, they appear to be solid, but over time, as more is learned about the test and the disease associated with the test, it becomes clear that the test is not necessarily optimized with regard to patient groups and precise threshold levels. The question then becomes how to incorporate new information into the formulation and use of the test, he said.
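The cost claim behind the panel-versus-serial comparison reduces to a simple break-even calculation: one multigene panel undercuts single-gene testing once enough loci are in question. A hedged sketch, in which both prices are assumptions for illustration and not figures cited at the workshop:

```python
# Hedged sketch: break-even point between serial single-gene tests
# and a single multigene panel. Both prices are assumed, not sourced.

COST_PER_SINGLE_GENE_TEST = 400.0  # assumed price per serial test
COST_OF_MULTIGENE_PANEL = 1500.0   # assumed price for one panel

def cheaper_strategy(genes_to_test):
    """Return which strategy is cheaper for a given number of genes."""
    serial_total = genes_to_test * COST_PER_SINGLE_GENE_TEST
    return "panel" if COST_OF_MULTIGENE_PANEL < serial_total else "serial"

# With these assumed prices, serial testing wins for three genes
# (3 x 400 = 1200) but loses at four genes (4 x 400 = 1600).
print(cheaper_strategy(3))
print(cheaper_strategy(4))
```

The exact crossover depends entirely on the prices assumed; the point of the sketch is only that the break-even gene count falls quickly once a differential diagnosis spans several genes, which is the situation Robson describes for serially tested syndromes.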

Pfeifer agreed that NGS could be the “ultimate companion diagnostic.” But three issues are important to consider, he said: which technical procedure to use, how to improve the reproducibility of bioinformatics analyses with the same dataset, and which type of regulatory environment would support linking data analysis with patient outcomes. Pfeifer added that the same information can be fed into two different bioinformatics systems and produce two different answers. “That should be profoundly concerning to anybody who is proposing the clinical utilization of next-generation sequencing,” he said.

Robson pointed out that next-generation sequencing has different applications in the sense that it can be used to produce different amounts and types of information. Degrees of sequence depth, the number of genes, the extent of the genome and, in oncology, whether germline or somatic mutations are covered all raise issues relating to the complexities of interpretation and incidental findings. “It’s important to maintain an awareness of those subtleties,” Robson said.

Nevertheless, Robson added that his institution is already using next-generation sequencing in cases where potential germline predispositions are difficult to define phenotypically. For instance, he said, “pediatric bone marrow failure syndromes can be due to a number of different things that are hard to sort out phenotypically and are very expensive to test serially.” As a result, Memorial Sloan-Kettering uses next-generation sequencing panels to gather as much information as possible for conditioning regimens prior to transplantation. It also uses this technology with some cancer susceptibility syndromes that are difficult to sort out, such as oligopolyposis in the colon, which can be caused by mutations in any of a number of genes. “Again, if you do it serially, it gets pretty expensive pretty fast,” Robson said.
“So a multiplex panel is actually less costly and faster.”

With a previously unidentified mutation, it may be possible to predict the functional consequences using a generic paradigm that takes into account whether the mutation causes premature termination, abnormal splicing, or another functional effect, Robson said. In that case, a generic paradigm can be applied with a relatively high degree of confidence. Methodologies are already available that assign levels of confidence that a particular variation is likely to be deleterious and functionally significant, but in other cases, such as determining the importance of a missense mutation in the non-kinase domains of PI3-kinase, this approach will not work as well. “From a regulatory standpoint,” Robson asked, “what confidence annotation do you require before that [information] becomes part of the label?” He added that oophorectomies are performed based on BRCA1 mutations that are “probably pathogenic, but not necessarily definitely pathogenic.”

Frueh said that next-generation sequencing will generate a huge amount of information which can then inform the decisions made by physicians and patients. But he also noted that other forms of information are also available, such as epigenetic information and data about the microbiome. When, he asked, do physicians and patients become responsible for deciding how much and what types of information they want?

Cost of Sequencing

Pfeifer pointed to the existence of a “crossover point for health care financing.” Next-generation sequencing will soon provide information that is “irresistible” to have because it will contain more information about the disease. Payments will move increasingly to a bundled model that will provide a set amount of money for diagnosis and treatment. Health care networks will then be able to decide how to spend that money, and it will force health care systems to prioritize what they are doing. The question then will be whether a test, used as indicated or in an off-label way, can provide useful information that is not otherwise available.

Wylie Burke, professor and chair, Department of Bioethics and Humanities, University of Washington, said that while genomic technology could improve quality of care and reduce costs, it also has the potential to drive costs upward because there is a temptation to acquire as much data as possible. Burke noted that the health care system cannot afford to pay for all of the research that needs to be done to determine the utility of genomic information. “That’s not health care—or at least at a certain point it’s not health care,” she said, “because it’s not evidence-based interventions to improve outcomes. It’s learning in the hope that we may improve outcomes in the future.” Terry also pointed to the potential for genomics to generate disparities in health care because some patients may have better access to information or to providers and may know how to navigate the system to gain the information that they need. Pfeifer explained several other ways in which genomics can add costs.
Clinicians may order complex genomic tests in patients who are not well enough to benefit from the results. Or a test may suggest but not guarantee that a costly treatment will be effective. “Some of these cases we have seen are home runs, but the reality is that not everything is,” he said. Sometimes a test indicates that a patient will not benefit from a treatment, but a provider will use that treatment anyway. While some are willing to add costs to acquire more information, Pfeifer said, there is a reluctance to use the information for inaction.

Direct-to-Consumer Testing

The use of genetic tests directly by consumers for the detection of germline mutations presents a number of issues concerning consumer choice, innovation, ethics, and education. Terry said that the direct-to-consumer route is important because patients need to have some sort of relationship with the system and need to be able to make sure that their information and data are being used properly. Robson emphasized that the quality of the information and education provided to consumers along with their genetic test results for inherited mutations is important. Having access to genetic results enables patients to be at the table when value conversations are happening, Terry said.

Direct-to-consumer genetic testing is an experiment that is pushing the envelope, Terry said. Projects like the Personal Genome Project are careful about saying what has been validated and what has not been, but many questions need to be resolved, such as who owns a person’s genome sequence, where it will be stored, and how it will be distributed. Old models to address these questions are not necessarily going to work, Terry said.

Finally, Pfeifer agreed with Terry that everyone is a consumer of health care, involved at different points along the spectrum of the health care system. As taxpayers, people want investments that are effective and provide value. As consumers of health care, they want tests and treatments that are safe and effective. “We’re all in this together,” Pfeifer said. “We are just at different points of the spectrum.”