
An Evidence Framework for Genetic Testing (2017)



Appendix C

Using Evidence to Inform Clinical and Policy Decisions

Different types of evidence are considered when evaluating genetic tests, and governmental, private, and public organizations have different thresholds for the types of evidence that they use in their decision-making processes. This appendix provides examples of how different organizations use evidence to inform decision making regarding genetic tests.

As there is no clear standard for what represents sufficient evidence (Goddard et al., 2012), different groups might reach different conclusions about the same tests (Ferreira et al., 2002; IOM, 2011a). Groups that conduct evidence assessments for genetic tests and make decisions based on those assessments in the form of clinical practice guidelines and recommendations include the Agency for Healthcare Research and Quality (AHRQ), the Evaluation of Genomic Applications in Practice and Prevention (EGAPP), the American College of Medical Genetics and Genomics (ACMG), the American Society of Clinical Oncology (ASCO), the Clinical Pharmacogenetics Implementation Consortium (CPIC), and the National Society of Genetic Counselors (NSGC). Additional examples are presented of how the Centers for Medicare & Medicaid Services (CMS), TRICARE, and private health insurers use evidence to inform policy decisions regarding genetic tests.

CLINICAL PRACTICE GUIDELINES AND RECOMMENDATIONS

“Clinical Practice Guidelines (CPGs) are statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” (IOM, 2011c, p. 15). They are generally written for use in clinical settings to help practitioners know when to order, how to interpret, or how to report genetic tests. They can be developed by a variety of organizations—professional societies, government agencies, advocacy groups, health plans, or commercial companies—each with different methods and standards. In the absence of relevant literature for a systematic review, low-quality evidence or expert opinion drives the guideline, and consensus methods are often used to reach a recommendation when evidence is poor or lacking. Thus, CPGs are limited by the quality and depth of the scientific evidence base. They might also be limited by the development process, for example, if the developers do not represent multiple disciplines and stakeholders or if conflicts of interest and personal biases are not made transparent (IOM, 2011c).


The National Guideline Clearinghouse1 collects clinical practice guidelines and makes them accessible to a broad audience, from health providers to purchasers. For guidelines to be included in the database, they must meet certain criteria.2 Since June 2014, guidelines must meet six inclusion criteria derived from the Institute of Medicine (IOM) guidance on trustworthy clinical practice guidelines and standards for systematic reviews (IOM, 2011a,c). The new criteria have motivated several organizations that regularly assess genetic tests to modify their internal processes to meet those expectations. Given those changes, more recently published guidelines might describe the relevant evidence more clearly and in a more standardized way.

Agency for Healthcare Research and Quality (AHRQ)

AHRQ, an agency of the Department of Health and Human Services, is charged with improving health care quality by fostering patient-centered outcomes research. AHRQ, through its Evidence-based Practice Centers (EPCs), sponsors the development of evidence reports and technology assessments. The EPCs systematically review the relevant scientific literature on specific topics and form partnerships and collaborations with other medical and research organizations. AHRQ’s EPC evidence reports and technology assessments inform individual health plans, providers, and purchasers (AHRQ, 2013).

AHRQ’s stepwise process for conducting systematic reviews of medical tests includes developing the topic and structuring the review, searching and collecting evidence, assessing the quality and applicability of individual studies, grading the body of evidence, and synthesizing the evidence. The strength of the evidence for each key question is assessed based on the risk of bias, consistency, directness, and precision of the evidence base. Because most evidence for tests is indirect (i.e., it does not measure the outcome of interest) and must be organized into a chain of evidence to reach conclusions about the outcome of interest, a framework must be used. The strength of the body of evidence for each linkage in the chain must be graded separately (AHRQ, 2012).
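To make the chain-of-evidence idea concrete, the sketch below represents each linkage with ratings on the four domains named above (risk of bias, consistency, directness, precision) and rolls them into an overall strength. The scoring rule, the Linkage class, and the rating labels are illustrative assumptions, not AHRQ’s actual grading algorithm.

    # Illustrative sketch only: represent an indirect "chain of evidence" and
    # grade each linkage on the four AHRQ domains. The scoring rule is a
    # simplification, not AHRQ's published method.
    from dataclasses import dataclass
    from typing import Dict, List

    DOMAINS = ("risk_of_bias", "consistency", "directness", "precision")

    @dataclass
    class Linkage:
        """One link in the chain (e.g., test result -> change in management)."""
        name: str
        ratings: Dict[str, str]  # domain -> "good" | "fair" | "poor"

    def grade_linkage(link: Linkage) -> str:
        """Toy rule: all domains good -> high; any domain poor -> low; else moderate."""
        values = [link.ratings.get(d, "poor") for d in DOMAINS]
        if all(v == "good" for v in values):
            return "high"
        if any(v == "poor" for v in values):
            return "low"
        return "moderate"

    def grade_chain(chain: List[Linkage]) -> str:
        """A chain of evidence is only as strong as its weakest graded linkage."""
        order = {"low": 0, "moderate": 1, "high": 2}
        if not chain:
            return "insufficient"
        return min((grade_linkage(link) for link in chain), key=order.get)

    chain = [
        Linkage("analytic validity", dict(risk_of_bias="good", consistency="good",
                                          directness="good", precision="good")),
        Linkage("clinical validity", dict(risk_of_bias="fair", consistency="good",
                                          directness="fair", precision="good")),
        Linkage("clinical utility",  dict(risk_of_bias="fair", consistency="fair",
                                          directness="poor", precision="fair")),
    ]
    print(grade_chain(chain))  # "low": the indirect clinical utility link limits the chain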

In certain instances, evidence of diagnostic accuracy might be sufficient. For a new test that is as good as, or better than, an existing test that has established clinical utility, evidence beyond diagnostic accuracy might not be necessary to support conclusions about the new test. Those instances might include situations where later medical decisions and outcomes are comparable between the two tests or where the new test allows for greater avoidance of harms. It must be reasonable to assume that efficacy is not affected by which test is used (AHRQ, 2012).

Evaluation of Genomic Applications in Practice and Prevention (EGAPP)

Launched in 2004, this Centers for Disease Control and Prevention (CDC) initiative seeks to support the timely and efficient translation of genomic applications into medical practice by collecting, synthesizing, and reviewing data. EGAPP supports an independent, multidisciplinary panel of experts (the EGAPP Working Group, or EWG) that is tasked with

  • developing a transparent and accountable process;
  • minimizing conflicts of interest;
  • optimizing existing systematic review methods to better address the challenges presented by genomic technologies (mainly complexity and rapid development); and
  • providing recommendations clearly linked to the evidence (Teutsch et al., 2009).

___________________

1 The NGC is an initiative of the Agency for Healthcare Research and Quality in the Department of Health and Human Services. Available at: https://www.guideline.gov/about/index.aspx (accessed April 24, 2016).

2 Available at: https://www.guideline.gov/about/index.aspx (accessed January 31, 2017).

EGAPP has focused its resources on tests with wide application, potential impact, and high demand, including tests for diagnosis, screening, risk assessment/susceptibility, prognosis, and therapeutic purposes. Tests not considered by EGAPP include those that are being addressed by other entities, such as tests for prenatal screening or rare diseases. Potential topics for review are prioritized using a series of questions pertaining to the health burden, associated practice issues, and other concerns. Systematic reviews for selected topics are commissioned (sometimes in partnership with AHRQ). Reviews are structured and defined based on the specific medical disorder, the test, and the clinical scenario. EGAPP clearly defines what evidence is acceptable: original data, systematic reviews, or meta-analyses in the peer-reviewed literature; peer-reviewed unpublished data (i.e., Food and Drug Administration [FDA] data) and other reviews can be considered on a case-by-case basis, but editorials and opinions are specifically excluded (Teutsch et al., 2009).

EGAPP’s process is guided by a framework. Because there is little evidence that directly answers questions about the effectiveness of genetic tests, the framework helps to construct a “chain of evidence” that links several pieces of evidence to answer questions about a test’s effectiveness. This also means that observational studies, which are not designed to address effectiveness, must be assessed (Teutsch et al., 2009).

Although the standard of evidence will vary depending on the clinical scenario and other contextual factors, the EGAPP process provides a foundation for evidentiary standards to help guide decision making. EGAPP’s process includes a multidisciplinary, independent assessment of evidence with a focus on appropriate outcomes; it emphasizes methods for assessing individual study quality, the adequacy of the evidence for each component of the framework, and the overall body of evidence. Further, EGAPP values the summary and synthesis of the evidence and the identification of evidentiary gaps (Teutsch et al., 2009).

EGAPP has developed a detailed list of criteria by which the internal validity of individual studies should be assessed; the criteria are specific to the component of the framework being evaluated (analytic validity, clinical validity, or clinical utility). It also uses a hierarchy of study designs and data that corresponds to levels from 1 (highest) to 4 (lowest) for each component of the framework (see Table C-1). EGAPP’s criteria for grading the quality of the evidence for the individual components (analytic validity [AV], clinical validity [CV], and clinical utility [CU]) specify the minimum number and level of data or studies required for the evidence on AV, CV, or CU to be rated convincing, adequate, or inadequate; these ratings indicate the strength of the evidence for the linkages in the chain of evidence and are meant to minimize the risk that conclusions based on the evidence are wrong. Based on these ratings of the body of evidence, the EGAPP Working Group then assigns a level of certainty of the net benefit of the test to accompany its recommendations (Teutsch et al., 2009).


TABLE C-1 EGAPP Hierarchy of Data and Study Designs for Analytic Validity, Clinical Validity, and Clinical Utility

Level 1
  Analytic Validity: Collaborative study using a large panel of well-characterized samples; summary data from well-designed external proficiency testing schemes or interlaboratory comparison programs
  Clinical Validity: Well-designed longitudinal cohort studies; validated clinical decision rule
  Clinical Utility: Meta-analysis of randomized controlled trials

Level 2
  Analytic Validity: Other data from proficiency testing schemes; well-designed peer-reviewed studies (e.g., method comparisons, validation studies); expert panel-reviewed FDA summaries
  Clinical Validity: Well-designed case-control studies
  Clinical Utility: A single randomized controlled trial

Level 3
  Analytic Validity: Less well designed peer-reviewed studies
  Clinical Validity: Lower-quality case-control and cross-sectional studies; unvalidated clinical decision rule
  Clinical Utility: Controlled trial without randomization; cohort or case-control study

Level 4
  Analytic Validity: Unpublished and/or non-peer-reviewed research, clinical laboratory, or manufacturer data; studies on performance of the same basic methodology but used to test for a different target
  Clinical Validity: Case series; unpublished and/or non-peer-reviewed research, clinical laboratory, or manufacturer data; consensus guidelines; expert opinion
  Clinical Utility: Case series; unpublished and/or non-peer-reviewed studies; clinical laboratory or manufacturer data; consensus guidelines; expert opinion

NOTE: FDA = Food and Drug Administration.

SOURCE: Reprinted by permission from Macmillan Publishers Ltd.: Genetics in Medicine (Teutsch et al., 2009), copyright (2009).
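The hierarchy in Table C-1 can be read as a simple lookup from study design to evidence level for each framework component. The sketch below encodes two of the columns that way; the dictionary, the level_of helper, and the fallback to level 4 for unrecognized designs are illustrative assumptions, not an EGAPP tool (analytic validity is omitted for brevity).

    # A sketch of Table C-1 as a lookup structure mapping study designs to
    # EGAPP hierarchy levels (1 = highest, 4 = lowest) for two components.
    # Illustrative only; the fallback to level 4 is an assumption.
    EGAPP_HIERARCHY = {
        "clinical_validity": {
            1: ["well-designed longitudinal cohort study", "validated clinical decision rule"],
            2: ["well-designed case-control study"],
            3: ["lower-quality case-control or cross-sectional study",
                "unvalidated clinical decision rule"],
            4: ["case series", "unpublished or non-peer-reviewed data",
                "consensus guideline", "expert opinion"],
        },
        "clinical_utility": {
            1: ["meta-analysis of randomized controlled trials"],
            2: ["single randomized controlled trial"],
            3: ["controlled trial without randomization", "cohort or case-control study"],
            4: ["case series", "unpublished or non-peer-reviewed study",
                "clinical laboratory or manufacturer data",
                "consensus guideline", "expert opinion"],
        },
    }

    def level_of(component: str, design: str) -> int:
        """Return the hierarchy level for a study design (4 if unrecognized)."""
        for level, designs in EGAPP_HIERARCHY[component].items():
            if design in designs:
                return level
        return 4

    print(level_of("clinical_utility", "single randomized controlled trial"))  # 2
    print(level_of("clinical_validity", "well-designed case-control study"))   # 2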

American College of Medical Genetics and Genomics (ACMG)

ACMG has issued many clinical practice guidelines, including 28 that the organization currently supports. The guidelines, all published since 2001, range in topic (e.g., by disease or by clinical service or technology) (ACMG, 2016). ACMG guidelines are primarily expert opinion supported by a literature review; however, the development process is not fully described in the guideline documents.

American Society of Clinical Oncology (ASCO)

ASCO provides a Methodology Manual on its website that describes its process for developing guidelines (ASCO, 2013a,b, 2015). ASCO outlines procedures for the entirety of its process, from nominating a topic and addressing conflicts of interest among panelists to guideline dissemination and updates, in a manner consistent with the IOM criteria (Loblaw et al., 2012). The approach was adapted from those developed by AHRQ, the US Preventive Services Task Force (USPSTF), and Grading of Recommendations Assessment, Development and Evaluation (GRADE). Explicit details on conducting a systematic review are presented, including a five-step process that assesses individual study quality and the strength of the evidence as a whole and culminates in a rating of the strength of the resulting recommendation. Rating scales are defined for each step. Criteria to determine the quality of a study and its risk of bias are specific to study design. The aggregate body of evidence is graded on risk of bias, consistency of results, directness of evidence, and precision of results to give a total strength (high, intermediate, low, or insufficient). The rating of the strength of the body of evidence reflects the level of “confidence that the available evidence reflects the true magnitude and direction of the net effect.” In cases of insufficient evidence, ASCO defaults to expert consensus for guidance and allows recommendations to be based on formal or informal expert consensus when the evidence base is insufficient to inform a recommendation (ASCO, 2013b). A process for developing consensus recommendations is given, along with a threshold for defining consensus (agreement by 75% or more of the panelists). Further, rather than issue a systematic review without a recommendation because of a lack of evidence, ASCO has adopted a modified Delphi approach (Loblaw et al., 2012). An external review process is used, and ASCO outlines mechanisms by which updates are initiated (ASCO, 2013a). ASCO also has the option to issue Provisional Clinical Opinions as a rapid response to important new information, based on evidentiary assessment by the National Cancer Institute’s Physician Data Query Editorial Board (ASCO, 2015).
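As a small illustration of the consensus threshold described above, the helper below checks whether 75 percent or more of panelists agree with a draft recommendation. The function name and vote format are assumptions made for illustration, not part of ASCO’s manual.

    # Illustrative helper for the 75% consensus threshold described above.
    def consensus_reached(votes, threshold=0.75):
        """votes: iterable of booleans, True when a panelist agrees with the
        draft recommendation. Returns True if the agreeing share meets the
        threshold."""
        votes = list(votes)
        return bool(votes) and sum(votes) / len(votes) >= threshold

    print(consensus_reached([True] * 9 + [False] * 3))  # 9 of 12 agree (75%) -> True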

There is no guidance that directs authoring panels to organize evidence according to a specific framework. However, some authoring panels chose to focus on evidence that shows potential clinical utility (Van Poznak et al., 2015; Harris et al., 2016). Finally, the manual does not direct guideline authoring panels to identify gaps in evidence, but most note areas where more research is needed.

Clinical Pharmacogenetics Implementation Consortium (CPIC)

CPIC was established in 2009 as a joint initiative of the Pharmacogenomics Knowledgebase (PharmGKB) and the Pharmacogenomics Research Network (PGRN). Its goal is to produce clinical practice guidelines for pharmacogenomic therapies that help clinicians incorporate genetic testing into prescribing decisions for specific drugs. CPIC guidelines have a wide audience that includes the National Institutes of Health (NIH) and FDA (Caudle et al., 2014); they are developed following IOM practices (IOM, 2011a,c) with a published description of the underlying methods (Caudle et al., 2014), follow a standardized format, and are publicly available.

The kinds of evidence considered when assessing the relationship between genotype and response to a drug might include “randomized clinical trials with pharmacogenetic-based prescribing versus dosing not based on genetics; pre-clinical and clinical studies demonstrating that drug effects or concentration are linked to functional pharmacogenetic loci; case studies associating rare variants with drug effects; in vivo pharmacokinetic/pharmacodynamics studies for drug or reference drug plus variant type; and in vitro metabolic and/or transport capacity for the drug plus variant type.” As is true for other kinds of genetic tests, randomized controlled trials (RCTs) relating genetic tests to clinical outcomes are rare (Caudle et al., 2014).

The methods used to assess the quality of each study are not given; however, reviewers examine the evidence for each finding and assign a rating. The qualities by which the body of evidence is graded are not given but appear to be based on the number, quality, and consistency of the individual studies, generalizability to routine practice, and directness of the evidence. Reviewers assign a grade of high, moderate, or weak to describe the evidence base. The strength of the conclusions based on that body of evidence is determined such that “it is possible for an evidentiary conclusion based on many papers, each of which might be relatively weak, to be graded as ‘moderate’ or even ‘strong,’ if there are multiple small case reports or studies that are all supportive with no contradictory studies” (Caudle et al., 2014).
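The grading rule quoted above, that many individually weak but consistently supportive studies can yield a moderate or strong conclusion when no contradictory studies exist, can be sketched roughly as follows. The numeric cutoffs are illustrative assumptions, not CPIC’s published criteria.

    # Rough sketch of the quoted grading logic; the numeric cutoffs are
    # illustrative assumptions, not CPIC's actual criteria.
    def grade_conclusion(supportive: int, contradictory: int) -> str:
        if contradictory > 0:
            return "weak"      # any contradictory evidence keeps the grade low
        if supportive >= 10:
            return "strong"    # many consistent supportive reports, none contradictory
        if supportive >= 3:
            return "moderate"
        return "weak"

    print(grade_conclusion(supportive=12, contradictory=0))  # "strong"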

National Society of Genetic Counselors (NSGC)

NSGC issues guidelines and resources for clinical practice pertaining to access, assessment, and delivery of genetic counseling services. It also provides guidance on the use of genetic information in health care, such as for disease screening, predictive testing, disease diagnosis, or treatment.

In 2015, NSGC issued new guidance for the creation of “evidence-based practice guidelines.” The guidance was created in response to the new criteria implemented by the National Guideline Clearinghouse in 2014 (NSGC, 2015). NSGC still issues other, less rigorously developed types of publications, which are called “Practice Resources.” Its manual describes a 12-step process for the development of evidence-based guidelines that incorporates well-known and widely used tools for conducting systematic reviews. An interdisciplinary group is formed to complete the systematic review, and conflicts of interest are assessed. Systematic reviews are to be completed with a clearly defined literature search and selection strategy. NSGC supports using the GRADE system to assess and evaluate the body of evidence for each outcome of interest. Individual study quality is assessed using established guidance for determining risk of bias. The individuals conducting the systematic review create an evidence report, which later informs the recommendations made by a separate guideline group. For issues with insufficient evidence on which to base a recommendation, the guideline group might use a structured consensus approach to reach recommendations. It is proposed that guidelines be reconsidered every 3 to 4 years to remain compliant with the National Guideline Clearinghouse criterion that guidelines be no more than 5 years old (NSGC, 2015).

Levels of Evidence

Rather than use a strict standard for the evidence required to support or discourage the use of a test, most organizations indicate the level of evidence associated with each recommendation. Table C-2 shows the systems used by the six organizations described above to grade a body of evidence. These levels of evidence accompany any recommendations made so that users can understand the degree to which the recommendations are evidence based; they reflect the level of certainty with which a recommendation is made. Many organizations use similar grading systems that generally consist of designations from low to high and reflect the collective confidence of the authors conducting the review. AHRQ, ASCO, and NSGC closely follow the system proposed by GRADE, whereas others are not easily compared (e.g., EGAPP). Finally, some groups, such as ACMG, do not currently have a system for evaluating the body of evidence that supports their guidelines.


TABLE C-2 Approaches to Assessing the Body of Evidence Used for Genetic Tests (Levels of Evidence)

AHRQ
  High: High confidence that the evidence reflects the true effect. Further research is very unlikely to change our confidence in the estimate of effect. The body of evidence has few or no deficiencies; the true findings are stable.
  Moderate: Moderate confidence that the evidence reflects the true effect. Further research may change our confidence in the estimate of effect and may change the estimate. The body of evidence has some deficiencies; findings are likely to be stable, but some doubt remains.
  Low: Low confidence that the evidence reflects the true effect. Further research is likely to change the confidence in the estimate of effect and is likely to change the estimate. The body of evidence has major or numerous deficiencies; additional evidence is needed before concluding either that findings are stable or that the estimate of effect is close to the true effect.
  Insufficient: Evidence either is unavailable or does not permit a conclusion. No evidence is available, or the body of evidence has unacceptable deficiencies that preclude reaching a conclusion.

EGAPP
  Specific quality criteria are set for analytic validity, clinical validity, and clinical utility based on study design and threats to internal validity.
  Convincing: Observed effect is likely to be real.
  Adequate: A higher risk that the effect may be influenced by study flaws.
  Inadequate: Too many flaws to confidently associate the outcome with the gene/variant/test being studied.

ACMG
  Not available.

ASCO
  High: High confidence that the available evidence reflects the true magnitude and direction of the net effect (e.g., balance of benefits versus harms), and further research is very unlikely to change either the magnitude or direction of this net effect.
  Intermediate: Intermediate confidence that the available evidence reflects the true magnitude and direction of the net effect. Further research is unlikely to alter the direction of the net effect; however, it might alter the magnitude of the net effect.
  Low: Low confidence that the available evidence reflects the true magnitude and direction of the net effect. Further research may change the magnitude and/or direction of this net effect.
  Insufficient: Evidence is insufficient to discern the true magnitude and direction of the net effect. Further research may better inform the topic. Reliance on consensus opinion of experts may be reasonable to provide guidance on the topic until better evidence is available.

CPIC
  High: Evidence includes consistent results from well-designed, well-conducted studies.
  Moderate: Evidence is sufficient to determine health effects, but strength is limited by the number, quality, or consistency of individual studies.
  Weak: Evidence is insufficient to assess the effects on health outcomes because of limited power, flaws in study design or conduct, gaps in the chain of evidence, or lack of information.

NSGC
  Uses the GRADE approach.
  Starting points for evaluating the quality level: RCTs start high; observational studies start low.
  Factors that may decrease the quality level of a body of evidence: study limitations, inconsistency of results, indirectness of evidence, imprecision of results, and high risk of publication bias.
  Factors that may increase the quality level: large magnitude of effect, dose–response gradient, and all plausible biases would reduce the observed effect.
  High: Further research is very unlikely to change our confidence in the estimate of effect.
  Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
  Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
  Very Low: Any estimate of effect is very uncertain.

NOTE: ACMG = American College of Medical Genetics and Genomics; AHRQ = Agency for Healthcare Research and Quality; ASCO = American Society of Clinical Oncology; CPIC = Clinical Pharmacogenetics Implementation Consortium; EGAPP = Evaluation of Genomic Applications in Practice and Prevention; GRADE = Grading of Recommendations Assessment, Development and Evaluation; NSGC = National Society of Genetic Counselors; RCT = randomized controlled trial.

SOURCES: Brozek et al., 2009; Teutsch et al., 2009; IOM, 2011a; ASCO, 2013a; NICE, 2013; AHRQ, 2014; Caudle et al., 2014; NSGC, 2015.
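The GRADE logic summarized in the NSGC entry, where randomized evidence starts high, observational evidence starts low, and the level moves down for limitations and up for strengthening factors, can be sketched as below. Moving one level per factor is a simplification assumed for illustration.

    # Illustrative sketch of the GRADE starting points and modifiers described
    # in the NSGC entry above; one level per factor is a simplification.
    LEVELS = ["very low", "low", "moderate", "high"]

    def grade_body(randomized: bool, downgrades: int = 0, upgrades: int = 0) -> str:
        """downgrades: count of limitations (e.g., inconsistency, indirectness,
        imprecision, publication bias); upgrades: count of strengthening factors
        (e.g., large effect, dose-response gradient)."""
        start = LEVELS.index("high") if randomized else LEVELS.index("low")
        score = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
        return LEVELS[score]

    print(grade_body(randomized=True, downgrades=1))   # "moderate"
    print(grade_body(randomized=False, upgrades=1))    # "moderate"
    print(grade_body(randomized=False, downgrades=1))  # "very low"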

EVIDENCE USED BY POLICY MAKERS

Beyond clinical decisions, policy makers also rely on evidence to inform their decisions regarding genetic tests. Examples include CMS, TRICARE, and health insurers in the private sector.

Centers for Medicare & Medicaid Services

For the assessment of genetic tests, CMS requires that they meet the medically reasonable and necessary criteria; a test must demonstrate clinical utility as well as analytic and clinical validity. For tests undergoing a traditional review, MolDX3 will review only tests for which the best submitted study showing evidence of clinical utility is at least a retrospective data model or a stronger design (such as a prospective observational study, a prospective-retrospective trial, or a randomized prospectively controlled trial); tests whose clinical utility evidence comes only from retrospective observational studies or preclinical studies are not reviewed (Palmetto GBA, 2015). MolDX requires submitting labs to provide data about the test’s analytic performance, including accuracy, sensitivity, specificity, precision, reagent and sample stability, and reference levels. For evidence of clinical validity, MolDX references EGAPP recommendations and notes that the indication, intended population, and clinical performance (i.e., sensitivity, specificity, and positive predictive value) must be described (Palmetto GBA, n.d.).

___________________

3 MolDX reviews test registration applications and technical assessments to confirm that each test meets medically reasonable and necessary criteria for coverage decisions.

After review by experts, an executive committee makes final decisions about test coverage (Palmetto GBA, 2015). Those decisions are based on “hard clinical science,” and anecdotal information is not considered (Incollingo, 2014). As of April 2016, more than 1,100 tests had been excluded from Medicare coverage based on the MolDX evaluation process; however, they could be eligible for reconsideration if more information becomes available (Palmetto GBA, 2016).
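The clinical performance measures that MolDX asks laboratories to report, sensitivity, specificity, and positive predictive value, can be computed from a 2×2 table of test results against true disease status, as in the sketch below. The formulas are standard definitions; the example counts are hypothetical.

    # Sensitivity, specificity, and positive predictive value from 2x2 counts.
    # Standard definitions; the example counts below are hypothetical.
    def performance(tp: int, fp: int, fn: int, tn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),  # true positives among those with the condition
            "specificity": tn / (tn + fp),  # true negatives among those without the condition
            "ppv": tp / (tp + fp),          # positive results that are true positives
        }

    print(performance(tp=90, fp=5, fn=10, tn=895))
    # {'sensitivity': 0.9, 'specificity': 0.994..., 'ppv': 0.947...}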

TRICARE

The Defense Health Agency (DHA) is conducting a temporary demonstration project to evaluate laboratory-developed tests, establish a list of tests deemed “safe and effective,” and develop a process to add new tests to the list. The results of the project will provide information to determine coverage for TRICARE beneficiaries (DoD, 2015). TRICARE issued a policy to cover all genetic tests with FDA approval and also extends coverage to several others as determined by DHA. The program will remain in effect for 3 years (2014 to 2017) to assess its feasibility, cost-effectiveness, and efficiency.

DHA’s project is focused on safety and efficacy as determined by the analytic validity, clinical validity, and clinical utility of the test. The “reliable evidence” used, and the weight given to it, as specified by TRICARE policy (32 CFR 199.2(b)), is as follows:

  • well-controlled trials of clinically meaningful end points, with scientifically valid data that are published in refereed medical literature
  • published formal technology assessments
  • published reports of national or professional medical associations
  • published reports of national medical policy organization positions
  • published reports of national expert opinion organizations
  • abstracts, anecdotal evidence, and personal professional opinion are specifically not included (DoD, 2015)

Because it is unlikely that reliable evidence meeting the requirements above is available for rare diseases (those affecting fewer than 20,000 persons in the United States), TRICARE’s rare disease policy indicates that applicable evidence might include “trials published in refereed medical literature, formal technology assessments, national medical policy organization positions, national professional associations, and national expert opinion organizations” (DoD, 2015).

Private-Sector Health Technology Assessment

Little information is publicly available regarding the internal processes used by private insurers, and most consider the information to be proprietary (Sullivan et al., 2009). In general, payers base decisions on evidence that an intervention (e.g., a genetic test) is medically necessary if it is linked to improved outcomes for patients and is better than the current standard of care or no care at all (Frueh, 2013; IOM, 2015). However, there is significant variability in coverage of genomic testing among payers (Meckley and Neumann, 2010; Trosman et al., 2010; Hresko and Haga, 2012). For evidence of clinical utility of a test, payers generally consider RCTs to be the gold standard of evidence; however, other clinical trial designs and observational studies are also considered (NASEM, 2016).


In the absence of evidence of clinical utility, or of consensus regarding evidentiary standards of clinical utility, payers rely on a variety of information sources to develop their coverage policies (Trosman et al., 2011; Graf et al., 2013). In addition to peer-reviewed studies published in medical journals, payers consider reviews of published studies on a particular topic, such as those conducted by AHRQ; evidence-based consensus statements or guidelines from professional societies or other nationally recognized health care organizations, such as ASCO (IOM, 2015); and guidance documents developed by multi-stakeholder groups, such as the Center for Medical Technology Policy (NASEM, 2016).
