Development, Identification, and Evaluation of Trustworthy Clinical Practice Guidelines
Abstract: In this final chapter, the committee discusses national policy issues related to clinical practice guidelines (CPGs), addressing questions of who should develop and fund CPGs and how those that are trustworthy should be identified. Furthermore, the committee discusses approaches to harmonization, dissemination (particularly the role of the National Guideline Clearinghouse [NGC]), and evaluation of guidelines. Currently a diverse group of organizations develops CPGs; the committee supports their efforts, but acknowledges the associated challenges in promoting and verifying adherence to standards. The committee recommends that the Secretary of Health and Human Services establish a public–private mechanism to examine, at the request of developer organizations, the procedures they use to produce guidelines and to certify, for a limited period of time, organizations whose processes meet those standards. The committee urges the Agency for Healthcare Research and Quality (AHRQ) to examine the causes of inconsistencies among existing CPGs and prioritize them for harmonization. Finally, the committee urges that AHRQ continue to provide a clearinghouse function through the NGC, but require higher standards for guideline inclusion and efficient identification of guidelines from certified organizations. AHRQ also should be involved in evaluation of the proposed standards, their effect on the quality of guidelines, and, ultimately, their effect on patient care.
Previous chapters discussed standards for trustworthy guidelines, explored methods for their development and implementation, and put forth committee recommendations. In this final chapter, the committee discusses national policy questions related to clinical practice guidelines (CPGs), such as who should develop guidelines; how CPGs that meet the proposed standards should be identified; whether there is a continuing need for the National Guideline Clearinghouse (NGC); whether there should be a process to harmonize related CPGs and identify recommendations for quality measures; and how proposed standards and impact of standards-based CPGs should be pilot-tested and evaluated. Finally, the committee makes recommendations regarding the identification and certification of trustworthy CPGs, research on harmonization of inconsistent CPGs, and evaluation of the proposed standards and the impact of trustworthy clinical practice guidelines on healthcare and patient outcomes.
WHO SHOULD DEVELOP GUIDELINES?
Researchers have raised the possibility of centralizing development of CPGs in one federal organization (Shaneyfelt and Centor, 2009). The potential benefits from this arrangement could include reduced bias, a reduction in multiple CPGs on the same topic, and improved guidance for future research. A single organization that develops CPGs based on the proposed standards and provides assurance that all CPGs meet the standards would be efficient. Although the Agency for Healthcare Research and Quality (AHRQ) performed a guideline development function early in its history, it was not the sole producer of CPGs during that period. It was a politically difficult function for a public agency subject to congressional appropriations, and the agency has not attempted to reestablish that activity.
Throughout its study, the committee has recognized the many public and private organizations participating in clinical practice guideline development. The Institute of Medicine report, Knowing What Works, concluded that a pluralistic approach to guideline development, while not without problems, was desirable (IOM, 2008). This committee recognizes value in a diverse community of developers and the unique relationships each has with its constituency, relevant experts, practitioners, and funding sources. Many organizations have made major investments in technical staff and other resources devoted to CPG development (Coates, 2010). In addition, many have earned public trust for their efforts. Organizations may have one or more goals in creating a guideline: educating members and the public on a care topic, reducing unjustified practice variations, meeting members’ demands for guidance, or assuring a role for a specialty in the treatment of a particular condition or procedure.
The committee sees greater value in having a variety of organizations developing CPGs than in limiting all development to a single agency. With multiple developers, however, there is likely to be a continuing problem of multiple CPGs on the same topic (see discussion of harmonization below). Given the diversity of organizations developing CPGs and their differing needs, the committee recognizes both the desirability and hazards of proposing standards and priorities.
Furthermore, given the large number of development organizations and their differing capabilities, some may attempt to meet IOM standards but fail to achieve all of them. It thus could remain difficult for guideline users to recognize which CPGs are trustworthy. (The committee addresses this problem of identifying trustworthy CPGs below.)
Current CPG development generally is financed by each organization creating a guideline. At times two or more organizations jointly develop a CPG, pooling their staff and financial resources. At other times development funds may originate from interested commercial parties (McClure, 2010). The committee believes the potential for conflicts of interest is great when funding for CPG development or for the supporting organization comes from stakeholders, particularly the pharmaceutical and device industries or specialty societies, which might benefit or whose members might gain from guideline recommendations. The committee also recognizes that the proposed standards are likely to add to development costs for some organizations, and may force some small groups to exit the guidelines business.
Because members of a guideline development group usually serve as volunteers, a major expense in production is often the systematic review (SR). The IOM Committee on the Development of Standards for Systematic Reviews of Comparative Effectiveness Research, in its related study, recommends that all research organizations conducting SRs under contract to the Department of Health and Human Services (HHS) or the Patient-Centered Outcomes Research Institute (PCORI) adhere to standards set by that committee. Given the funding for comparative effectiveness research by PCORI, the number of federally supported SRs is likely to increase significantly. Completed, high-quality SRs presumably would then
be available free (or at cost for printing) to the general public as well as to organizations wishing to develop related guidelines.
The IOM committee on CPGs hopes this SR recommendation will be implemented quickly. Nonetheless, the committee recognizes that substantial costs will remain for CPG developers. Many organizations testifying before the committee stressed the need for additional funds to produce high-quality CPGs (Fochtmann, 2010; Kelly-Thomas, 2010).
To further enhance guideline development, additional steps should be taken. Clinical topics that are of interest to limited populations, such as rare but treatable diseases, may need practice guidelines. There may be no disease group or clinical specialty society with the resources to develop such CPGs. Outside funding assistance could spur the development of such needed guidelines. The committee urges organizations desiring to produce such guidelines to coordinate their efforts with other, related organizations so they may pool their resources. This could also strengthen their efforts to seek financial assistance from foundations, government agencies, and other nonconflicted sources. In addition, HHS should promote the identification of best practices in CPG development, guided by the proposed standards herein, and should assist in the training of individuals in the specific technical skills needed in the CPG process. Importantly, HHS should assist in the training of patient and consumer representatives to participate in this process.
SHOULD THERE BE A PROCESS TO IDENTIFY CPGS THAT MEET THE PROPOSED TRUSTWORTHY STANDARDS?
With nearly 2,700 guidelines in the National Guideline Clearinghouse, numerous additional commercial guidelines, and an unknown number of others in existence, many addressing identical topics, users often struggle to identify guidelines based on high-quality development methods. The NGC provides a standardized summary of each CPG posting, describing its development methodology and evidence base and providing a link to the full guideline, but the NGC makes no quality judgment. ECRI, the NGC contractor, has identified 25 medical conditions characterized by conflicting guidelines in the clearinghouse. For guidelines on closely related topics, the NGC has described differences and similarities, absent individual CPG quality assessment. Reviewing all substantively relevant CPGs for a condition and determining which is of highest quality is a daunting task for clinicians, and conducting such assessments independently is inefficient for
them. Often clinicians look to specialty societies or professional organizations for guidelines, or their practice organizations may develop their own CPGs or adopt a commercial suite of CPGs and encourage or expect their clinicians to follow them. Fundamentally, however, it is now nearly impossible for all stakeholders to be confident of CPG quality.
What would it mean if guideline developers and users had a mechanism to immediately identify high-quality, evidence-based CPGs that could be considered trustworthy? Users could make better clinical decisions based on the best available scientific evidence. If such high-quality CPGs were publicly identified and recognized, more developer organizations would likely strive for such recognition and improve their development procedures to meet the standards.
Linking such identification to regulatory procedures, insurance coverage, payment systems, or quality measures is not within the committee’s scope, but an official identification or certification of trustworthy CPGs is a goal. No organization or process in the United States currently distinguishes trustworthy CPGs.
The committee believes that some guideline developers will readily embrace the eight standards in this report and adapt their development processes to create CPGs that are trustworthy. However, not all developers will be able or willing to do that. Thus, the committee believes it is essential that its proposed standards be accompanied by creation of a mechanism to identify guidelines that meet development standards. Such identification will serve three purposes: (1) promote wider adoption of quality standards by developers, because CPGs publicly identified as trustworthy, with a “seal of approval,” will have an advantage; (2) provide users of CPGs with an easy guide to identifying trustworthy ones; and (3) promote adoption of trustworthy CPGs.
A process could (1) review each guideline to determine whether it meets specified standards, (2) certify organizations producing guidelines that comply with quality standards, or (3) acknowledge standards compliance of each guideline’s planned production process prior to development of the guideline. The selection among these options has practical implications for cost, work volume, and reliability of the designation.
Identification of Individual Trustworthy CPGs
A process could be set up to review individual CPGs, assessing whether they meet standards and labeling as “trustworthy” those that do. The process might be similar to that used by ECRI for the NGC, but would require more data to support in-depth assessment, including conflict of interest (COI) review. Creating a more transparent process is also desirable. Given the large number of CPGs and CPG updates in the clearinghouse, and the new ones produced regularly, thorough inspection of each would be a very resource-intensive task. A priority-setting procedure might be useful to identify CPGs that should take precedence for review. Eventually, existing CPGs will be updated or withdrawn from the NGC. Updates and new CPGs are more likely to be developed according to the proposed standards. If the future number of new CPGs is smaller, the identification of trustworthy CPGs may be less onerous. But if the availability of medical evidence continues to expand and the development of CPGs continues to increase, the task will remain large.
Certification of Organizations with Trustworthy CPG Development Procedures
Alternatively, one could review organizations developing CPGs and their production procedures, certifying adherence to quality development standards. In that case, guidelines issued over a specified time period by certified organizations might be considered trustworthy. If an organization did not maintain proper procedures throughout the certified period, its guidelines could be challenged and certification withdrawn, if justified.
The National Institute for Health and Clinical Excellence (NICE), an independent organization offering guidance on health promotion and disease prevention and treatment in the United Kingdom, takes the organizational certification approach. National Health Service (NHS) Evidence, a part of NICE, reviews procedures that applicant organizations use to produce various types of guidance and provides an identifiable mark to be placed on future CPGs of those organizations meeting accreditation requirements and agreeing to maintain the approved processes during a 3-year accreditation period. The mark may be applied to any type of guidance for which the organization has been approved. NHS Evidence may review organizational procedures at any point during the accreditation period and, if noncompliance with accreditation requirements is detected, withdraw accreditation and the accompanying mark (NHS, 2009).
NHS Evidence has a sequential application and review process, including (1) internal review of organizational procedures, using published criteria based on the Appraisal of Guidelines Research and Evaluation (AGREE) instrument (discussed in Chapter 3) and selected recent guidance documents, (2) elicitation of external expert opinions on the staff’s review, (3) draft decision by the Advisory Committee posted on the web, (4) public consultation and comment on the draft decision, (5) final decision by the Advisory Committee, taking into account public comments, and (6) publication of the certification decision. The process is detailed on the web; related forms, the manual, and additional information are available there to promote transparency and public involvement. Because NICE accreditation began in June 2009, evaluation of its process and impact is limited; however, the time from application to final decision is expected to require 6–8 months. Compared to the CPG environment in the United States, which has a few hundred independent developers (the NGC includes more than 280 separate organizations developing CPGs), NICE contracts with a relatively small number of organizations to produce various forms of guidance, including clinical practice guidelines (NHS, 2009). Because some U.S. CPG developers currently lack documented, standardized procedures, and a large number have not developed many CPGs, it is unlikely that all developers would seek certification through such a process.
Identification of the Development Process for Each CPG
This alternative would involve assessment of the proposed development process for each planned CPG, rather than review of the organizational process, as described above. This approach provides protection against organizational failure to maintain quality procedures over time. It would require additional review effort, compared to the preceding organizational approach, if organizations produce multiple CPGs during the accreditation period. It offers an advantage over individual CPG review because it may be conducted mainly at the beginning and during the development process rather than at the end. This should minimize delays in identifying trustworthy CPGs after release, although a brief evaluation of the final draft would be necessary to ensure the developer was in procedural compliance. This approach would have an additional advantage if it induced more developers to formalize their process before creating a CPG.
The committee believes the second option, certification of an organization for a period of time based on its generic development procedures, would be the most efficient approach to identifying trustworthy CPGs.
Because the focus of the committee was on the development of standards, not the creation of a certifying body, it has not researched and prescribed all the details of a mechanism to accomplish the functions recommended above. The committee favors a mechanism that includes participation by individuals from public and private institutions because guideline users in federal and state governments, professional associations, industry, and patient organizations have a strong interest in improving the quality of CPGs. Drawing on existing institutions for the authority and support the certifying body will need should speed its creation. At the same time, creation of the public–private certifying body would alert CPG developers to the new standards, encourage them to adopt the standards, and build on existing capacities.
Because the certification process will entail significant costs, the committee believes the Secretary of HHS should develop a way to fund this certification mechanism by drawing on the resources of interested stakeholders without biasing its decision making or creating a public perception that such bias exists. The committee stresses that this certifying mechanism would not endorse particular drugs or treatment options for medical conditions. Nor would it make clinical decisions about the guidelines it reviews. It would merely certify the organizations’ guideline development process and identify the CPGs that result from that process as trustworthy.
Without specifying the details of such a public–private mechanism, the committee notes that the healthcare world has several examples of such organizations. The committee suggests they be examined to determine whether any might be appropriate to assume the task or identify strengths of their structures that might be incorporated in such a mechanism. Examples include the following:
National Guideline Clearinghouse: As mentioned in Chapter 2, AHRQ, in partnership with the American Medical Association and America’s Health Insurance Plans (then the American Association of Health Plans), created the NGC as a public web resource, funded federally and managed privately through a contract with ECRI (NGC, 2010).
National Quality Forum (NQF): The Forum, a private nonprofit organization, was created in 1999 by a coalition of
public and private leaders in healthcare to promote healthcare safety and quality improvement and to endorse quality measures, based on national consensus, for use in public reporting. It is governed by a board with a full range of private stakeholders as well as the directors of AHRQ, the Centers for Medicare & Medicaid Services (CMS), and the National Institutes of Health. Funding comes from government, including a substantial contract with CMS on performance measurement under the Medicare Improvements for Patients and Providers Act, and from various private foundations, industry, and annual membership dues (NQF, 2010).
National Committee for Quality Assurance (NCQA): NCQA, a private nonprofit organization founded in 1990, develops and applies HEDIS (Healthcare Effectiveness Data and Information Set) measures to 90 percent of the nation’s health plans. NCQA also has programs dedicated to the accreditation, certification, recognition, and distinction of health plans, disease management organizations, medical home models, and other organizations working in health management and improvement. An independent board of directors, generally comprising representatives from employers, physicians, public policy experts, consumer groups, and health systems, governs the organization. NCQA employs COI disclosure and conflict management policies for its board and expert panels. Funding comes from many donors and sponsors, including health plans, banks, professional medical societies, healthcare foundations, medical centers, and others (NCQA, 2011).
Patient-Centered Outcomes Research Institute: PCORI, a new quasi-public–private nonprofit body, was created under the Patient Protection and Affordable Care Act, Sec. 6301. Established and funded by Congress, it sits outside of government and is directed by a board that includes representatives of federal and state agencies as well as academicians, researchers, consumers, patients, and other experts. Because PCORI funding originates from Medicare Trust Funds, rather than industry, risk of COI is limited and funds are more protected and steady than those requiring annual appropriations or private fundraising.
National Institute for Health and Clinical Excellence: The governments of England and Wales fund NICE to provide evidence-based advice to the NHS and the public on health promotion and treatment. (See discussion in Chapter 2.) It is an independent body that works in consultation with public- and private-sector experts and a council of the public. NICE supports a private, professional network of guideline developers through contracts with England’s Royal Colleges of Medicine and Surgery and with academic research centers. NICE also includes NHS Evidence, a web-based service that performs a structured review of guideline developers seeking accreditation and that recommends action to NICE (NICE, 2010).
IS THERE A CONTINUING NEED FOR A GUIDELINE CLEARINGHOUSE?
Knowledge of a CPG’s existence is a prerequisite to its adoption by clinicians and health plans. Having an accessible, centralized repository for viewing all publicly available, good-quality CPGs developed in the United States and by some international organizations is helpful to potential users. Without such a collection, guideline developers would bear a greater burden in publicizing CPG availability, and application of their products might be reduced.
The National Guideline Clearinghouse has served as a public, accessible repository of CPGs for a number of years and has an established role in the promulgation of new and updated guidelines. The NGC reviews each submitted CPG to ensure compliance with the clearinghouse’s minimal standards and requests additional information if needed. The NGC recognizes that the products listed within it are of widely varying quality (Coates, 2010). The committee has heard testimony that the NGC performs a public service, but does not set sufficiently high standards to assure users that poor-quality guidelines are not admitted (Coates, 2010). Given the mixed quality of the clearinghouse’s contents, its large volume is also problematic.
AHRQ and ECRI could take several steps to differentiate trustworthy guidelines from other guidelines and from non-CPG guidance, increasing the clearinghouse’s utility. The committee understands that, when there are no trustworthy CPGs on a topic, clinicians may need to rely on guidance of more limited quality. The steps are as follows:
To be a constructive resource, the NGC should eliminate CPGs whose trustworthiness cannot be determined, and identify the trustworthiness of those retained. The committee does not believe the NGC, as a central repository for all CPGs, should be restricted to listing only those CPGs identified as trustworthy. However, the NGC’s contribution is of questionable value when it lists guidelines that provide too little information for an informed reader to judge quality and trustworthiness. Additionally, “Not stated” should not be an acceptable response to items in the NGC’s structured abstract form (upon which acceptance to the NGC depends) and should disqualify a CPG from NGC inclusion.
Guidelines that do not include a thorough SR of the relevant scientific evidence base should be excluded from the NGC. AHRQ may consider storing rejected guidelines in a public inventory of excluded guidelines within the NGC, so that stakeholders may identify any guideline of interest and understand why the NGC does not regard it as acceptable. A finding of no scientific evidence resulting from an SR should not preclude listing of the CPG in the NGC.
The NGC should prominently identify guidelines originating from CPG developers certified by the designated mechanism as trustworthy (if such a process is implemented).
CPGs from an organization that requested and failed review by the certifying mechanism should also be identified in a special category, with standards met and shortcomings specified.
Forms of guidance currently in the NGC or considered for future inclusion that do not meet IOM CPG definitional requirements or clearly do not adhere to trustworthy CPG standards should receive a different guidance label and be included in a separate, non-CPG category within the NGC.
AHRQ and the NGC should produce more Guideline Syntheses of topically similar CPGs. These syntheses highlight the importance of coordination among various organizations developing CPGs on similar topics, may highlight potential areas for harmonization, and offer assistance to CPG users.
The proposed standards will require additional NGC effort, because current NGC abstraction does not require review of development process data adequate to meet the proposed standards’ requirements. On the other hand, coordination with the public–private certification process might expedite NGC abstraction.
Based on the preceding discussion, the NGC clearly provides a useful function for both guidelines developers and users. However, the committee believes it could do much more. Its policy of broad inclusion has led to a bewildering number of CPGs and other forms of clinical guidance of widely varying quality. Potential users of CPGs need more clarity about choices. The committee does not believe the NGC should restrict listings to CPGs identified as trustworthy. However, it should eliminate from public listings the weakest CPGs, based on their development process, as well as those CPGs that provide too little information for an informed reader to be able to judge their quality. Remaining CPGs should be distinguished from other forms of clinical guidance. Finally, the National Guideline Clearinghouse needs to be funded at a sufficient level for it to improve the quality, timeliness, and trustworthiness of its CPGs and other products.
SHOULD THERE BE A PROCESS TO HARMONIZE RELATED CPGS?
Once the proposed IOM standards for CPGs are generally known and adopted, the committee believes the need for a special process to harmonize CPGs will be reduced. Increased transparency, and encouragement of all developers to discuss why they believe their recommendations are similar to or different from those of others, would make harmonization a more conscious part of development. In addition, the NGC, when comparing similar CPGs in Guideline Syntheses, might also contrast the recommendations contained in each to identify sources of convergence or areas lacking harmony. The standards are likely to reduce current and future levels of guideline duplication for several reasons:
The total number of CPGs produced may be smaller because some organizations will be unable to meet the standards. Those organizations either will choose not to produce inferior guidelines or choose to use existing trustworthy CPGs if the topic is closely related to what they need.
As current CPGs become outdated, developers might choose not to update if they cannot meet the new standards. They may also look to partner with other organizations concerned with the same issues, and pool resources and expertise to meet the standards.
Proposed Standard 3, concerning CPG development team composition, calls for representation of a wide range of interests and perspectives. This should encourage collaboration among guideline development organizations and will likely result in development team members drawn from organizations whose CPGs contain overlapping recommendations. Their participation in the development of a new, related CPG should help minimize conflicting recommendations.
Proposed Standard 8 requires annual, ongoing monitoring of new, potentially relevant evidence. It also requires updating of extant CPGs when new evidence indicates a modification of guideline recommendations. Both of these activities help ensure that earlier guidelines are accounted for as future CPGs are developed.
If the NGC adopts higher standards for clearinghouse admission, fewer CPGs will be accessible and probably somewhat fewer will require harmonization now and in the future because some CPGs that do not meet NGC standards will not be widely circulated.
Whether or not commercial guideline developers choose to follow the proposed standards, to the extent that they rely on existing CPGs from reputable developers, commercial guidelines would contribute to convergence toward existing, higher quality CPGs rather than to a proliferation of poorer quality CPGs, leaving fewer CPGs in need of harmonization.
If a new, separate process were proposed to encourage CPG harmonization, it would require some authority and would face a significant job in tackling existing duplicative guidelines, and an endless one if the development standards proved ineffective in reducing the production of duplicative guidelines. The committee recognizes that, although the future need for harmonization should be reduced, conflicting recommendations in CPGs may remain. Because the committee does not assume that all remaining duplication and conflicting recommendations are necessarily bad, AHRQ and the NGC should examine the causes of remaining multiple inconsistent CPGs and prioritize them for harmonization if considered necessary. Particular attention should be paid to harmonization when the oldest CPG on a topic is due for updating.
SHOULD THERE BE A PROCESS TO IDENTIFY WHICH RECOMMENDATIONS SHOULD BE CONSIDERED FOR QUALITY MEASURES?
Clinical practice guidelines have had, and are expected to have, an important influence on the development of physician and hospital performance measures, especially when CPGs conform to development methods such as those recommended herein. The data gathered from use of such measures have provided consumers with valuable information on the quality of different healthcare providers. The committee recognizes that healthcare quality measures are developed by many different organizations for various purposes and audiences. Some measure developers and users may work for proprietary interests and prefer keeping measures confidential; others submit measures to the NQF for approval and dissemination and to a web-based clearinghouse, the National Quality Measures Clearinghouse (NQMC). Although some CPG developers also develop related quality measures and promote their use, producing performance measures typically has not been within the purview of guideline development. In fact, performing both functions might create conflicting interests. For example, a CPG might recommend the latest state-of-the-art treatment, but the Guideline Development Group (GDG) might consider it unfair or inappropriate for use as a quality measure if the measure could be used in a pay-for-performance scheme. Measure developers, however, often rely on CPG recommendations and the related scientific evidence base. Because the NQMC is closely linked to the NGC, users of either clearinghouse can readily find related measures and CPGs.
As reflected in the NQMC, quality measures can assist in evaluating aspects of the process of care, care outcomes, access to care, and the patient’s care experience. The evidence base for a measure posted in the NQMC can be minimal—at least “one or more research studies published in a National Library of Medicine indexed, peer-reviewed journal, a[n] SR of the clinical literature, a CPG or other peer-reviewed synthesis of the clinical evidence, or a formal consensus procedure involving expert clinicians and clinical researchers,” and evidence from patients for measures of patient experience, as well as documentation concerning use of the measure (NQMC, 2010).
Because rating the strength of recommendations will occur in the development process of all CPGs adhering to the IOM’s recommended standards, the committee concludes that no additional processes are needed to identify recommendations of sufficient
strength for quality measurement. The committee urges all developers of CPG-related measures to employ only CPGs identified as trustworthy (as defined herein), when available. Only recommendations developed in accordance with standards such as those proposed herein should be transformed into quality measures.
HOW SHOULD CPG DEVELOPMENT AND IMPLEMENTATION PROCESSES AND IMPACT BE EVALUATED?
The proposed standards have not yet been evaluated by CPG developers and users. Without evaluation of the recommended guideline development process and of interventions to promote CPG implementation, it will not be known whether the standards give rise to unbiased, scientifically valid, and trustworthy CPGs, or whether implementation of IOM standards-based CPGs improves health outcomes. The committee believes it is important to answer related questions, such as the following:
What are the strengths and weaknesses in the current execution of the standards, and how might the standards be revised before broad distribution (e.g., what is the optimal model of the GDG–SR relationship, and what is the optimal method of involving consumers)?
Are the IOM guideline development standards valid and reliable?
Are the development standards being adopted?
Is adoption increasing stakeholders’ confidence in CPGs?
Is adoption of the proposed standards enhancing the quality of the development of CPGs?
Are CPGs developed on the basis of the proposed standards more likely to be adopted?
Which interventions to promote adoption of CPGs are most effective, for which audiences, and for what types of clinical interventions?
Do CPGs developed on the basis of the proposed standards for trustworthy guidelines improve healthcare and patient outcomes?
What is the impact of the NGC?
Research to answer such questions is consistent with the mission of AHRQ. Hence, the committee believes that AHRQ should direct
a portion of its research funds to investigations of, and methods for studying, the impact of the proposed standards and of CPGs.
It is important to emphasize that the feasibility of the proposed standards should be assessed through pilot testing. Ultimately, the aim is to identify strengths and weaknesses in the current execution of the standards so that they can be revised and enhanced before final production, full distribution, and promotion (Szklo and Nieto, 2007).
Given the growth expected in the next decade in clinical research, comparative effectiveness studies, and systematic reviews, the availability of trustworthy CPGs will become even more critical in assisting clinicians and patients in their treatment considerations. The recommendations below should help to improve the quality of CPGs available for their use.
RECOMMENDATION: DEVELOPMENT, IDENTIFICATION, AND EVALUATION OF TRUSTWORTHY CPGS
The Secretary of HHS should establish a public–private mechanism to examine, at the request of developer organizations, the procedures they use to produce their clinical practice guidelines and to certify whether these organizations’ CPG development procedures comply with standards for trustworthy CPGs.
AHRQ should take the following actions:
Require the NGC to provide a clear indication of the extent to which the clinical practice guidelines it receives adhere to standards for trustworthiness.
Conduct research on the causes of inconsistent CPGs, and strategies to encourage their harmonization.
Assess the strengths and weaknesses of the proposed IOM standards through pilot testing; estimate the validity and reliability of the proposed standards; evaluate the effectiveness of interventions to encourage implementation of the standards; and evaluate the effects of the standards on CPG development, healthcare quality, and patient outcomes.
REFERENCES
Coates, V. 2010. National Guideline Clearinghouse (NGC)/ECRI Institute. Presented at the IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 11, 2010, Washington, DC.
Fochtmann, L. 2010. American Psychiatric Association. Presented at the IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 11, 2010, Washington, DC.
IOM (Institute of Medicine). 2008. Knowing what works in healthcare: A roadmap for the nation. Edited by J. Eden, B. Wheatley, B. McNeil, and H. Sox. Washington, DC: The National Academies Press.
Kelly-Thomas, K. 2010. National Association of Pediatric Nurse Practitioners. Presented at the IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 11, 2010, Washington, DC.
McClure, J. 2010. National Comprehensive Cancer Network (NCCN). Presented at the IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 11, 2010, Washington, DC.
NCQA (National Committee for Quality Assurance). 2011. About NCQA. http://www.ncqa.org/tabid/675/Default.aspx (accessed February 2, 2011).
NGC (National Guideline Clearinghouse). 2010. National Guideline Clearinghouse. http://www.guideline.gov/ (accessed April 7, 2010).
NHS (National Health Service). 2009. Process manual for accrediting producers of guidance and recommendations for practice: A guide for producers and stakeholders. National Institute for Health and Clinical Excellence.
NICE (National Institute for Health and Clinical Excellence). 2010. Homepage. http://www.nice.org.uk/ (accessed March 7, 2010).
NQF (The National Quality Forum). 2010. About NQF. http://www.qualityforum.org/About_NQF/About_NQF.aspx (accessed October 8, 2010).
NQMC (National Quality Measures Clearinghouse). 2010. National Quality Measures Clearinghouse: Inclusion criteria. http://www.qualitymeasures.ahrq.gov/about/inclusion.aspx (accessed July 16, 2010).
Shaneyfelt, T. M., and R. M. Centor. 2009. Reassessment of clinical practice guidelines: Go gently into that good night. JAMA 301(8):868–869.
Szklo, M., and F. J. Nieto. 2007. Epidemiology: Beyond the basics. 2nd ed. Sudbury, Massachusetts: Jones and Bartlett Publishers.