Clinical Practice Guidelines We Can Trust

4 Current Best Practices and Proposed Standards for Development of Trustworthy CPGs: Part 1, Getting Started

Abstract: As stated in Chapter 1, the committee was charged with identifying standards for the production of unbiased, scientifically valid, and trustworthy clinical practice guidelines. The following two chapters describe and present the rationale for the committee's proposed standards, which reflect a review of the literature, public comment, and expert consensus on best practices for developing trustworthy guidelines. The standards and supporting text herein address several aspects of guideline development, including transparency, conflict of interest, guideline development team composition and group process, and finally, the determination of guideline scope and the chain of logic, including interaction with the systematic review team.

INTRODUCTION

Chapters 4 and 5 detail aspects of the clinical practice guideline (CPG) development process, and the committee's related proposed standards, over time, from considerations of transparency and conflict of interest (COI) to updating of guidelines. The proposed standards arose from the committee's compliance with standard-setting methodologies elaborated on in Chapter 1. A standard is defined as a process, action, or procedure that is deemed essential to producing scientifically valid, transparent, and reproducible results. The committee expects its standards to be pilot-tested and evaluated for reliability and validity (including applicability), as described in detail in Chapter 7, and to evolve as the science and experience demand.

This chapter captures aspects of the beginnings of guideline development, including transparency, conflict of interest, guideline development team composition and group process, and determining guideline scope and logic, including interaction with the systematic review (SR) team. The committee hopes its proposed standards serve as an important contribution to advancing the work of numerous researchers, developers, and users of guidelines, and help to clarify where evidence and expert consensus support best practices and where there is still much to learn. An important note is that, although discussed in the text, no standards are proposed for certain aspects of the guideline development process, such as determining group processes, guideline scope, the chain of logic underlying a guideline, incorporating patients with comorbidities, and the impact of cost on rating the strength of recommendations, because the committee could not conceive of standards applicable to all guideline development groups (GDGs) in these areas at this time.

ESTABLISHING TRANSPARENCY

"Transparency" connotes the provision of information to CPG users that enables them to understand how recommendations were derived and who developed them. Increasing transparency of the guideline development process has long been recommended by authors of CPG development appraisal tools (AGREE, 2001; IOM, 1992; Shaneyfelt et al., 1999) and by the following leading guideline development organizations: the U.S. Preventive Services Task Force (USPSTF), the National Institute for Health and Clinical Excellence (NICE), the American College of Cardiology Foundation/American Heart Association (ACCF/AHA), and the American Thoracic Society.
However, exactly what needs to be transparent and how transparency should be accomplished have been unclear. The desire to have public access to GDG deliberations and documents must be balanced with resource and time constraints as well as the need for GDG members to engage in frank discussion. The committee found no comparisons in the literature of GDG approaches to achieving transparency, but did inspect the policies of select organizations. The American Academy of Pediatrics transparency policy calls on guideline authors to make an explicit judgment regarding anticipated benefits, harms, risks, and costs (American
Academy of Pediatrics, 2008).[1] According to Schünemann and coauthors (2007, p. 0791) in an article concerning transparent development of World Health Organization (WHO) guidelines, "Guideline developers are increasingly using the GRADE (Grading Recommendations Assessment, Development and Evaluation) approach because it includes transparent judgments about each of the key factors that determine the quality of evidence for each important outcome, and overall across outcomes for each recommendation."

Even clinical decisions informed by high-quality, evidence-based CPG recommendations are subject to uncertainty. An explicit statement of how evidence, expertise, and values were weighed by guideline writers helps users determine the level of confidence they should have in any individual recommendation. Insufficient or conflicting evidence, inability to achieve consensus among guideline authors, legal and/or economic considerations, and ethical/religious issues are likely reasons that guideline writers leave recommendations vague (American Academy of Pediatrics, 2008). Instead, guideline developers should highlight which of these factors precluded them from being more specific or directive. When a guideline is written with full disclosure, users will be made aware of the potential for change when new evidence becomes available, and will be more likely to understand and accept future alterations to recommendations (American Academy of Pediatrics, 2008). Detailed attention to CPG development methods for appraising, and elucidating the appraisal of, the evidentiary foundations of recommendations is provided in Chapter 5. Transparency also requires statements regarding development team members' clinical experience and potential COIs, as well as the guideline's funding source(s) (ACCF and AHA, 2008; AHRQ, 2008; Rosenfeld and Shiffman, 2009).
Disclosing the potential financial and intellectual conflicts of interest of all members of the development team allows users to interpret recommendations in light of those COIs (American Academy of Pediatrics, 2008). The following section in this chapter discusses in greater detail how to manage COIs among development team members. Ultimately, a transparent guideline should give users confidence that guidelines are based on the best available evidence, are largely free from bias, are clear about the purpose of recommendations for individual patients, and are therefore trustworthy.

[1] The committee did not inspect whether GDGs followed policies on transparency set in place by their parent organizations (i.e., whether AAP guidelines met the AAP's own standard on transparency).
1. Establishing Transparency

1.1 The processes by which a CPG is developed and funded should be detailed explicitly and publicly accessible.

MANAGEMENT OF CONFLICT OF INTEREST

The Institute of Medicine's 2009 report on Conflict of Interest in Medical Research, Education, and Practice defined COI as "A set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest" (IOM, 2009, p. 46). A recent comprehensive review of the COI policies of guideline development organizations yielded the following complementary descriptions of COI: "A divergence between an individual's private interests and his or her professional obligations such that an independent observer might reasonably question whether the individual's professional actions or decisions are motivated by personal gain, such as financial, academic advancement, clinical revenue streams, or community standing" and "A financial or intellectual relationship that may impact an individual's ability to approach a scientific question with an open mind" (Schünemann et al., 2009, p. 565). Finally, intellectual COIs specific to CPGs are defined as "academic activities that create the potential for an attachment to a specific point of view that could unduly affect an individual's judgment about a specific recommendation" (Guyatt et al., 2010, p. 739).

Increasingly, CPG developers, including the American Heart Association, American Thoracic Society, American College of Chest Physicians, American College of Physicians, and World Health Organization, have COI policies encompassing financial and intellectual conflicts (Guyatt et al., 2010; Schünemann et al., 2009). The concept that COI can influence healthcare decision makers is widely recognized (Als-Nielsen, 2003; Lexchin et al., 2003).
Therefore, it is disturbing that an assessment of 431 guidelines authored by specialty societies reported that 67 percent neglected to disclose information on the professionals serving on the guideline development panel, making even rudimentary evaluation of COI infeasible (Grilli et al., 2000). Furthermore, an investigation of more than 200 clinical practice guidelines within the National Guideline Clearinghouse determined that greater than half included no information about financial sponsors of guidelines or financial conflicts of interest of guideline authors (Taylor, 2005). Organizations developing practice guidelines thus need to improve management and reporting of COI (Boyd and Bero, 2000; Campbell, 2007; Jacobs et al., 2004).
Disclosure policies should relate to all potential committee members (including public/patient representatives) and should include all current and planned financial and institutional conflicts of interest. Financial (commercial or noncommercial) COI typically stems from actual or potential direct financial benefit related to topics discussed or products recommended in guidelines. Direct financial commercial activities include clinical services from which a committee member derives a substantial proportion of his or her income; consulting; board membership for which compensation of any type is received; serving as a paid expert witness; industry-sponsored research; awarded or pending patents; royalties; stock ownership or options; and other personal and family member financial interests. Examples of noncommercial financial activities include research grants and other types of support from governments, foundations, or other nonprofit organizations (Schünemann et al., 2009).

A person whose work or professional group is fundamentally jeopardized, or enhanced, by a guideline recommendation is said to have an intellectual COI. Intellectual COI includes authoring a publication or acting as an investigator on a peer-reviewed grant directly related to recommendations under consideration. Finally, individuals with knowledge of relationships between their institutions and commercial entities with interests in the CPG topic are considered to have institutional COI. These include public/patient representatives from advocacy organizations receiving direct industry funding.

Biases resulting from COI may be conscious or unconscious (Dana, 2003) and may influence choices made throughout the guideline development process, including conceptualization of the question, choice of treatment comparisons, interpretation of the evidence, and, in particular, drafting of recommendations (Guyatt et al., 2010).
A recent study of Food and Drug Administration Advisory Committees found that members regularly disclose financial interests of considerable monetary value, yet rarely recuse themselves from decision making. When they did, less favorable voting outcomes regarding the drug in question were observed across the majority of committee meetings (Lurie et al., 2006). A related investigation observed that 7 percent of guideline developers surveyed believed their relationships with industry affected their guideline recommendations; moreover, nearly 20 percent believed that guideline coauthors’ recommendations were subject to industry influence (Chaudhry et al., 2002). Regardless of the nature of COI or its effects on guideline development, perception of bias undermines guideline users’ confidence in guideline trustworthiness as well as public trust in science (Friedman, 2002).
Direct guideline funding by for-profit organizations also poses COI challenges. The development, maintenance, and revision of CPGs constitute a costly, labor-intensive endeavor (American Academy of Physician Assistants, 1997). Many professional societies and other groups developing guidelines rely, at least in part, on commercial sponsors to cover costs. The perception that a for-profit commercial entity, particularly a pharmaceutical or medical device company, had influenced the conclusions and recommendations of a CPG committee could undermine the trustworthiness of the GDG and its CPG (Eichacker et al., 2006; Rothman et al., 2009). Although the 2009 IOM Committee on COI in Medical Research, Education, and Practice found no systematic studies investigating the association between the guideline development process or CPG content and funding source, it did detail cases that raised concern about the influence of industry funding (IOM, 2009). The controversy over Eli Lilly's involvement with practice guidelines for treatment of severe sepsis, and the company's marketing campaign for the drug rhAPC, highlights this issue. Although Eli Lilly and the sepsis guideline development group maintain that recommendations were based on high-quality randomized controlled trials (RCTs), many experts contend the group undervalued non-RCT studies of standard therapies and failed to address concerns about rhAPC's adverse side effects. Because Eli Lilly was the predominant funder and many development panel members had relationships with the company, trust in the integrity of the guideline recommendations was understandably low (Eichacker et al., 2006). Some guideline experts have requested that professional medical organizations reject all industry funding for practice guidelines (Rothman et al., 2009) and hold GDG members to the most stringent COI standards (Sniderman and Furberg, 2009).
The IOM's 2009 report on conflict of interest suggests that adequate firewalls between funders and those who develop guidelines must exist (IOM, 2009). However, the most knowledgeable individuals regarding the subject matter addressed by a CPG are frequently conflicted. These "experts" often possess unique insight into guideline-relevant content domains. More specifically, through their research or clinical involvement, they may be aware of relevant information about study design and conduct that is not easily identified. Although expert opinion is not a form of high-quality evidence, the observations of experts may provide valuable insight on a topic; those who have such insight may simply be without substitutes. Optimally, GDGs are made up of members who lack COIs. Experts who have unique knowledge about the topic under consideration—but who
have COIs—can share their expertise with the GDG as consultants and as reviewers of GDG products, but generally should not serve as members of the GDG.

Strategies for Managing COI

Strategies for managing potential COI range from exclusion of conflicted members from direct panel participation or restriction of their roles, to formal or informal consultation, to recusal from particular recommendations, to simple disclosure of COI. Although the 2009 IOM committee on COI found no systematic review of guideline development organizations' conflict-of-interest policies, the committee did identify variations in the COI policies of select organizations. Specifically, COI policies vary with regard to the specific types of information that must be disclosed, who is responsible for managing conflicts and monitoring policy compliance, and whether COI procedures are transparent. Provisions for public disclosure of COI and for managing relationships with funders also differ (IOM, 2009). Although disclosure of guideline development members' financial conflicts has become common practice, many experts are skeptical that disclosure alone minimizes the impact of conflicts (Guyatt et al., 2010). Hence, increasingly rigorous management strategies have been adopted by some organizations (Schünemann et al., 2009). These have included omission of those with COI from guideline development panels (WHO, 2008) and exclusion of conflicted persons from leadership positions (NICE, 2008). The USPSTF currently bars individuals who have earned more than $10,000 per year from medical expert testimony or related endeavors from serving on guideline panels. Lesser financial or intellectual conflicts may require disclosure to other panel members or recusal from specific recommendation deliberations, at the discretion of the USPSTF chair and vice chair and under the aegis of Agency for Healthcare Research and Quality staff (AHRQ, 2008).
The ACCF/AHA task force strives to balance conflict of interest, rather than remove it completely, and allows 50 percent of committee members to have industry relationships, but recuses those members from voting on relevant recommendations. The committee chair must also be free of any COI (ACCF and AHA, 2008). Other COI management approaches—including mandating clearer separation of unconflicted methodologists from the influence of potentially conflicted clinical experts—are reflected in the American College of Chest Physicians Antithrombotic Guidelines
(Guyatt et al., 2010). In this approach, unconflicted methodologists, such as epidemiologists, statisticians, healthcare researchers, and/or "guidelineologists" (i.e., those with specific expertise in the guideline development process), lead the formulation of recommendations in collaboration with clinical experts who may be conflicted to a degree that would not preclude them from panel participation. Guyatt and coauthors advocate this strategy, stating that the key to developing unconflicted recommendations is that responsibility for the final presentation of evidence summaries and rating of the quality of evidence rests with unconflicted panel members, and in particular with the methodologist chapter editor (Guyatt et al., 2010). A 2010 examination of state-of-the-art COI management schemata for CPGs, performed by Shekelle et al. (2010), provides detailed insight for developers, as described below.

Preliminary Review and Management of COI

In selecting prospective participants for guideline development, disclosures typically are reviewed prior to the first meeting, and unresolvable conflicts of interest are investigated. The procedures (including step-by-step review and management) are described clearly as part of CPG development policy. Prospective members agree to divest any stocks or stock options whose value could be influenced by the CPG recommendations, and to refrain from participating in any marketing activities or advisory boards of commercial entities related to the CPG topic.

Disclosure of COI to Other Panel Members

Once members of a guideline panel have been assembled, any member COI is disclosed and discussed before deliberations begin. Individual participants (including project chairs and panelists) label how COI might affect specific recommendations. Disclosures and conflicts should be reviewed in an ongoing manner by those managing COI.

2. Management of Conflict of Interest (COI)

2.1 Prior to selection of the guideline development group (GDG), individuals being considered for membership should declare all interests and activities potentially resulting in COI with development group activity, by written disclosure to those convening the GDG:
• Disclosure should reflect all current and planned commercial (including services from which a clinician derives a substantial proportion of income), noncommercial, intellectual, institutional, and patient–public activities pertinent to the potential scope of the CPG.

2.2 Disclosure of COIs within GDG:
• All COI of each GDG member should be reported and discussed by the prospective development group prior to the onset of his or her work.
• Each panel member should explain how his or her COI could influence the CPG development process or specific recommendations.

2.3 Divestment
• Members of the GDG should divest themselves of financial investments they or their family members have in, and not participate in marketing activities or advisory boards of, entities whose interests could be affected by CPG recommendations.

2.4 Exclusions
• Whenever possible GDG members should not have COI.
• In some circumstances, a GDG may not be able to perform its work without members who have COIs, such as relevant clinical specialists who receive a substantial portion of their incomes from services pertinent to the CPG.
• Members with COIs should represent not more than a minority of the GDG.
• The chair or cochairs should not be a person(s) with COI.
• Funders should have no role in CPG development.

GUIDELINE DEVELOPMENT GROUP COMPOSITION AND GROUP PROCESS

Guideline development involves technical processes (SRs of relevant evidence), judgmental processes (interpretation of SRs and derivation of recommendations), and interpersonal processes (consensus building). The validity of guideline recommendations may be influenced adversely if any one of these processes is biased. There has been much less methodological focus on studying and optimizing the judgmental and interpersonal processes than on ensuring the validity of the technical process (Gardner et al., 2009; Moreira, 2005; Moreira et al., 2006; Pagliari and Grimshaw, 2002; Pagliari et
al., 2001). Fundamentally, the quality of the latter processes depends on the composition of the group (whether the right participants have been brought to the table) and on group process (whether the process allows all participants to be involved in constructive discourse surrounding the implications of the systematic review).

Group Composition

Although composition across prominent GDGs may vary, most commonly GDGs consist of 10 to 20 members reflecting 3 to 5 relevant disciplines (Burgers et al., 2003b). Clinical disciplines typically represented include both generalists and subspecialists involved in CPG-related care processes. Nonclinical disciplines typically represented include those of methodological orientation, such as epidemiologists, statisticians, "guidelineologists" (i.e., those with specific expertise in the guideline development process), and experts in areas such as decision analysis, informatics, implementation, and clinical or social psychology. It is important that the chair have leadership experience. Public representatives participate in a number of guideline development efforts and may include current and former patients, caregivers not employed as health professionals, advocates from patient/consumer organizations, and consumers without prior direct experience with the topic (Burgers et al., 2003b).

Empirical evidence consistently demonstrates that group composition influences recommendations. In a systematic review of factors affecting judgments achieved by formal consensus development methods, Hutchings and colleagues identified 22 studies examining the impact of individual participant specialty or profession. Overall, the authors observed that those who performed a procedure, versus those who did not, were more likely to rate more indications as appropriate for that procedure.
In addition, in five individual studies comparing recommendations made by unidisciplinary and multidisciplinary groups, recommendations by multidisciplinary groups generally were more conservative (Hutchings and Raine, 2006). Murphy and colleagues (1998) offer other relevant findings in a systematic review in which they compared guideline recommendations produced by groups of varying composition. The authors concluded that differences in group composition may lead to contrasting recommendations; more specifically, members of a clinical specialty are more likely to promote interventions in which their specialty plays a part. Overall, the authors state: "The weight of the evidence suggests that heterogeneity in a decision-making group can lead to a better performance [e.g., clarity and creativity in strategic decision making due to fewer assumptions about shared values] than homogeneity" (Murphy et al., 1998, p. 33). Fretheim and colleagues' (2006a) analysis of six studies of CPGs, excluded from Murphy's review, demonstrated that clinical experts have a lower threshold for recommending procedures they perform. In complementary findings, Shekelle et al. (1999) observed that, given identical evidence, a single subspecialty group will arrive at conclusions that contrast with those of a multidisciplinary group. Finally, an investigation of six surgical procedures by Kahan and colleagues (1996) suggests that 10 to 42 percent of cases considered appropriate for surgery by specialists who performed the procedure were considered inappropriate by primary care providers.

Lomas (1993) explains these findings and offers their implications as follows: first, limited evidentiary foundations for guideline development require supplementation by a variety of stakeholders; second, value conflicts demand resolution; and third, successful introduction of a guideline requires that all key disciplines contribute to development to ensure "ownership" and support. In complementary fashion, the IOM Committee to Advise the Public Health Service on Clinical Practice Guidelines in 1990 offered the following rationale in support of multidisciplinary guideline development groups: (1) they increase the likelihood that all relevant scientific evidence will be identified and critically assessed; (2) they increase the likelihood that practical problems in guideline application will be identified and addressed; and (3) they increase a sense of involvement or "ownership" among audiences of the varying guidelines (IOM, 1990).
Given these empirical and theoretical arguments, there is broad international consensus that GDGs should be multidisciplinary, with representation from all key stakeholders (ACCF and AHA, 2008; AGREE, 2003; NICE, 2009; SIGN, 2008). Rosenfeld and Shiffman (2009, p. S8) capture this sentiment in the following words: “every discipline or organization that would care about implementation [of the guideline] has a voice at the table.” This carries practical implications when convening a guideline development panel in terms of panel size, disciplinary balance, and resource support. Small groups may lack a sufficient range of experience. In their 1999 conceptualization of the CPG development process, Shekelle and colleagues (1999) assert that guideline reliability may increase in a multidisciplinary (and hence larger) group due to increased balancing of biases. More than 12 to 15 participants may result in ineffective functioning (Rosenfeld and Shiffman, 2009). Murphy and coauthors’
DETERMINING GUIDELINE SCOPE AND REQUISITE CHAIN OF LOGIC

Guideline development groups determine the scope and logic (formulation of key clinical questions and outcomes) of CPGs in a variety of ways. Though the committee found that no one approach rose to the level of a standard, it recognizes the importance of various associated components to the guideline development process. The committee therefore considered factors important in determining guideline scope, as well as the development of an analytic model to assist in the identification of critical clinical questions and key outcomes, and the exploration of the quality of varying evidence in a chain of reasoning.

Elaborating Scope

When elaborating guideline scope, GDG members need to consider a variety of clinical issues, including the benefits and harms of different treatment options; identification of risk factors for conditions; diagnostic criteria for conditions; prognostic factors with and without treatment; resources associated with different diagnostic or treatment options; the potential presence of comorbid conditions; and patient experiences with healthcare interventions. These issues must be addressed in the context of a number of factors, including target conditions, target populations, practice settings, and audience (Shekelle et al., 2010).

Analytic Framework

To define which clinical questions must be answered to arrive at a recommendation, which types of evidence are relevant to those questions, and by what criteria that evidence will be evaluated and lead to clinical recommendations, GDGs optimally specify a chain of reasoning, or logic, linking the key clinical questions that must be answered to produce a recommendation on a particular issue. Failure to do so may undermine the trustworthiness of guidelines by neglecting to define at the outset the outcomes of interest, the specific clinical questions to be answered, and the available evidence.
The absence of these guideposts can become apparent as guideline development work unfolds. Failure to define key questions and to specify outcomes of interest and admissible evidence can result in time, money, and staff resources wasted gathering and analyzing evidence irrelevant to the recommendations. Poorly defined outcomes can obscure important insights in the evidence review
process, resulting in incomplete or delayed examination of relevant evidence. Disorganized analytic approaches may result in the lack of a crisp, well-articulated explanation of the recommendations' rationale. Poorly articulated or indirect evidence chains can make it difficult to discern which parts of the analytic logic are based on science or on opinion, the quality of that evidence, and how it was interpreted. Readers can be misled into thinking that there is more (or less) scientific support for recommendations than actually exists. The ambiguity can also cause difficulty in establishing research priorities (Shekelle et al., 2010; Weinstein and Fineberg, 1980).

The visual analytic framework described here is one of a variety of potential approaches; the particular model is less important than the principles on which it is based. These principles include the need for guideline developers to take the following actions: (1) make explicit decisions at the outset of the analytic process regarding the clinical questions that need to be answered and the patient outcomes that need to be assessed in order to formulate a recommendation on a particular issue; (2) have a clear understanding of the logic underlying each recommendation; (3) use the analytic model to keep the GDG "on track"; (4) be explicit about the types of evidence or opinion, as well as the value judgments, supporting each component of the analytic logic; and (5) transmit this information with clarity in the guideline's rationale statement (discussed hereafter).

Explication of Outcomes

Guideline developers must unambiguously define the outcomes of interest and the anticipated timing of their occurrence. Stating that a practice is "clinically effective" is insufficient. Specification of the outcomes (including the magnitude of intervention benefits and harms) and the time frames in which they are expected to occur, as reflected in a clinical recommendation, is required.
The GDG must decide which health outcomes or surrogate outcomes will be considered. A health outcome, which can be acute, intermediate, or long term, refers to direct measures of health status, including indicators of physical morbidity (e.g., dyspnea, blindness, functional status, hospitalization), emotional well-being (e.g., depression, anxiety), and mortality (e.g., survival, life expectancy). Eddy defines these as “outcomes that people experience (feel physically or mentally) and care about” (Eddy, 1998, p. 10). This is a critical area for serious consideration of consumer input. Health outcomes are the preferred metric, but surrogate outcomes are sometimes used as proxies for health outcomes. Surrogate outcomes are often physiologic variables, test results, or
other measures that are not themselves health outcomes, but that have established pathophysiologic relationships with those outcomes. The validity of a surrogate endpoint must be well established in order to accept it as a proxy for a health outcome endpoint. For example, for AIDS, the need for ventilator support, loss of vision, and death would be acute, intermediate, and long-term outcomes respectively, while increased CD4 cell counts or decreased viral-load measures represent surrogate outcomes (Fleming and DeMets, 1996). Guideline developers must determine which of these outcome classes must be affected to support a recommendation.

One Example of Guideline Logic: The Analytic Graphical Model

These potentially complex interrelationships can be visualized in a graphic format. A recent example of an analytic framework (Figure 4-1) was developed by the USPSTF in consideration of its guideline for osteoporosis screening (Nelson et al., 2010).

FIGURE 4-1 Analytic framework and KQs. NOTE: KQ = key question. SOURCE: Nelson et al. (2010).

This diagrammatic approach, first described in the late 1980s, emerged from earlier advances in causal pathways (Battista and Fletcher, 1988), causal models (Blalock, 1985), influence diagrams (Howard and Matheson, 1981), and evidence models (Woolf, 1991). Construction of the diagram begins with listing the outcomes the GDG has identified as important. This list of benefits and harms reflects key criteria the development group must address in arriving at a recommendation. Surrogate outcomes considered reliable and valid outcome indicators may then be added to the diagram. The interconnecting lines, or linkages, appearing in Figure 4-1 represent critical premises in logic or reasoning that require confirmation by
evidence review to support related recommendations. KQ1 is the overarching question: Does risk factor assessment or bone measurement testing lead to reduced fracture-related morbidity and mortality? The remaining key questions concern intermediate steps along the guideline's reasoning path, namely the accuracy of risk factor assessment and bone measurement testing and the potential benefits and harms of testing and of treating persons identified as abnormal (Shekelle et al., 2010):

KQ2: Is the patient at low or high risk for fracture-related morbidity and mortality?
KQ3: If a patient is at high risk, are bone measurement test results normal or abnormal?
KQ4: If a patient is at high risk, do the harms associated with bone measurement testing outweigh its benefits?
KQ5: If a patient's bone measurement testing is abnormal, will treatment result in reduced fractures?
KQ6: If a patient's bone measurement is abnormal, do the harms of treatment outweigh its benefits?

Specification of the presumed relationships among acute, intermediate, long-term, and surrogate outcomes in a visual analytic model serves a number of useful purposes. It forces guideline developers to make explicit, a priori decisions about the outcomes of interest in deriving a recommendation. It allows others to judge whether important outcomes have been overlooked (Harris et al., 2001). It makes explicit a development group's judgments regarding the validity of various outcome indicators. The interrelationships depicted in the diagram reveal group members' assumptions about pathophysiologic relationships. They also allow others to make a general determination of whether the correct questions were asked at the outset (IOM, 2008).

Filling in the Evidence

Linkages in the visual reasoning model provide a "road map" to guide the evidence review.
They specify a list of questions that must be answered to derive recommendations. This focused approach, in which the evidence review is driven by key questions, is more efficient than a broad review of a guideline topic. A common error among guideline developers is to conduct an amorphous literature search with broad inclusion criteria. Because hundreds to thousands of data sources usually are available on any guideline topic, such an approach often retrieves many irrelevant citations. A targeted approach is more expeditious, less costly, and directed only to the specific issues that must be addressed to confirm the rationale for recommendations (AHRQ, 2009; Slavin, 1995).
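The road-map idea can be made concrete with a small illustrative sketch. The following Python fragment is hypothetical: the class and function names are inventions for this example, and the linkage wording paraphrases the osteoporosis key questions rather than quoting Nelson et al. (2010). It represents each linkage in an analytic framework as a premise tied to a key question and derives the focused review checklist directly from the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Linkage:
    """One arrow in the analytic framework: a premise that evidence must confirm."""
    key_question: str   # e.g., "KQ5"
    premise: str        # the claim this linkage represents (paraphrased)
    evidence: list = field(default_factory=list)  # citations found during review

# Illustrative linkages loosely modeled on Figure 4-1 (paraphrased, hypothetical).
framework = [
    Linkage("KQ1", "Risk assessment or bone testing reduces fracture morbidity and mortality"),
    Linkage("KQ2", "Risk factor assessment separates low-risk from high-risk patients"),
    Linkage("KQ3", "Bone measurement testing identifies abnormal results in high-risk patients"),
    Linkage("KQ4", "Harms of bone measurement testing do not outweigh its benefits"),
    Linkage("KQ5", "Treating patients with abnormal results reduces fractures"),
    Linkage("KQ6", "Harms of treatment do not outweigh its benefits"),
]

def review_checklist(framework):
    """The 'road map': each linkage yields one focused question for the evidence review."""
    return [f"{link.key_question}: {link.premise}?" for link in framework]

for item in review_checklist(framework):
    print(item)
```

Because the checklist is generated from the framework itself, the literature search inherits the framework's scope: a question absent from the model never triggers a search, and every linkage in the model is guaranteed a corresponding search task.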
In addition to defining the questions to be answered in the literature review, linkages in the analytic framework keep the review process on track. Linkages serve as placeholders for documenting whether supporting evidence has been uncovered for a particular linkage and the nature of that evidence. By identifying which linkages have been "filled in" with evidence, the analytic framework provides a flowchart for tracking progress in evidence identification. It also serves as a checklist to ensure that important outcomes of interest are not neglected in the evidence review process (Harris et al., 2001). Although the linkages define the questions to be answered and provide placeholders for documenting results, they do not define the quality of the evidence or its implications for recommendations. Nevertheless, this graphical exercise may serve as a preliminary foundation for deriving clinical recommendations. Scanning the linkages in the model directs CPG developers to each specific component of their reasoning that requires supporting evidence, an assessment of the quality of that evidence, and an appraisal of the strength of recommendation that can be made. The complexities of appraising quality of evidence and strength of recommendations are discussed fully in Chapter 5. More broadly, the analytic model highlights the most important outcomes that, depending on the quality of available evidence, require consideration by future investigators in establishing the effectiveness of a clinical practice and the need for guidelines. This information is essential, in an era of limited research resources, to establish priorities and direct outcomes research toward fundamental questions. Finally, the outcomes identified in the analytic model also provide a template for evaluating the effects of guidelines on quality of care (Shekelle et al., 2010).
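The placeholder-and-checklist role of linkages can likewise be sketched in a few lines of Python. This is an illustrative toy, not a tool from the report; the key-question labels and citation strings are placeholders invented for the example.

```python
# Minimal sketch of the analytic framework used as a progress checklist:
# each linkage (keyed by its key question) is "filled in" once supporting
# evidence is found, and any linkage still lacking evidence flags a gap
# that opinion, theory, or clinical experience would have to bridge.
def unfilled_linkages(evidence_by_kq):
    """Return, in sorted order, the key questions that still lack supporting evidence."""
    return sorted(kq for kq, citations in evidence_by_kq.items() if not citations)

# Hypothetical review status; citation labels are placeholders, not real studies.
evidence_by_kq = {
    "KQ1": [],                        # no study addresses the overarching question directly
    "KQ3": ["trial A", "cohort B"],
    "KQ5": ["trial C"],
    "KQ6": [],
}

print(unfilled_linkages(evidence_by_kq))  # → ['KQ1', 'KQ6']
```

The unfilled linkages are exactly the points the rationale statement must acknowledge as resting on opinion or indirect evidence rather than on the systematic review.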
The Rationale Statement

The composition of a clear rationale statement is facilitated by the analytic framework. The rationale statement summarizes the benefits and harms considered in deriving the recommendation, and why those outcomes were deemed important (including consideration of patient preferences); the GDG's assumptions about relationships among all health and surrogate outcomes; and the nature of the evidence upholding the linkages. If the review uncovered linkages lacking supportive evidence, the rationale statement can speak to the role that opinion, theory, or clinical experience may play in arriving at a recommendation. The rationale statement may thereby provide clinicians, policy makers, and other guideline users with credible
insight into underlying model assumptions. It also avoids misleading generalizations about the evidence, such as claiming a clinical practice is supported by "randomized controlled trials" when such evidence supports only one linkage in the analytic model. By sharing the blueprint for recommendations, the linkages in the analytic logic allow various developers to identify pivotal assumptions about which they disagree (Shekelle et al., 2010).

REFERENCES

ACCF and AHA (American College of Cardiology Foundation and American Heart Association). 2008. Methodology manual for ACCF/AHA guideline writing committees. In Methodologies and policies from ACCF/AHA Task Force on Practice Guidelines. ACCF and AHA.
AGREE (Appraisal of Guidelines for Research & Evaluation). 2001. Appraisal of Guidelines for Research & Evaluation (AGREE) instrument.
AGREE. 2003. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: The AGREE project. Quality and Safety in Health Care 12(1):18–23.
AHRQ (Agency for Healthcare Research and Quality). 2008. U.S. Preventive Services Task Force procedure manual. AHRQ Pub. No. 08-05118-ef. http://www.ahrq.gov/clinic/uspstf08/methods/procmanual.htm (accessed February 12, 2009).
AHRQ. 2009. Methods guide for comparative effectiveness reviews (accessed January 23, 2009).
Als-Nielsen, B., W. Chen, C. Gluud, and L. L. Kjaergard. 2003. Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA 290:921–928.
American Academy of Pediatrics. 2008. Toward transparent clinical policies. Pediatrics 121(3):643–646.
American Academy of Physician Assistants. 1997. Policy brief: Clinical practice guidelines. http://www.aapa.org/gandp/cpg.html (accessed May 21, 2007).
Bales, R. F., and F. L. Strodtbeck. 1951. Phases in group problem-solving. Journal of Abnormal and Social Psychology 46(4):485–495.
Bastian, H. 1996. Raising the standard: Practice guidelines and consumer participation. International Journal for Quality in Health Care 8(5):485–490.
Battista, R. N., and S. W. Fletcher. 1988. Making recommendations on preventive practices: Methodological issues. American Journal of Preventive Medicine 4(4 Suppl):53–67; discussion 68–76.
Blalock, H. J., ed. 1985. Causal models in the social sciences, 2nd ed. Chicago, IL: Aldine.
Boivin, A., K. Currie, B. Fervers, J. Gracia, M. James, C. Marshall, C. Sakala, S. Sanger, J. Strid, V. Thomas, T. van der Weijden, R. Grol, and J. Burgers. 2010. Patient and public involvement in clinical guidelines: International experiences and future perspectives. Quality and Safety in Health Care 19(5):e22.
Boyd, E. A., and L. A. Bero. 2000. Assessing faculty financial relationships with industry: A case study. JAMA 284(17):2209–2214.
Burgers, J., R. Grol, N. Klazinga, M. Makela, J. Zaat, and AGREE Collaboration. 2003a. Towards evidence-based clinical practice: An international survey of 18 clinical guideline programs. International Journal for Quality in Health Care 15(1):31–45.
Burgers, J. S., R. P. Grol, J. O. Zaat, T. H. Spies, A. K. van der Bij, and H. G. Mokkink. 2003b. Characteristics of effective clinical guidelines for general practice. British Journal of General Practice 53(486):15–19.
Campbell, E. G. 2007. Doctors and drug companies—scrutinizing influential relationships. New England Journal of Medicine 357(18):1796–1797.
Carman, K. L., M. Maurer, J. M. Yegian, P. Dardess, J. McGee, M. Evers, and K. O. Marlo. 2010. Evidence that consumers are skeptical about evidence-based health care. Health Affairs 29(7):1400–1406.
Carver, A., and V. Entwistle. 1999. Patient involvement in SIGN guideline development groups. Edinburgh, Scot.: Scottish Association of Health Councils.
Chaudhry, S., S. Schroter, R. Smith, and J. Morris. 2002. Does declaration of competing interests affect readers' perceptions? A randomised trial. BMJ 325(7377):1391–1392.
Dana, J. 2003. Harm avoidance and financial conflict of interest. Journal of Medical Ethics Online Electronic Version:1–18.
Devereaux, P. J., D. R. Anderson, M. J. Gardner, W. Putnam, G. J. Flowerdew, B. F. Brownell, S. Nagpal, and J. L. Cox. 2001. Differences between perspectives of physicians and patients on anticoagulation in patients with atrial fibrillation: Observational study. BMJ 323(7323):1218–1221.
Dolders, M. G. T., M. P. A. Zeegers, W. Groot, and A. Ament. 2006. A meta-analysis demonstrates no significant differences between patient and population preferences. Journal of Clinical Epidemiology 59(7):653–664.
Duff, L. A., M. Kelson, S. Marriott, A. McIntosh, S. Brown, J. Cape, N. Marcus, and M. Traynor. 1993. Clinical guidelines: Involving patients and users of services. British Journal of Clinical Governance 1(3):104–112.
Eddy, D. 1998. Performance measurement: Problems and solutions. Health Affairs 17(4):7–25.
Eichacker, P. Q., C. Natanson, and R. L. Danner. 2006. Surviving sepsis—practice guidelines, marketing campaigns, and Eli Lilly. New England Journal of Medicine 355(16):1640–1642.
Fleming, T. R., and D. L. DeMets. 1996. Surrogate end points in clinical trials: Are we being misled? Annals of Internal Medicine 125(7):605–613.
Fretheim, A., H. J. Schünemann, and A. D. Oxman. 2006a. Improving the use of research evidence in guideline development: Group composition and consultation process. Health Research Policy and Systems 4:15.
Fretheim, A., H. J. Schünemann, and A. D. Oxman. 2006b. Improving the use of research evidence in guideline development: Group processes. Health Research Policy and Systems 4:17.
Friedman, P. J. 2002. The impact of conflict of interest on trust in science. Science and Engineering Ethics 8(3):413–420.
Gardner, B., R. Davidson, J. McAteer, and S. Michie. 2009. A method for studying decision-making by guideline development groups. Implementation Science 4(1):48.
Grilli, R., N. Magrini, A. Penna, G. Mura, and A. Liberati. 2000. Practice guidelines developed by specialty societies: The need for a critical appraisal. The Lancet 355(9198):103–106.
Grimshaw, J., M. Eccles, and I. Russell. 1995. Developing clinically valid practice guidelines. Journal of Evaluation in Clinical Practice 1(1):37–48.
Guyatt, G., E. A. Akl, J. Hirsh, C. Kearon, M. Crowther, D. Gutterman, S. Z. Lewis, I. Nathanson, R. Jaeschke, and H. Schünemann. 2010. The vexing problem of guidelines and conflict of interest: A potential solution. Annals of Internal Medicine 152(11):738–741.
Harris, R. P., M. Helfand, S. H. Woolf, K. N. Lohr, C. D. Mulrow, S. M. Teutsch, and D. Atkins. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3 Suppl):21–35.
Howard, R., and J. Matheson, eds. 1981. Readings on the principles and applications of decision analysis. Menlo Park, CA: Strategic Decisions Group.
Hutchings, A., and R. Raine. 2006. A systematic review of factors affecting the judgments produced by formal consensus development methods in health care. Journal of Health Services Research and Policy 11(3):172–179.
IOM (Institute of Medicine). 1990. Clinical practice guidelines: Directions for a new program. Edited by M. J. Field and K. N. Lohr. Washington, DC: National Academy Press.
IOM. 1992. Guidelines for clinical practice: From development to use. Edited by M. J. Field and K. N. Lohr. Washington, DC: National Academy Press.
IOM. 2008. Knowing what works in health care: A roadmap for the nation. Edited by J. Eden, B. Wheatley, B. McNeil, and H. Sox. Washington, DC: The National Academies Press.
IOM. 2009. Conflict of interest in medical research, education, and practice. Edited by B. Lo and M. J. Field. Washington, DC: The National Academies Press.
Jacobs, A. K., B. D. Lindsay, B. J. Bellande, G. C. Fonarow, R. A. Nishimura, P. M. Shah, B. H. Annex, V. Fuster, R. J. Gibbons, M. J. Jackson, and S. H. Rahimtoola. 2004. Task force 3: Disclosure of relationships with commercial interests: Policy for educational activities and publications. Journal of the American College of Cardiology 44(8):1736–1740.
Kahan, J. P., R. E. Park, L. L. Leape, S. J. Bernstein, L. H. Hilborne, L. Parker, C. J. Kamberg, D. J. Ballard, and R. H. Brook. 1996. Variations by specialty in physician ratings of the appropriateness and necessity of indications for procedures. Medical Care 34(6):512–523.
Lau, J. 2010. Models of interaction between clinical practice guidelines (CPG) groups and systematic review (SR) teams. Presented at IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 12, Washington, DC.
Lexchin, J., L. A. Bero, B. Djulbegovic, and O. Clark. 2003. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 326(7400):1167–1170.
Lomas, J. 1993. Making clinical policy explicit: Legislative policy making and lessons for developing practice guidelines. International Journal of Technology Assessment in Health Care 9(1):11–25.
Lurie, P., C. M. Almeida, N. Stine, A. R. Stine, and S. M. Wolfe. 2006. Financial conflict of interest disclosure and voting patterns at Food and Drug Administration drug advisory committee meetings. JAMA 295(16):1921–1928.
Moreira, T. 2005. Diversity in clinical guidelines: The role of repertoires of evaluation. Social Science and Medicine 60(9):1975–1985.
Moreira, T., C. May, J. Mason, and M. Eccles. 2006. A new method of analysis enabled a better understanding of clinical practice guideline development processes. Journal of Clinical Epidemiology 59(11):1199–1206.
Moynihan, R., and D. Henry. 2006. The fight against disease mongering: Generating knowledge for action. PLoS Medicine 3(4):e191.
Murphy, E., R. Dingwall, D. Greatbatch, S. Parker, and P. Watson. 1998. Qualitative research methods in health technology assessment: A review of the literature. Health Technology Assessment 2(16):vii–260.
Nelson, H. D., E. M. Haney, T. Dana, C. Bougatsos, and R. Chou. 2010. Screening for osteoporosis: An update for the U.S. Preventive Services Task Force. Annals of Internal Medicine 153(2):99–111.
New Zealand Guidelines Group. 2001. Handbook for the preparation of explicit evidence-based clinical practice guidelines. http://www.nzgg.org.nz (accessed August 26, 2009).
NICE (National Institute for Health and Clinical Excellence). 2008. A code of practice for declaring and dealing with conflicts of interest. London, UK: NICE.
NICE. 2009. Methods for the development of NICE public health guidance, 2nd ed. London, UK: NICE.
NIH (National Institutes of Health). 2010. About the consensus development program. http://consensus.nih.gov/aboutcdp.htm (accessed July 20, 2010).
Pagliari, C., and J. Grimshaw. 2002. Impact of group structure and process on multidisciplinary evidence-based guideline development: An observational study. Journal of Evaluation in Clinical Practice 8(2):145–153.
Pagliari, C., J. Grimshaw, and M. Eccles. 2001. The potential influence of small group processes on guideline development. Journal of Evaluation in Clinical Practice 7(2):165–173.
Richardson, F. M. 1972. Peer review of medical care. Medical Care 10(1):29–39.
Rosenfeld, R., and R. N. Shiffman. 2009. Clinical practice guideline development manual: A quality-driven approach for translating evidence into action. Otolaryngology–Head & Neck Surgery 140(6 Suppl 1):1–43.
Rothman, D. J., W. J. McDonald, C. D. Berkowitz, S. C. Chimonas, C. D. DeAngelis, R. W. Hale, S. E. Nissen, J. E. Osborn, J. H. Scully, Jr., G. E. Thomson, and D. Wofsy. 2009. Professional medical associations and their relationships with industry: A proposal for controlling conflict of interest. JAMA 301(13):1367–1372.
Schünemann, H. J., A. Fretheim, and A. D. Oxman. 2006. Improving the use of research evidence in guideline development: Integrating values and consumer involvement. Health Research Policy and Systems 4:22.
Schünemann, H. J., S. R. Hill, M. Kakad, G. E. Vist, R. Bellamy, L. Stockman, T. F. Wisloff, C. Del Mar, F. Hayden, T. M. Uyeki, J. Farrar, Y. Yazdanpanah, H. Zucker, J. Beigel, T. Chotpitayasunondh, T. H. Tran, B. Ozbay, N. Sugaya, and A. D. Oxman. 2007. Transparent development of the WHO rapid advice guidelines. PLoS Medicine 4(5):0786–0793.
Schünemann, H. J., M. Osborne, J. Moss, C. Manthous, G. Wagner, L. Sicilian, J. Ohar, S. McDermott, L. Lucas, and R. Jaeschke. 2009. An official American Thoracic Society policy statement: Managing conflict of interest in professional societies. American Journal of Respiratory and Critical Care Medicine 180(6):564–580.
Shaneyfelt, T., M. Mayo-Smith, and J. Rothwangl. 1999. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA 281:1900–1905.
Shekelle, P. G., and D. L. Schriger. 1996. Evaluating the use of the appropriateness method in the Agency for Health Care Policy and Research clinical practice guideline development process. Health Services Research 31(4):453–468.
Shekelle, P. G., S. H. Woolf, M. Eccles, and J. Grimshaw. 1999. Clinical guidelines: Developing guidelines. BMJ 318(7183):593–596.
Shekelle, P. G., H. Schünemann, S. H. Woolf, M. Eccles, and J. Grimshaw. 2010. State of the art of CPG development and best practice standards. In Committee on Standards for Developing Trustworthy Clinical Practice Guidelines commissioned paper.
SIGN (Scottish Intercollegiate Guidelines Network), ed. 2008. SIGN 50: A guideline developer's handbook. Edinburgh, Scot.: SIGN.
Slavin, R. E. 1995. Best evidence synthesis: An intelligent alternative to meta-analysis. Journal of Clinical Epidemiology 48(1):9–18.
Sniderman, A. D., and C. D. Furberg. 2009. Why guideline-making requires reform. JAMA 301(4):429–431.
Taylor, I. 2005. Academia's "misconduct" is acceptable to industry. Nature 436(7051):626.
Telford, R., J. D. Boote, and C. L. Cooper. 2004. What does it mean to involve consumers successfully in NHS research? A consensus study. Health Expectations 7(3):209–220.
Tuckman, B. W. 1965. Developmental sequence in small groups. Psychological Bulletin 63:384–399.
van Wersch, A., and M. Eccles. 1999. Patient involvement in evidence-based health in relation to clinical guidelines. In The evidence-based primary care handbook. Edited by M. Gabbay. London, UK: Royal Society of Medicine Press Ltd. Pp. 91–103.
van Wersch, A., and M. Eccles. 2001. Involvement of consumers in the development of evidence based clinical guidelines: Practical experiences from the North of England evidence based guideline development programme. Quality in Health Care 10(1):10–16.
Weinstein, M. C., and H. V. Fineberg. 1980. Clinical decision analysis. Philadelphia, PA: W. B. Saunders.
WHO (World Health Organization). 2008. WHO handbook for guideline development. Geneva, Switz.: WHO.
Williamson, C. 1998. The rise of doctor–patient working groups. BMJ 317(7169):1374–1377.
Woolf, S. 1991. AHCPR interim manual for clinical practice guideline development. AHCPR Pub. No. 91-0018. Rockville, MD: U.S. Department of Health and Human Services.