7
Evolution in Procedures and Methods for Developing Practice Guidelines

You didn't tell me I'd spend all my time plowing up snakes.

Chair of an AHCPR guidelines development panel, 1990

Involvement in the development of clinical practice guidelines is a learning experience that has both positive and negative features, as suggested by the above comment from one participant in the process. Involvement in implementation likewise provides lessons that are relevant to the process of developing guidelines. In examining the practical, technical, and policy questions about guidelines implementation and health care reform raised in the preceding chapters, the committee concluded that it needed to underscore the point made in Chapter 2: Planning for successful implementation begins with the development of guidelines. In the future, guidelines developers should give more and earlier attention to what will make guidelines practical and credible. This kind of early consideration will require both improvements in technical methods and greater sensitivity to how guidelines may be appropriately integrated into information systems, quality assurance programs, liability decision making, and cost-management efforts.

Fortunately, accelerating professional, governmental, and other involvement in the guidelines enterprise is reflected in two phenomena: the sheer amount of effort now seen and the increased focus on improving the development process and its products. This expansion of the field shows itself in several ways. Among them are the following:

  • maturation and specification of formal procedures and structures for guideline development;

  • growing appreciation of the complexity and importance of involving the appropriate kinds of individuals in guideline development; and

  • increased concern about competing and conflicting guidelines, locally adapted guidelines, and transformed versions of guidelines (such as medical review criteria).

More generally, experience with guidelines development is highlighting two rather different (but not mutually exclusive) emphases in or orientations to the process of guidelines development. One approach stresses the significance of the science base for guidelines and the use of quantitative modeling in systematically estimating and comparing outcomes. The other approach stresses professional judgment in areas in which the science base is weak or nonexistent. This duality need not and should not be seen as an unbridgeable dichotomy. Professional judgment must be applied to the science base, and science must inform professional judgment. When the science base is strong, however, it should not be disregarded in favor of consensus based on customary practice. When consensus is not consistent with the evidence, the case for consensus should be explicitly and persuasively argued.

This chapter begins with a brief discussion of how certain key players in the guidelines arena have evaluated and refined their organizational structures and procedures over the years. Following are several sections that examine persistent issues about methods for developing guidelines, approaches that selected groups have taken in dealing with these issues, and problems that warrant continued attention. A final section discusses the interface of development and implementation as it involves, first, conflicting "national" guidelines; second, local adaptation of existing or emerging "national" guidelines; and, third, formatting and dissemination of guidelines. The discussion of attributes for review criteria in Chapter 5 and the discussion of cost analysis in Chapter 6 also relate to the theme of this chapter. Although the focus here is on practice guidelines, much of this chapter is also relevant to development of medical review criteria.

GENERAL STRUCTURES AND PROCEDURES

As organizations recognize the demands of developing guidelines in a credible and accountable manner, those entities that plan an ongoing involvement tend to initiate commonly used organizational processes. They create supporting committees, staff positions, procedures, record-keeping systems, budget justification mechanisms, communications links, and eventually, with more difficulty, mechanisms for evaluating performance and results. Certainly, organizational resources constrain what can be established, but if resources are too limited to create and maintain such organizational structures, they may also be too limited to support the development of products consistent with the attributes set forth in Chapter 1.

The following examples indicate some key ways in which guidelines development is evolving. The first considers the early learning experience of the Agency for Health Care Policy and Research (AHCPR); the second and third examples involve one private and one public organization's efforts to evaluate their work and make their products more credible and useful to practitioners; the fourth example focuses on interorganizational cooperation as a way of building both relevance and credibility.

Learning Lessons: The Agency for Health Care Policy and Research

Not surprisingly, given its short existence and its pattern of rather substantial staff turnover (since September 1990, three directors and three contractors), the AHCPR Forum for Quality and Effectiveness in Health Care has found its structures and procedures to be somewhat in flux. Still, the richness and value of the first year's experience with the AHCPR panels should not be underestimated, and the Forum has made a serious effort to evaluate and build on that experience. When the Forum was barely into its second year, the staff organized a retreat to consider what panel participants and staff had learned from its first few guidelines panels. "Lessons learned" about these complex activities included the points below.1

First, the work of the guidelines panel chairpersons has proved vastly more demanding than had been originally envisioned. Current panel chairs and AHCPR staff believe that a commitment of at least 25 percent time is needed to handle these activities adequately.

Second, the literature reviews have been more time-consuming and in some senses more costly than expected. The literature searches, reviews, and analyses took as much as nine months from start to finish and cost anywhere from $22,000 to $235,000, evidently depending chiefly on the strategy used for the literature review and the size of the body of work that needed to be included. In some cases, they were also less rewarding than anticipated, owing in part to the difficulty of identifying (only) appropriate journal articles and similar materials through current National Library of Medicine indexing and coding systems.

Third, costs could probably be brought down somewhat if time constraints on the panels could be loosened and if more specific instructions for methodology were available. Trying to meet tight deadlines tends to be expensive. For example, to complete their work on time, some panels employed individuals more highly qualified than necessary to carry out the literature review; most made extensive use of Federal Express and overnight mail instead of regular mail. Participants in some panels argued for more centralized guidance about, for example, summary tables, schemes for rating evidence, and other details, which might help to minimize costs related to unnecessary "experimenting" in these areas.2

1 The points cited are based on unpublished materials ("Summary of Responses to a Questionnaire on Guideline Panel Activities and Views") prepared for the AHCPR Office of the Forum by Health Systems Research, Inc., January 24, 1991.

2 A consultant to AHCPR prepared an "Interim Manual" as a protocol for expert panels convened by the Forum; although dated October 1990, it was available in draft form earlier in that year (Woolf, 1990a).

Fourth, in general, the procedures for the first round of AHCPR panels were not uniformly helpful to the panels and their chairpersons. Participants in this first staff retreat, however, could not agree on what specifically they might leave unchanged and what they might modify. One area in which consensus did materialize was that a skilled methodologist should assist the chairperson in organizing the literature search, review, and analysis throughout the process.

The variability in views of the panel chairs and others engaged in the agency's early efforts is itself instructive. Certainly, much of the seeming inefficiency of the initial panels can be ascribed to the fact that the Forum was a new unit in a new agency performing a new function with little time for adequate advance planning—under those circumstances, the endeavors may have gone as smoothly as might have been expected. A second retreat may also be scheduled.

As described in Chapter 2, the agency has elected to sponsor some guidelines panels (on otitis media, rehabilitation following stroke, and congestive heart failure) through a contracting mechanism. The contractors are required to recommend chairpersons and approximately 15 members of the panels, according to an explicit set of criteria specified by AHCPR. Among those criteria are relevant training and clinical experience, interest in quality assurance and research on the clinical condition in question, capacity to lead a health care team and to respond to consumer concerns, a broad public health view, and a commitment to and prior experience in the development of clinical practice guidelines.

Building a Formal Program: The Clinical Efficacy Assessment Program

The work of the American College of Physicians (ACP) exemplifies the ongoing formalization of professional society efforts to develop guidelines (Morris, 1987; Ball, 1990; White and Ball, 1990). The ACP began its work on guidelines in 1976 in response to a request from what is now the Blue Cross and Blue Shield Association for assistance in assessing medical techniques; the initial effort was consensus based and relatively informal. In 1981, with a grant from the John A. Hartford Foundation, the ACP initiated a demonstration project, the Clinical Efficacy Assessment Project (CEAP). In 1984, the ACP established CEAP as a permanent program and in 1986 published its procedures for guidelines development (ACP, 1986).

During 1990, the college evaluated the CEAP effort on the principle that "any good policy making process must be both self-critical and, when called for, self-correcting" (White and Ball, 1990, p. 51). In one innovative step, the ACP convened focus groups to learn more about the utility and significance of its CEAP efforts. The review made clear that the ACP's decade of experience with CEAP laid the groundwork for experimenting with new models for guidelines development and evaluation.

In the 1990s, the college plans to strengthen its program. Among plans for the future are (1) using new methods for assessing data, including patient preferences; (2) revising formats for guidelines; (3) making draft guidelines available on line for a network of members who will pretest the guidelines and then measure patient outcomes when the guidelines are used according to specific protocols; (4) starting a formal convening activity to involve multidisciplinary groups in the development of guidelines; and (5) developing a systematic and perhaps new way of updating guidelines (Linda White, ACP, personal communication, August 1991).

The ACP is also working with researchers at Johns Hopkins Medical Institutions to survey ACP members about their knowledge, perceptions, and use of guidelines. Finally, the ACP has announced plans for a new center to link guidelines development and outcomes research and to try to determine more reliably the use of guidelines by physicians and their utility for these practitioners. In sum, the focus is very much on improving guidelines development and evaluation so that the products of these processes can be more readily and effectively adopted.

Improving Consensus Development: The National Institutes of Health

Over time, government agencies involved directly or indirectly with guideline development have, like professional societies, refined their procedures and methods. One example is the National Institutes of Health (NIH) Consensus Development Conference program, which is administered by the Office of Medical Applications of Research (OMAR, 1988). In the 1980s, OMAR undertook several assessments of the program. Some work was done internally—for example, trials of different mechanisms for running conferences and for disseminating consensus statements (Jacoby, 1983, 1985; Perry, 1987, 1988). Other evaluations were performed by outside parties (Wortman and Vinokur, 1982; Wortman et al., 1988), culminating in a lengthy and rigorous evaluation conducted by RAND Corporation researchers of the content of consensus statements and their impact in terms of different behaviors on the part of physicians (Kosecoff et al., 1987; Kahan et al., 1988; Kanouse et al., 1989).

More recently, an IOM study committee examined OMAR's program and made several recommendations about its structure and functions (IOM, 1990d).3 The committee called for, among other things, greater emphasis on the concerns of users of consensus statements, with an acknowledgment that the program's fundamental purpose should be "to change behavior toward appropriate use of health practices and technologies" (p. 1). Other issues that the committee addressed were topic selection; better collection, analysis, and use of scientific data before a given conference; attention to dissemination strategies; continued experimentation and self- (or outside) evaluation; and appointment of an external advisory council to assist OMAR in setting its agenda. Again, the objective of these recommendations was to make consensus statements more usable and useful.

3 In addition, in 1989 the IOM organized a workshop on international consensus development programs in conjunction with an annual meeting of the International Society for Technology Assessment in Health Care (IOM, 1990g). A group of workshop participants developed a lengthy set of recommendations about strengthening such programs. Procedures and methods figured prominently in those recommendations and presaged many of the points made by the IOM practice guidelines study committees. For example, these recommendations concern documentation, use of the best available scientific evidence (including meta-analysis where possible), monitoring and review to determine if recommendations need to be reassessed, and attention to information dissemination and evaluation at the outset of development.

Interorganizational Cooperation: Medical Societies and Others

Moving beyond the internal use of multidisciplinary processes, several organizations are looking for opportunities to join formally with other groups in cooperative efforts. To date, collaborations appear to involve mainly physician organizations, although a few involve nonphysician professional groups, research organizations, and payers.

One notable effort aimed at training guidelines developers rather than developing guidelines per se has been sponsored by the John A. Hartford Foundation and the Council of Medical Specialty Societies. Activities have included a training course in which specialty society participants developed guidelines and an introductory manual for developing guidelines (Eddy, 1991c). Reflecting the challenges faced by those who work across specialty lines, workshops on resolving interspecialty conflicts have been another feature of this initiative.

Organizational cooperation, whether within or across professional boundaries, can serve several aims. They include the following:

  • greater efficiency through pooling of resources and expertise for methods development, training, and problem solving;

  • learning from shared experience;

  • better anticipation of user circumstances and concerns; and

  • development of commitment and support for implementation from individuals with varied professional and institutional affiliations.

One complex effort involving the American Medical Association (AMA), the RAND Corporation, and a consortium of academic medical centers has already been cited in Chapter 2. This effort has been multiorganizational less in its individual components—indicator construction, guidelines development, and guidelines testing—than in its attempt to create planned links among these activities. The complexity of coupling organizations (not just individuals from different disciplines) has made this project quite difficult to negotiate, execute, and maintain.

Two other multiorganizational efforts led by the AMA were also noted in Chapter 2: the Specialty Society Partnership, involving the AMA and 14 national medical specialty societies, and the Practice Parameters Forum, comprising national medical specialty and state medical societies. Two major objectives of the AMA and the groups working with it have been to devise criteria for judging the soundness of the process for developing practice parameters and then to establish a process for judging specific parameters according to these criteria and perhaps endorsing those that pass (AMA, 1990a). The first criterion is that guidelines should be developed by physician organizations. Reflecting the weight that most professional organizations place on individual professional judgment, the AMA's assessment effort concentrates on process and documentation; it assumes that expert health professionals will have ensured that guidelines correspond to scientific knowledge.

The long-standing collaboration between the Blue Cross and Blue Shield Association and the ACP has produced two seminal handbooks: Common Diagnostic Tests (Sox, 1987, 1990) and Common Screening Tests (Eddy, 1991a). These handbooks include papers that systematically analyze research, project outcomes, estimate cost-effectiveness, and recommend practices based on the strength of the evidence concerning each test. Each handbook concludes with summary recommendations (guidelines) intended to aid health benefit plans in making coverage decisions. The first handbook prompted considerable furor: it was hailed by the health services, technology assessment, and quality assurance communities and decried by at least some members of the practice community. Criticism soon gave way to acknowledgment of its major contribution to more effective and appropriate clinical decision making, and the major charge that has been made against the second edition is that not all the groups that wanted to be involved in its development were included.

Several other collaborative efforts can be cited. For example, the ACP, the American Academy of Ophthalmology, and the American Diabetes Association are cooperating on guidelines for management of diabetic retinopathy. The American College of Cardiology and the American Heart Association have had an ongoing collaboration for the past decade. This arrangement has produced nearly a dozen guidelines, which have been published in the Journal of the American College of Cardiology and in Circulation. The most recent have been for coronary artery bypass graft surgery and for implantation of cardiac pacemakers and antiarrhythmia devices; future guidelines are planned for cardiac catheterization and cardiac catheterization laboratories, electrocardiography, chest pain management in the emergency room, and cardiac radionuclide imaging (a revision of a guideline released in 1986).

PERSISTENT QUESTIONS FOR THE DEVELOPMENT PROCESS

Participation in the Process

As the guidelines development process evolves, more attention is being paid to who takes part in the process, when and how they participate, and what such participation should achieve. This interest reflects both the increasing sophistication of sponsors and the increasing visibility of the process and its products. Several persistent debates about participation can be identified, although the general trend seems to be to expand the scope of involvement. This subsection briefly reviews participative patterns to date; the next subsection addresses points about the development process from a more methodologic stance.

Creating Guidelines Panels and Selecting Panel Members

One issue related to panel selection is whether organizations assemble panels for each guideline or create standing groups. OMAR (for the NIH Consensus Development Conferences) and AHCPR establish independent panels for each guideline. RAND similarly creates a new panel for each technology, procedure, or condition for which it develops appropriateness indicators. By contrast, the Canadian Task Force on the Periodic Health Examination and the U.S. Preventive Services Task Force (USPSTF) have stable, standing panels and expert consultants and engage in a continuous process of revising previous recommendations and addressing new topics. The ACP process occupies the middle ground, with a standing oversight committee but selected experts who are engaged to develop guidelines on specific topics. Although no evidence exists on the subject, it is likely that each strategy is appropriate for different circumstances.

Principles for selecting members of guidelines panels differ along two major dimensions: (1) the generalist-specialist dimension and (2) the physician-nonphysician dimension. The former dimension has been a major concern of physician groups and involves three subdimensions: primary care versus specialist physicians, community-based versus academic physicians, and specialists in related fields (for example, the role of a cardiologist in guidelines developed by thoracic surgeons). Questions of expertise and turf are not the only limiting factors: any group tends to find it easier to organize, communicate with, and rely on its own members than on nonmembers.

The second dimension of selection—physician-nonphysician—is debated by most groups that engage in guideline development. It tends to dissolve into two questions: Should other clinicians and health professionals, such as nurses, therapists, health educators, and nutritionists, be involved, and if so how and how much? Similarly, what should be the role of patients and consumers, payers, administrators, and public officials? Beyond these groups, any number of other types of interested parties and experts may wish or need to be involved in the development of specific guidelines. Among those who might be considered, in the former case, are representatives of voluntary patient and disease groups or representatives of affected provider associations (e.g., hospital or home health agency associations). Involvement in the latter case might comprise expert clinical consultants, expert consultants in other disciplines (economics, law, outcomes measurement), and other methodologists (e.g., those skilled in meta-analysis).

The great majority of specialty organizations apparently rely on expert panels composed entirely of physicians in their own specialty. Exceptions to this rule include AHCPR panels, which include primary and specialty physicians, nurses, selected allied health disciplines as appropriate, and consumers. NIH Consensus Development Conference panels may include Ph.D. researchers in addition to physicians and other types of clinicians. Similarly, RAND panels for developing appropriateness criteria and some ACP panels have gone beyond the specialty-specific approach. Site visits for this study suggested that institutional providers (e.g., hospitals, HMOs) that develop guidelines for internal use also are more likely to include different types of clinicians and health professionals.

Selecting Reviewers of Draft Guidelines

Identification of reviewers for sets of draft guidelines is another area in which groups may differ substantially in how they select participants—assuming that they have a process for reviewing draft documents at all. Debates about physician and nonphysician involvement tend to reappear at this stage. Nonetheless, whatever the position of an organization with respect to composition of guidelines panels, it tends at this stage to broaden its range of participants.

The USPSTF, for example, sent its recommendations and draft background papers for review by more than 300 medical, public health, and "other" experts, including individuals in government health agencies, the U.S. Public Health Service, academic medical centers, and medical organizations. The recommendations were revised if a reviewer identified relevant studies not examined in the report, misinterpretations of findings, or other issues deserving revision within the constraints of the Task Force methodology.

The format of this [the Task Force's] report was designed in consultation with representatives of medical specialty organizations, including the American Medical Association, the American College of Physicians, the American Academy of Family Physicians, the American Academy of Pediatrics, the American College of Obstetricians and Gynecologists, the American College of Preventive Medicine, the American Dental Association, and the American Osteopathic Association (USPSTF, 1989, p. xxxvii).

The AHCPR guidelines are a special case because they are the products of nongovernmental panels supported with federal funds. The government's internal review of the guidelines examines only the process by which they were developed; an elaborate external review and pilot-testing process is being implemented to consider the soundness of the guidelines themselves. For the guidelines to be developed through the contracting mechanism, four drafts of guidelines are required. The third draft will be reviewed by an outside group of "peer reviewers," and the fourth (that is, the version produced after the peer review process) will be subjected to pilot-testing. Based on comments from the pilot-testers, a fifth and final version of the guideline is to be submitted to AHCPR.

Increasing concern about the practical needs of professionals is reflected in the recent activities of the American Society of Internal Medicine (ASIM) and its Internal Medicine Center to Advance Research and Education (IMCARE, 1990; Simmons, 1990). The center has created an innovative Guidelines Network that will not develop guidelines but instead organize network internists to review, upon request, the guidelines of other organizations. The intent is to provide greater insight into how well a guideline may work in clinical practice. For guidelines developed by a subspecialty but intended for use by all internists, network members offer broad-based feedback beyond the subspecialty.

More than 400 internists nationwide, including physicians in general and subspecialty internal medicine, have contacted the network about being volunteer reviewers. Furthermore, to broaden participation, internal medicine-related organizations are being asked to suggest additional volunteers. Although network members are primarily ASIM members, ASIM membership is not a requirement. IMCARE plans to establish an advisory panel on an annual basis; 1991 appointments were announced in March (IMCARE, 1991).

The IMCARE guidelines network is informing AHCPR and other organizations of its availability to aid their guideline development or evaluation efforts. One early activity of IMCARE involved this IOM project. Specifically, the center staff organized an evaluation of the IOM's draft guidelines assessment instrument, which appears in revised form in Appendix B. That review produced 65 useful responses (and an overall summary) in a relatively short turnaround time.

Another review strategy is typified by the ACP's practice of publishing background papers and policy statements in the Annals of Internal Medicine. This opens the analyses and guidelines to very broad professional and scientific scrutiny. In general, the ACP has instituted a sort of "due process" by seeking the opinions of any agency, group, or individual with a potential vested interest.

Updating Existing Guidelines

"Scheduled review," one desirable attribute for practice guidelines, asserts that guideline documents should state when a guideline ought to be revisited and what information would trigger a detailed review and possible change in or withdrawal of the guideline. Generally, such statements put users on notice that the developer group may not or will not stand behind the guideline in its current form once the deadline has arrived.

Given the acute sensitivity of professional organizations to advancing medical knowledge, the need for such a review process appears to be well understood, although in practice it may be implemented to differing degrees. The General Accounting Office's (GAO, 1991b) survey of medical specialty societies found that most of the groups had discussed a process of periodic review and updating of guidelines but that not all had begun (or had even begun planning) such a process. Of those societies with plans or programs, seven planned annual reviews and one planned a 10-year review with earlier revisiting of the guideline if the need was clear. One society invokes a "sunset" provision by stating that guidelines will expire after 3 years and must be rewritten (unless they have been revised in the interim).

As organizations continue to formalize their guidelines development activities, a typical goal is to establish a formal review process to determine if and when guidelines need updating or other action. Formal updating activities can involve specifying a target review date when a guideline is first proposed and reinstating a former guidelines panel or appointing a new one. Alternatively, a periodic or rolling review process can be established that routinely covers all guidelines. The Canadian Task

ular patient population (e.g., mobility of migrant workers) differ dramatically from the situations contemplated by a set of guidelines, some modification in recommended preventive, diagnostic, or treatment regimens may be reasonable. Similarly, some health delivery systems and institutions may face constraints that are unchangeable in the short term. These constraints might involve regulatory prohibitions, lack of equipment, or shortages of personnel. Such problems may prompt adaptations that define protocols for situations in which care must be provided but the most appropriate course of care is impossible to implement.

Chapter 6 recognized that national guidelines may not incorporate judgments of cost-effectiveness, which some organizations believe they must have to allocate limited resources in a manner consistent with their objectives and environments. Other organizations may seek to apply continuous quality improvement precepts to narrow variations in practice. The result of both these policies may be guidelines that exclude certain options, on the grounds that they are too costly relative to their benefits, or that delineate specific "pathways" or "protocols" that are less variable than those described in a set of national guidelines. A typical example of a narrowing in guidelines occurs when an organization or a public agency (e.g., a state Medicaid program) creates a drug formulary that does not include all of the drugs that are considered reasonable options for treating certain problems. Depending on the extent to which an institution intends to constrain its financial liability for the use of costly but optional forms of care, patient preferences may be accorded greater or lesser weight than they are in national guidelines.

Another rationale given for the adaptation of guidelines is behavioral. Some argue that it is important to secure practitioner (and, less commonly, patient or enrollee) acceptance of guidelines through participation in their adoption. Some departures from national guidelines are viewed as acceptable when it is thought that such variation will lead to the actual use of the most critical elements in guidelines rather than to their rejection. The committee had mixed feelings about this rationale, and this discussion should not be seen as a justification for wholesale or casual departure from well-documented, science-based guidelines for clinical practice.

Finally, generally unstated rationales for local adaptation may be to protect professional habits and local customs for their own sake and to protect economic self-interest by endorsing unnecessary care or care that others could provide as well or more economically. For example, a guideline that did not specifically limit the type of practitioner who could perform certain kinds of eye examinations might be reworked to restrict the practice only to physicians or to particular specialists. Committee members were distinctly unsympathetic to such practices and to rationales for guidelines that were covertly designed to protect habit, "turf," or income at the expense of patients and those who pay for their care.

In short, the main concern here is with fundamental departures from an existing scientifically based, well-documented set of guidelines. Among such changes would be designating certain practices appropriate when national guidelines define them as inappropriate, labeling a practice optional rather than recommended or vice versa, or changing threshold values for making treatment decisions.

When local institutions do adapt national guidelines, one useful step might be for them to notify the originating group and to explain the circumstances that led to their modifications. Whether national guidelines could or should be revised to accommodate or recognize these circumstances will depend on the specifics (for example, the likelihood that the same circumstances will occur more generally). If this process of communication and consideration became established, it would provide an ongoing—if not always systematic—source of feedback for revising and improving guidelines.

Processes for Local Adaptation

Local programs to adapt guidelines vary greatly in the formality of their processes and structures, but they appear generally to be a less sophisticated, less rigorous kind of effort than that endorsed by this committee. One effort located toward the sophisticated, science-based end of this spectrum is the work by Group Health Cooperative of Puget Sound (GHCPS) to develop a preventive care manual for its primary care practitioners. This activity, described in Chapter 6, reflects a specific objective (focusing resources on high-risk groups) and specific organizational characteristics (for example, an enrolled population and integrated patient records). One particular task for GHCPS has been to reconcile or choose from among inconsistent guidelines from different sources, although the materials available to this committee do not fully explicate the basis for different choices.

The less scientific, more behavioral or strategic approach is represented by one of the groups visited by the committee. This organization was developing guidelines or pathways for the care of patients admitted for certain common clinical procedures. Those involved did not employ a systematic process to identify and assess the scientific literature, estimate health outcomes, explain the rationale for the pathway, or document these steps. The pathways were presented as charts to advise clinicians on generally desired practices and to reduce variability in patient care. Although this last process did not incorporate systematic use of the scientific literature on a clinical problem, it was systematic and data oriented in that it identified topics for pathway development based, in part, on the variability in existing practice patterns. Practitioners received periodic reports on how their performance compared with that of their peers and with the pathway.

Within a framework such as that offered by continuous quality improvement, empirical and incremental testing and modification of guidelines may well be appropriate (indeed, even necessary). Such testing may not conform to the highest standards of experimental research design, but it can provide a systematic, practical, and direct means of identifying where guidelines—as well as clinical practice—may need modification. Ideally, this kind of local but systematic information will become part of the broader evolutionary framework for guidelines development and improvement as national and local groups develop communication and tracking mechanisms.

Other local processes may be fairly unsystematic. They involve no analysis of local patterns of care, no explicit formulation of objectives, no literature review, no formal decision making processes, and no documentation of evidence or rationales for decisions. This method might be called a "back of the envelope" approach to guideline development. Even when the rationale is worthy, this "back of the envelope" approach to adapting or developing guidelines (or medical review criteria) is unacceptable. It offers too much leeway, on the one hand, for uncritical accommodation of local traditions and narrow self-interest and, on the other, for excessive and unwarranted interference with physician-patient decision making.

The Standing of "Adapted" Guidelines

Adaptation processes intended to win physician acceptance of guidelines—the behavioral rationale for adaptation—should be guided and constrained by an expectation that the resulting guidelines and criteria will still be credible in their process, rationale, and documentation. The requirement for systematic and careful procedures applies as well to de novo development activities and efforts to devise medical review criteria. Where carefully developed and documented "national" guidelines exist, local adaptation processes should provide explicit rationales for changes that relate to specific, well-defined local conditions or objectives.

If national guidelines are in one way or another accorded legal stature with respect to malpractice liability (or immunity from liability), then serious attention must be given to the stature of guidelines that are modified to suit local circumstances or preferences and, possibly, to the criteria used to evaluate the quality of care that is rendered. Even if it can be shown that these derivations of existing national guidelines were arrived at through procedures similar to those that produced the original guideline, they may or may not enjoy the same legal stature as the originals. Although the "respectable minority" doctrine described in Chapter 5 could accommodate some differences, it would be troublesome were it to justify departures from guidelines that are based on strong scientific evidence and consensus. Given the evolving views about the relationship between malpractice and guidelines in general, this issue is quite speculative at this time.

Formatting and Dissemination

For the purposes of this report, effective formatting means presenting guidelines in physical arrangements or media that can be readily understood and applied by practitioners, patients, or other intended user groups. Effective dissemination means delivering guidelines to their intended audiences in ways that promote the reception, understanding, acceptance, application, and positive impact of the guidelines. For the purposes of this discussion, effective dissemination presupposes effective formatting, and the discussion centers on the former. Appendix A discusses and illustrates some approaches to formatting guidelines.

Dissemination is in part an answer to the question: "Suppose I want a guideline for something. What do I need to do to find it?" Two broad possibilities exist. First, organizations currently producing guidelines probably have distributed them or related materials, and the questioner may well have filed the documents so that they can be retrieved. Second, the relevant guidelines may have been acquired by a general information resource such as the National Library of Medicine (NLM) and entered into a data base that can be queried on a wide array of topics.

Sponsors and developers of guidelines usually take responsibility for their initial dissemination to major target audiences, often either physicians or nurses. For example, many specialty societies, such as the ACP and the American College of Cardiology, begin their dissemination efforts by publishing individual guidelines in their journals, which all members receive. The GAO survey (1991b) reported that societies also publish in newsletters, the journals of other societies, and other places. For some types of guidelines, particularly for collections dealing with similar clinical issues, the initial step may be direct distribution of the guidelines to members. The American Academy of Pediatrics does this every other year with its Report of the Committee on Infectious Diseases, familiarly known as the Red Book (AAP, 1991). Specialty societies may also distribute guidelines to other societies, to federal agencies, and to selected audiences in the health care and medical education communities. Press conferences and press releases may accompany such publications. One strength of these kinds of dissemination activities is that they are part of an ongoing process. They have an institutional past and a future that should help build both awareness and acceptance, at least among members of the sponsoring organization and eventually among outsiders as well.

Following initial dissemination steps,9 guideline developers may proceed with an array of activities such as cooperating with other interested parties in disseminating information to patients or consumers. This is where the second response noted above comes into play. The lay press, patient groups, computerized information systems, and directories may begin to make guidelines more widely available or known to practitioners, patients, and others. As noted earlier, the AMA publishes quarterly update listings of guidelines developed by both the AMA and specialty societies. In addition, publications are emerging that reprint or summarize selected guidelines or otherwise report on the field; the Report on Medical Guidelines & Outcomes Research, published by Health & Sciences Communications and now nearing the end of its second year, is an example. The NLM, as described elsewhere in this report, will store, index, and otherwise make available information on practice guidelines, specifically including those from AHCPR panels.

9 In addition to publishing guidelines (in various media) and generally publicizing the availability of the guidelines document, disseminating organizations may also respond to requests for and inquiries about the guidelines and undertake similar tasks. Dissemination should also be understood to include any efforts needed to inform users of mistakes ("errata" or corrections, in publishing terms) and to advise users that existing guidelines are being withdrawn or revised.

Those involved in the development and use of guidelines are paying increasing attention to a series of strategic "who, what, why, when, and how" questions. Specifically: Who do you want to reach and why? What do they need? How quickly do you want to reach them? What relevant techniques are available, and how do they vary in effectiveness and cost? Answers to these questions will influence some dissemination decisions such as whether to use professional or mass media, direct mailings, or journal publication. The length and complexity of the guideline will also influence the choice of dissemination technique. As noted in Chapter 4, options for dissemination now include a variety of computer-based tools including on-line literature search systems, floppy disks, and CD-ROM disks.

Other decisions will be contingent on a variety of environmental factors. What are the opportunities for dissemination and application within the intended audience? What are the barriers? How can different dissemination strategies be combined and coordinated with other implementation strategies to increase the probability of effective application of guidelines? Answers to these questions will yield ideas about who else will be or should be involved in dissemination, whether it should be a one-time effort or a continuous process, and what resources are needed. Many of the issues raised in the discussion of education in Chapter 4 will apply here as well.

Several specific factors related to dissemination might be considered legitimate and realistic concerns of guidelines developers, even if developers do not actually carry out dissemination activities. Among these are characteristics of the target audience, timeliness and number of dissemination efforts, and the planned publishing, publicizing, and distribution of the guidelines. Depending on the combinations of these factors, dissemination activities might be considered relatively narrow and weak or relatively broad and robust.

The number of independent dissemination efforts—for instance, a one-shot announcement or several sequenced activities—may also influence the eventual impact. Again, developers may need to be aware of plans in this area so as to be available for comment or interpretation over a longer or shorter term.

The impact of the guideline might also be affected by the timeliness or urgency of the dissemination effort. For example, some guidelines might be rushed into print in a special journal issue or put on a fast-track publication schedule; others may be published in a more routine manner. Developers may need to be sensitive to the significance of their work so that they can accommodate it to the demands or expectations of such schedules.

In addition, the nature of the publication(s) may have implications for what guideline developers do (and for their length of service on a guideline panel). Guidelines may appear in their entirety, as synopses, or both; furthermore, they may appear in different formats and languages. The AHCPR guidelines are a case in point.10 As this report was being prepared, the agency was planning to produce three versions of the guidelines aimed at the professional community: (1) the full technical guideline plus all documentation (biosketches of panel members, description of the processes followed, results of the literature review and analysis, recommendations, references, etc.); (2) a shorter version that includes the full set of recommendations and the entire bibliography; and (3) a pocket-sized, "quick reference" version that summarizes just the recommendations. (These have been referred to variously as "Papa Bear, Mama Bear, and Baby Bear" and the "500-page, 50-page, and 5-page" versions.) The agency appears to be focusing its broadest dissemination efforts on the shortest version as the one most likely to be sought out or read, once it has been noticed. For some topics or conditions that cut across all age groups, these three types of publications will be produced separately for adult and pediatric populations. Plans also call for consumer versions of at least the smallest version. Finally, editions of the consumer brochure in both English and Spanish are planned.

10 Dissemination activities will be handled by the Center for Research Dissemination and Liaison at AHCPR, not by the Office of the Forum.

At least one of the shorter versions (probably the medium-length one) will be available through the NLM's on-line capabilities. Those who request the longest (full) technical document from AHCPR's Center for Research Dissemination and Liaison will receive it by mail, although whether the Center will make it available free of charge or for a nominal amount is not yet decided. The NLM will probably forward orders for the full document to the Center for handling.

"Version-specific" dissemination plans are still under discussion. The Journal of the American Medical Association may publish the announcement of the guideline and the shortest, clinician version of it; AHCPR will encourage relevant specialty societies to announce the guideline as well. Some thought is being given to dissemination of the consumer version through mass print media, such as Good Housekeeping, Ladies Home Journal, and the like. Other avenues of dissemination being considered include an 800 telephone number for inquiries (1-800-358-9295); other, more sophisticated marketing strategies are also being explored.

Publicizing the guidelines, as contrasted with publishing them, may be another activity to which developers should be attentive. Public relations and marketing activities in such cases might range from the printing of an announcement of the availability of the guidelines, to a formal press release, briefing, or conference,11 to announcements broadcast through newsletters, journals, and computer bulletin boards, to even more elaborate strategies and combinations of strategies.

11 The production of the first three AHCPR guidelines was accorded such significance that as of late 1991, plans were being developed to convene a press conference at which the Secretary of Health and Human Services would present at least the first of the guidelines (on postoperative pain management). Chairs of the panels and staff of the AHCPR and the Forum would be present and representatives of relevant specialty societies and professional associations would be invited to give statements concerning at least the aim of the effort and the process followed.

A final set of decisions concerning the distribution of guidelines may have little direct effect on what developers do but may well affect the long-run impact of what they produce. These decisions involve the question of whether guidelines documents (or synopses, or both) are made available free of charge or at some price (and, if the latter, what that price might be). For example, guidelines developed under AHCPR auspices and made available through the NLM may be free of charge except for the nominal charges of the NLM for connection times to the relevant bibliographic and retrieval services.

Dissemination of information or guidelines is by itself insufficient to induce use of that information or to change behavior; indeed, excessive distribution of information to physicians or other clinicians can lead to a significant problem of information overload with no redeeming change in practice patterns or habits. Nonetheless, bringing guidelines to people's attention, and making them available as requested or required, are precursors to more direct efforts to influence behavior. Recognition of that reality and appropriate planning for dissemination are thus important components of what guideline developers need to do in the future.

Evaluating Impact

If formatting and dissemination operate at the interface at which development begins to shift over to implementation, then evaluation operates at the interface at which the results of implementation are fed back to improve and revise guidelines. Although evaluation of the impact of guidelines is not fundamentally a task for developers, the latter can be presumed to have at least an interest in learning what effects, if any, their work has had. Some groups may, in fact, have sufficient concern about what influence their guidelines are having to carry out various evaluation efforts; others may simply cooperate with outside evaluation activities. This chapter briefly raises the subject, therefore, on the grounds that those in the business of developing guidelines will have concerns about, if not direct involvement in, assessing the effects of their efforts.

As professional societies, public agencies, and others assess their involvement in developing guidelines, they eventually face questions about results. For example, do practitioners, patients, payers, and others even know the guidelines exist? Do they think the guidelines are credible and usable? Do they, in fact, use them? How do guidelines affect patient decisions and behavior? Are guidelines having any impact on health outcomes, payment decisions, medical liability, costs, or other factors?

In general, groups have confronted these questions after they have developed several guidelines and have not built evaluation of impact into their programs (Audet et al., 1990). This approach is beginning to change, however, as organizations consider whether their financial and volunteer resources are being constructively used. For example, the GAO (1991b) survey reported that at least four medical societies were interested in evaluating the impact of their guidelines. The ACP self-evaluation of the CEAP activity has already been noted.

Focus groups and surveys are relatively inexpensive means of evaluating results, but they are also relatively weak research strategies in a world where the double-blind randomized clinical trial is the ideal. This report has described one randomized clinical trial involving the use of computer-based reminders for preventive care (McDonald et al., 1984); at least one other similar trial involving hospital admission testing guidelines has been planned (Audet et al., 1990). As noted in Chapter 4, some research has attempted to compare the results of different strategies for informing and educating practitioners about guidelines.

A recently completed but not yet published evaluation sponsored by a large managed care organization examined several questions (Audet et al., 1990): (1) Does practitioner participation in the process of guideline development affect subsequent use by practitioners of the guidelines? (2) Do guidelines decrease resource utilization? (3) If guidelines do reduce utilization, is it at the expense of quality of care, as reflected in patient outcomes?

A recently completed but not yet published evaluation sponsored by a large managed care organization examined several questions (Audet et al., 1990): (1) Does practitioner participation in the process of guideline development affect practitioners' subsequent use of the guidelines? (2) Do guidelines decrease resource utilization? (3) If guidelines do reduce utilization, is it at the expense of quality of care, as reflected in patient outcomes? Preliminary results indicate that "physician generated guidelines codified parsimonious practices, which had a salutary and not negative effect on patient outcomes" (Greenfield, 1991). However, physicians who developed guidelines were no more likely to change practices (for example, to order diagnostic tests more conservatively) than were those who were not involved. The organizational response to these preliminary findings is that involvement in guidelines development is not a sufficient stimulus for change and that guidelines must become part of an integrated quality improvement strategy.

Efforts to evaluate the impact of guidelines require both interest and resources. As expensive and methodologically demanding as guideline development is, evaluation of the impact of guidelines is even more demanding. Partly for this reason and partly because the guideline development enterprise is still relatively young, evaluation projects are likely to remain relatively uncommon. To the extent that evaluations are undertaken, they may have more in common with models of program evaluation than with models of clinical biomedical research.

One organization with a clear mandate to undertake evaluation of guidelines is AHCPR. Under OBRA 89, it is required to determine the impact of its first three guidelines on the cost, quality, appropriateness, and effectiveness of health care and to report these findings to Congress by January 1, 1993. As of late 1991, AHCPR's attention was solidly focused on development of guidelines and related medical review criteria; none of the guidelines due by January 1, 1991, had yet been published. Possible activities were still under discussion, and no formal research plan had been made public. (The effort to develop review criteria, described in Chapter 5, includes some provisions for testing their use and impact.) This lack of progress on impact evaluation is not surprising, given the unrealistic deadlines faced by AHCPR.[12] The agency's 1993 report to Congress will be a status report of activities in progress and planned; with respect to the actual impact of guidelines, some proxy measures (e.g., media citations) may be generated. In addition, lessons learned as the initial guidelines panels pretested their draft guidelines will be a form of impact evaluation.

[12] The timetable is unrealistic for several reasons. First, the guidelines will probably not have had time to make a measurable impact on health, cost, or other outcomes; this would probably be true even if the first three had been published on schedule. Second, even if the guidelines have fairly immediate effects, the data to document such effects will generally be unavailable. For example, insurance claims or other data showing changes in the use of procedures or practices may not be accessible in the time frame specified. Likewise, data on patient outcomes will take time to collect.

Among the proposed activities will be a set of internal projects, grants, and contracts with a mix of short- and longer-term objectives—for example, to consider the impact of guidelines for preventive services in inner cities, to investigate the impact of dental surgery guidelines, and to evaluate interactive videodisc technologies to encourage behavioral change (Linda Demlo, AHCPR, personal communication, October 1991).

In its 1990 report, the IOM noted that explanations of policy success or failure, in general, needed to consider the following:

  • the validity of the policy premises—for example, the assumption of many policy makers that broader development and use of practice guidelines will achieve significant cost savings;

  • the quality of the implementation process—for example, the extent to which information was disseminated or incentives were created for the use of guidelines;

  • the existence of countervailing events—for example, court decisions limiting the ability of health care organizations or payers to review the appropriateness of care and then deny either practice privileges or payment for practitioners providing inappropriate care; and

  • the nature of supportive or enabling conditions—for example, the breadth of professional interest in the topic covered by the guidelines or a technical breakthrough in access to computer-based information systems.

Even groups that cannot contemplate rigorous evaluation may benefit by considering what would be required to evaluate their guidelines. What would they consider success? What potential adverse consequences should be tracked? What information about the clinical problem, the patient's circumstances and preferences, and the delivery setting should be recorded to permit later evaluation of the processes and outcomes of care? What confounding factors should be considered? Are there intermediate steps that might be usefully monitored? Short of full-scale evaluation, what might users of guidelines do to assess short- or long-term results? Might these users be encouraged to undertake some evaluation on their own or perhaps in collaboration with the guidelines development group? Some attention to these and similar questions may help developers of guidelines identify previously unsuspected opportunities for evaluation. It may also intensify their interest in finding resources to support evaluation and to refine the way they approach the process of developing guidelines.

SUMMARY

More resources and more systematic procedures do not guarantee good guidelines, but the committee reiterates that guidelines development is a serious enterprise that deserves careful planning and execution. The committee observed several promising efforts at improving structures and processes for developing guidelines, strengthening methods, and incorporating more attention to implementation, evaluation, and revision. This last step is critical to effective, comprehensive application of guidelines. The attitudes, needs, and circumstances of practitioners, patients, and other users of clinical practice guidelines must be anticipated and considered from the earliest stages of guidelines development if guidelines are to be applied to achieve their goals. Likewise, evaluation issues—the intended effects of guidelines, means of measuring impact, potential confounding factors—have to be considered when guidelines are being framed rather than dealt with after the fact.

The next, concluding chapter of the report brings together this committee's principal conclusions and recommendations about the clinical practice guidelines enterprise. It does so in some comfort with the progress that the field has made in recent years, taking it as a good omen of the progress that can be made on the many conceptual, practical, methodological, and political challenges that still remain.