
Clinical Practice Guidelines: Directions for a New Program (1990)


Suggested Citation:"EVALUATION OF GUIDELINES." Institute of Medicine. 1990. Clinical Practice Guidelines: Directions for a New Program. Washington, DC: The National Academies Press. doi: 10.17226/1626.


IMPLEMENTATION AND EVALUATION

sponsors of guidelines is ensuring that outdated versions of guidelines are abandoned. To the extent that a set of guidelines has been integrated into the operations of thousands of local organizations and practitioner offices and incorporated into specialized computer software and information systems, this practical element of updating will be a particular challenge. (The analogue in medical practice is the abandonment of obsolete procedures and therapies.) In some cases, only parts of guidelines may need to be withdrawn—for instance, the clinical scope of guidelines may change without much need for modification of other elements.

EVALUATION OF GUIDELINES

The purpose of evaluation is to determine what outcomes—both desired and undesired, anticipated and unanticipated—have occurred as a result of a policy or program (Suchman, 1967). For practice guidelines, the primary outcome variables identified in the legislation are the quality, appropriateness, effectiveness, and cost of care provided to Medicare beneficiaries. However, an evaluation that concentrates solely on ultimate outcomes and ignores intervening events may be incapable of distinguishing why a policy succeeded or failed.

This chapter distinguishes two kinds of evaluation: practice evaluation and guidelines evaluation. Practice evaluation focuses on health care decisions and interventions, using a variety of methods. Some methods, for example, randomized clinical trials, explicitly evaluate the impact of clinical interventions on such health outcomes as mortality, morbidity, and quality of life (Institute of Medicine, 1985; Kanouse and Jacoby, 1988; Kosecoff et al., 1987; Lomas et al., 1989). Other methods, such as those employing medical review criteria and the other practice evaluation tools described in Chapter 2, typically do not assess outcomes but instead compare how actual (or proposed) practices match the practices set forth in review criteria or standards. These kinds of assessments assume that there are links between such practices and better health outcomes, although, as much of the quality-of-care literature makes clear, this is not always a viable assumption.

A second type of evaluation, which is the focus of the following discussion, is better described as a form of policy and program evaluation. The question is whether public and private policies and programs in this area have the effects intended; that is, do practice guidelines, as a policy instrument, affect clinical practice and health outcomes? This kind of evaluation can encompass every step in the development and implementation of guidelines and the intermediate outcomes of each of these steps. Evaluating the impact of guidelines means determining their major intended and unintended effects and, insofar as possible, the causes of these effects (or their absence).

A recent survey of practice guidelines activities conducted for the IOM concluded that, among organizations involved with guidelines, implementation and evaluation have received secondary emphasis compared with development and promulgation (Audet and Greenfield, 1989). Relatively few steps were under way or planned to evaluate the impact of guidelines on the cost, quality, and outcomes of care and on patient and practitioner satisfaction. This neglect of evaluation is unfortunate because the effectiveness of guidelines cannot be taken for granted.

Two areas of concern can be raised with respect to the evaluation of practice guidelines. First are narrow issues relating to specific legislative requirements for DHHS. Second are broad questions about how to evaluate the impact of guidelines and build better policies and programs based on that evaluation. This report focuses on the first set of issues.

The necessary planning to meet OBRA 89 requirements should start now. Such planning is particularly important because the legislation's requirements raise several problems, which the committee understands are recognized by department officials and congressional staff. Most simply stated, although the legislation's provisions for evaluation are laudable, the 1993 timetable for evaluating the first three guidelines developed by the Forum is unrealistic. On the one hand, the guidelines probably will not have had time to make a measurable impact. On the other hand, even if the guidelines had had fairly immediate effects, the measurement data to document such effects would generally be unavailable. For example, insurance claims or other data showing changes in the use of procedures or practices may not be accessible in the time frame specified. Likewise, data on patient outcomes will take time to collect.

Rather than ask Congress for a change in the evaluation timetable, the Forum proposes to provide a status report as of January 1, 1993. The committee considers this appropriate so long as DHHS begins serious planning for the evaluation soon and takes steps to put the necessary data collection processes in place. As noted earlier, many of the steps needed for evaluation can and should be initially considered and specified as guidelines are developed. Indeed, a hallmark of good evaluation research is that planning for the evaluation begins before the program gets under way. Over the long term, the data development responsibilities of AHCPR can be used to support guidelines evaluation as well as outcomes and effectiveness research.

OBRA 89 requires evaluation of the impact of guidelines on quality, effectiveness, appropriateness, and cost of care. Information on intermediate outcomes or intervening variables is also important to determine such facts as whether the guidelines have, indeed, been received, read, understood, accepted, and remembered by practitioners and patients (Kaluzny, 1990;
