IMPLEMENTATION AND EVALUATION

Lomas, 1990). Such information is essential to understanding why a set of guidelines has or has not achieved its desired outcomes and to determining whether to continue, revise, or abandon the guidelines. In general, explanations for policy success or failure need to consider evidence about the following:

• the validity of the policy premises, for example, the assumption of many policymakers that broader development and use of practice guidelines will achieve significant cost savings;
• the quality of the implementation process, for example, the extent to which information was disseminated or incentives were created for the use of the guidelines;
• the existence of countervailing events, for example, court decisions limiting the ability of health care organizations or payers to review the appropriateness of care and then deny either practice privileges or payment for practitioners providing inappropriate care; and
• the nature of supportive or enabling conditions, for example, the breadth of professional interest in the topic covered by the guidelines or a technical breakthrough in access to computer-based information systems.

This chapter has suggested several aspects of the implementation processes, for both the government program and the guidelines themselves, that warrant assessment. These aspects include (1) the effectiveness of different formats for a given guideline, (2) the impact of different dissemination strategies for different audiences, and (3) the role of alternative means of promoting day-to-day application of the guidelines. Indeed, the entire process of guidelines development will surely need investigation over time. No one approach is likely, in the short run (if ever), to prove definitively superior, although unsatisfactory methods can be identified and the strengths and weaknesses of other methods can be better understood.
In addition, evaluation of the cost-effectiveness of different implementation activities could help in making decisions about how to allocate limited government and private resources.

CONCLUSION

The charge to this IOM committee was narrow: to provide timely advice to AHCPR on its initial steps to meet its responsibilities for practice guidelines under OBRA 89. To that end, the committee focused on definitions and attributes for practice guidelines and on certain aspects of guidelines implementation and evaluation. For the latter, the emphasis was on how implementation and evaluation decisions can relate to and reinforce such attributes of guidelines as validity and reliability.