ATTRIBUTES OF GOOD PRACTICE GUIDELINES

approaches require that analysts guard against any application of quantitative and other systematic techniques that may disguise the limitations of incomplete or poor literature and thereby distort conclusions. The references to this chapter describe several formal approaches to evaluating evidence.

Strength of Evidence

Inevitably, the evidence for some guidelines will be more abundant, consistent, clear, relevant, and methodologically rigorous than the evidence for others. Consequently, guidelines developers should provide some explicit description of the scientific certainty associated with a set of guidelines (Eddy, 1990a–e). The approach recently taken by the U.S. Preventive Services Task Force (1989) was to rank study designs; randomized controlled trials were ranked highest and expert opinion, lowest. However, this unidimensional scheme would rate a poorly executed randomized clinical trial more highly than a carefully done nonrandomized trial, a questionable result in the committee's view. More complex and statistically based techniques may be more accurate, but specific recommendations are beyond the scope of this committee.

One consequence of a thorough and expert assessment of the evidence may be a decision to defer the effort to develop guidelines for the condition or service in question because the evidence is weaker or less conclusive than expected when the effort was initiated. When it is imperative to go ahead with guidelines on a topic, the alternative is to rely more heavily on expert consensus. Even so, the experts may eventually agree that guidelines should be deferred for lack of either evidence or consensus.

Use of Expert Judgment

Expert or group judgment may come into play in guidelines development in two somewhat different but not incompatible ways. First, groups may be used to evaluate and rate scientific evidence with or without the support of quantitative methods such as meta-analysis.
Second, group judgment may be used as the primary basis for a guideline when the scientific evidence is weak or nonexistent. Rather than have expert panels accept a consultant's or other party's review uncritically, the panels should conduct their own careful "review of the review" of the literature. The methods used to arrive at group judgments must be carefully selected and well described (IOM, 1985). For example, if formal votes are taken, a secret, written ballot should be used, insofar as possible, and a record of the results of each round of voting should be maintained. Any departure from a policy of "one person-one vote" must be justified. If a