(European Organisation for Rare Diseases, 2005). Moreover, even though the evidence that is presented in systematic reviews may be comprehensive, it does not necessarily come in a form that is meaningful to doctors. For example, review documents typically summarize treatment effects in terms of relative risk, which does not take into account the prevalence of the disease. They also may not account for the presence of comorbidities. Physicians may prefer to make treatment decisions according to the absolute risks and benefits of treatment (presented as the number of events per 100 patients treated or the number of patients who need to be treated to prevent a single event) (Jackson and Feder, 1998).
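The distinction between relative and absolute measures can be made concrete with a small sketch. The figures below are hypothetical and are not drawn from any study cited in this report; they simply illustrate how the same relative risk reduction can correspond to very different absolute benefits depending on the baseline event rate.

```python
def risk_summary(control_rate, treated_rate):
    """Return relative risk, absolute risk reduction, and number needed to treat."""
    rr = treated_rate / control_rate     # relative risk: ratio of event rates
    arr = control_rate - treated_rate    # absolute risk reduction
    nnt = 1.0 / arr                      # patients treated to prevent one event
    return rr, arr, nnt

# Hypothetical common condition: 10 events per 100 untreated vs. 5 per 100 treated.
rr, arr, nnt = risk_summary(0.10, 0.05)

# Hypothetical rarer condition with the same 50% relative risk reduction:
# 1 event per 100 untreated vs. 0.5 per 100 treated.
rr2, arr2, nnt2 = risk_summary(0.01, 0.005)
```

In both scenarios the relative risk is 0.5, but the number needed to treat rises from about 20 to about 200 as the condition becomes rarer, which is exactly the information that a relative-risk summary alone obscures.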
Consumers also have unmet information needs. Direct-to-consumer advertising encourages greater spending on prescription drugs, which may potentially avert the underuse of medication but which may also promote medication overuse (Donohue et al., 2007). Consumers need to know when claims are valid and apply to them and when the claims are exaggerated or irrelevant to their needs. Physicians must be prepared to respond to consumer requests for information on heavily marketed prescription drugs and other clinical services, and they are also the target of aggressive sales efforts by pharmaceutical representatives (Angell, 2004).
The organizations that provide systematic reviews and clinical guidelines use different grading systems to characterize the quality of evidence and the strength of recommendations. These systems fall primarily into four categories: letters only (e.g., A, B, and C), Roman numerals only (e.g., I, II, and III), mixed letters and numerals (e.g., Ia, Ib, and IIa), and terms (e.g., strong and weak or consistent and inconsistent) (Schünemann et al., 2003). The discrepancies among grading systems cause difficulties for end users, who must decipher and remember what each of the various designations means. AHRQ identified more than 100 scales, checklists, and other instruments used to rate the quality of individual studies and the strength of bodies of evidence (AHRQ, 2002).
Although systematic reviews are by definition supposed to use scientific methods to synthesize the available evidence, the organizations that produce these syntheses do not always make their processes and deliberations public and transparent. Few organizations rely on an externally reviewed protocol to conduct their reviews. Consequently, the steps taken to address some of the difficult—often very subjective—elements of the synthesis process, such as the basis for including or excluding particular