As applied to evidence reports and recommendation statements, scientific rigor implies that research methods minimize bias, that the results are reliable and valid, and that both the methods used and all results are completely reported. Methods for systematically reviewing evidence on effectiveness have been developed, and these methods are themselves evidence based (i.e., evidence has shown that failure to adhere to them can result in invalid or biased findings) (Higgins and Green, 2006; Moher et al., 1999; Stroup et al., 2000). However, as noted earlier, considerable evidence indicates that many systematic reviews do not meet scientific standards (Gøtzsche et al., 2007; Moher et al., 2007). Particularly worrisome is the lack of attention to the quality and scientific rigor of the studies included in a review. Publication in a high-impact journal, unfortunately, does not guarantee that the methods used in a study were sound (Steinberg and Luce, 2005). Less is known about bias-free processes for translating evidence into clinical recommendations.
In the present context, transparency refers to the use of clear, unambiguous language to convey scientific results and conclusions, giving the reader the ability to link judgments, decisions, or actions directly to the information on which they are based. Different entities frequently review the same published evidence yet arrive at different conclusions about an intervention's safety and effectiveness, and it is important to be able to identify possible explanations for such discrepancies. Methods should therefore be explicitly defined, consistently applied, and available for public review. There is extensive evidence that most systematic reviews do not adhere to a transparent and documented set of standards (Bhandari et al., 2001; Delaney et al., 2005; Glenny et al., 2003; Hayden et al., 2006; Jadad and McQuay, 1996; Jadad et al., 2000; Mallen et al., 2006; Moher et al., 2007; Whiting et al., 2005). This undermines the public’s confidence in the integrity of the process.
Reporting standards provide transparency by requiring that the methods used to conduct a review be described in sufficient detail to permit replication of the results. In 1999 and 2000, the QUOROM (Quality of Reporting of Meta-analyses) and MOOSE (Meta-analysis Of Observational Studies in Epidemiology) reporting standards were published to improve the quality of meta-analyses (Moher et al., 1999; Stroup et al., 2000), although neither set of standards has become widely adopted (Moher et al., 2007). CONSORT (Consolidated Standards of Reporting Trials) has simplified the task of summarizing evidence from randomized controlled trials.