Appendix D

Reporting Guidelines

When publishing the results of omics research or a study to evaluate an omics-based test, authors should clearly report the methods and results. Comprehensive reporting allows independent scientists and other stakeholders (journal editors, reviewers, clinicians, funding agencies, etc.) to assess the strength of a study and the findings. Inadequate reporting may impair the ability of the field to determine whether research findings are reproducible, accurate, and well founded. In addition, scientific research depends on understanding and building on previous findings. Thus, incomplete reporting can impede future research and new scientific discovery.

Evidence shows that many published biomarker research studies inadequately document important aspects of the scientific process. Brundage et al. (2002) noted that some articles do not provide the level of detail necessary to understand what methodologies were used and to understand important variations that arose, further complicating interpretations in the scientific literature. Articles can also fail to adequately address statistical analyses and ineffectively describe survival statistics (Riley et al., 2003). Missing information on covariate data, which was observed in 81 of 100 articles in one review, has the potential to introduce bias and interfere with the development of lead prognostic models (Burton and Altman, 2004).

In recent years, a number of international, multidisciplinary groups have developed reporting guidelines to improve the quality of published health research studies. A working definition of a reporting guideline is: “A checklist, flow diagram, or explicit text to guide authors in reporting a specific type of research, developed using explicit methodology” (Moher et al., 2010b). A seminal reporting guideline is the Consolidated Standards



of Reporting Trials (CONSORT), first published in 1996 and updated in 2001 and 2010, which provides a checklist and flow diagram intended to improve the reporting of randomized controlled trials (RCTs) (Moher et al., 2010a). Other familiar reporting guidelines include

• STARD (STAndards for the Reporting of Diagnostic accuracy studies)
• STROBE (STrengthening the Reporting of OBservational studies in Epidemiology)
• REMARK (REporting recommendations for tumor MARKer prognostic studies)
• MIAME (Minimum Information About a Microarray Experiment)
• PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
• BRISQ (Biospecimen Reporting for Improved Study Quality)

Reporting guidelines do not prescribe the methods used in the conduct of research; they focus on describing the methods used in a specific study, and a well-reported study is not necessarily a high-quality study. The rationale, however, is that by eliciting transparent and thorough descriptions of the methods employed and the results obtained, reporting guidelines can help investigators assess the quality of a study.

THE NEED FOR MULTIPLE REPORTING GUIDELINES IN OMICS

In the 2007 Institute of Medicine (IOM) report Cancer Biomarkers: The Promises and Challenges of Improving Detection and Treatment, the committee noted that different sets of reporting guidelines would need to be developed depending on the technologies involved in a study, whether a single biomarker or panels or patterns of biomarkers is being investigated, and the intended applications of the study. Table D-1 lists several reporting guidelines relevant to omics-based research. In addition, in June 2011, a group of 20 methodologists, clinicians, and journal editors from around the world convened to develop reporting guidelines for studies that develop and/or validate multivariable prediction models.
The intended output of this meeting included the publication of a reporting guideline checklist that describes the guideline development process, as well as a longer explanatory publication that mirrors those produced for other reporting guidelines (Collins, 2011). Box D-1 presents the REMARK guideline as an example of a reporting guideline checklist. REMARK, first published in 2005, was developed to address a number of inadequacies of reporting that are prevalent in tumor marker prognostic studies (McShane et al., 2005).

Altman et al. (2012a,b) expand upon the REMARK checklist to improve its use and effectiveness by clarifying the intent of each item and why the information is important to report. Each checklist item is explained and accompanied by published examples of good reporting, as well as relevant empirical evidence on the quality of reporting. The explanation and elaboration document highlights the REMARK profile, a suggested tabular format for summarizing key study details, and serves as a reference for the many issues to consider when designing, conducting, and analyzing tumor marker studies and prognostic studies in medicine in general.1

The current patchwork of reporting guidelines relevant to omics-based research may leave a number of gaps and overlaps in coverage (IOM, 2007). The 2007 IOM biomarker committee concluded that efforts to create reporting guidelines have been piecemeal and primarily nonbinding, involving a number of professional organizations and other groups. To facilitate the development of coherent guidelines, the committee called on government agencies to work together with the pharmaceutical and diagnostic industries, academia, and health care payors. The committee concluded: “[T]here is a great need for a coherent strategy to make the biomarker development and adoption process more transparent, to remove inconsistency and uncertainty, and to elevate the standards and oversight applied to biomarker tests. No federal agency currently takes responsibility for ensuring the clinical validity of biomarkers, but oversight and ownership of the process are key to developing strategies and making effective and efficient progress in the field” (IOM, 2007, p. 83).
To provide leadership to this effort, the committee strongly urged an appropriate federal agency to coordinate and oversee interagency efforts in the development of standards and guidelines, and it suggested that the National Institute of Standards and Technology may be an option. No government agency has assumed this leadership role. However, in 2006, the National Knowledge Service of the U.K. National Health Service established a network called Enhancing the QUAlity and Transparency Of health Research (EQUATOR) to foster coordination and collaboration in the development of reporting guidelines (EQUATOR Network, 2011) (see Box D-2). Similarly, a multidisciplinary group developed the MIBBI Project (Minimum Information for Biological and Biomedical Investigations) in 2008 to harmonize the development of minimum information checklists for biological and biomedical investigations.

1 Personal communication, Lisa McShane, National Cancer Institute, January 9, 2012.

TABLE D-1 Reporting Standards Used in Omics Research

REMARK (2005)
• Study type: Tumor marker prognostic studies
• Structure: A 20-item checklist with the headings Introduction, Materials and Methods, Results, and Discussion
• Adoption by journals: Mentioned in instructions to authors from Clinical Cancer Research, Breast Cancer Research and Treatment, Journal of Clinical Oncology, Journal of the National Cancer Institute, and Journal of Pathology

CONSORT (2001; updated in 2010)
• Study type: Randomized controlled trials
• Structure: A 25-item checklist with the headings Title, Abstract, Introduction, Methods, Results, Discussion, and Other Information; a flow diagram depicting the passage of participants through a trial (enrollment, intervention allocation, follow-up, and analysis)
• Adoption by journals: CONSORT is endorsed by more than 50 percent of the core medical journals listed in the Abridged Index Medicus on PubMed

MIAME (2001)
• Study type: Microarray-based gene expression experiments
• Structure: Six critical elements:
  1. Experimental design: the set of hybridization experiments as a whole
  2. Array design: each array used and each element (spot, feature) on the array
  3. Samples: samples used, extract preparation, and labeling
  4. Hybridizations: procedures and parameters
  5. Measurements: images, quantification, and specifications
  6. Normalization controls: types, values, and specifications
• Adoption by journals: More than 50 journals require MIAME-compliant data as a condition for publishing microarray-based papers

BRISQ (2011)
• Study type: Studies that use human biospecimens
• Structure: Three tiers of data elements that should be considered for reporting, if known or applicable:
  – First tier: Items recommended to report (e.g., organ[s] or anatomical site from which the biospecimens were derived)
  – Second tier: Items beneficial to report (e.g., time from biospecimen excision/acquisition to stabilization)
  – Third tier: Additional items to report (e.g., environmental factors to which patients were exposed)
• Adoption by journals: None

STARD (2003)
• Study type: Diagnostic accuracy studies
• Structure: A 25-item checklist with the headings Title/Abstract/Keywords, Introduction, Methods, Results, and Discussion; a flow diagram depicting the method of recruitment of patients or samples, the order of test execution, and the number of patients undergoing the test under evaluation and the reference test
• Adoption by journals: More than 200 biomedical journals encourage the use of the STARD statement in their instructions for authors

SOURCES: Bossuyt et al. (2004); Brazma et al. (2001); McShane et al. (2005); Moore et al. (2011); Schulz et al. (2010).

BOX D-1
Example of a Reporting Guideline Checklist: The REMARK Checklist

INTRODUCTION
1. State the marker examined, the study objectives, and any prespecified hypotheses.

MATERIALS AND METHODS

Patients
2. Describe the characteristics (e.g., disease stage or comorbidities) of the study patients, including their source and inclusion and exclusion criteria.
3. Describe treatments received and how chosen (e.g., randomized or rule based).

Specimen characteristics
4. Describe type of biological material used (including control samples) and methods of preservation and storage.

Assay methods
5. Specify the assay method used and provide (or reference) a detailed protocol, including specific reagents or kits used, quality control procedures, reproducibility assessments, quantitation methods, and scoring and reporting protocols. Specify whether and how assays were performed blinded to the study endpoint.

Study design
6. State the method of case selection, including whether prospective or retrospective and whether stratification or matching (e.g., by stage of disease or age) was used. Specify the time period from which cases were taken, the end of the follow-up period, and the median follow-up time.
7. Precisely define all clinical endpoints examined.
8. List all candidate variables initially examined or considered for inclusion in models.
9. Give rationale for sample size; if the study was designed to detect a specified effect size, give the target power and effect size.

Statistical analysis methods
10. Specify all statistical methods, including details of any variable selection procedures and other model-building issues, how model assumptions were verified, and how missing data were handled.

11. Clarify how marker values were handled in the analyses; if relevant, describe methods used for cutpoint determination.

RESULTS

Data
12. Describe the flow of patients through the study, including the number of patients included in each stage of the analysis (a diagram may be helpful) and reasons for dropout. Specifically, both overall and for each subgroup extensively examined, report the numbers of patients and the number of events.
13. Report distributions of basic demographic characteristics (at least age and sex), standard (disease-specific) prognostic variables, and tumor marker, including numbers of missing values.

Analysis and presentation
14. Show the relation of the marker to standard prognostic variables.
15. Present univariate analyses showing the relation between the marker and outcome, with the estimated effect (e.g., hazard ratio and survival probability). Preferably provide similar analyses for all other variables being analyzed. For the effect of a tumor marker on a time-to-event outcome, a Kaplan–Meier plot is recommended.
16. For key multivariable analyses, report estimated effects (e.g., hazard ratio) with confidence intervals for the marker and, at least for the final model, all other variables in the model.
17. Among reported results, provide estimated effects with confidence intervals from an analysis in which the marker and standard prognostic variables are included, regardless of their statistical significance.
18. If done, report results of further investigations, such as checking assumptions, sensitivity analyses, and internal validation.

DISCUSSION
19. Interpret the results in the context of the prespecified hypotheses and other relevant studies; include a discussion of limitations of the study.
20. Discuss implications for future research and clinical value.

SOURCE: McShane et al. (2005).
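REMARK items 15-17 ask for estimated effects, such as hazard ratios, to be reported with confidence intervals. As a brief illustration of the convention, the sketch below (with made-up numbers, not drawn from any study cited in this appendix) forms a hazard ratio's 95 percent confidence interval on the log scale and then exponentiates, which is the standard practice because the log hazard ratio is approximately normally distributed:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Given a log hazard ratio (beta) and its standard error (se),
    return the hazard ratio with its 95% confidence limits.
    The interval is formed on the log scale, then exponentiated."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical values for illustration only (not from any cited study):
hr, lower, upper = hazard_ratio_ci(beta=0.47, se=0.18)
print(f"HR {hr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
# prints: HR 1.60 (95% CI 1.12-2.28)
```

Risk ratios, such as those quoted later in this appendix from Plint et al. (2006), are conventionally reported with intervals formed the same way on the log scale.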

BOX D-2
The EQUATOR Network

The EQUATOR Network is the most exhaustive source of reporting guidelines, containing an up-to-date list of health research reporting guidelines. Acting as an umbrella organization, the EQUATOR Network convenes researchers, medical journal editors, peer reviewers, developers of reporting guidelines, research funding bodies, and other collaborators with the goals of promoting transparency and accurate reporting of research studies and monitoring progress in health research reporting.

The first project of the Network was to identify all of the available guidelines for the reporting of health research studies and survey the reporting guideline authors to ascertain more information about their development methodology, dissemination and implementation strategies, problems they encountered, and impact of the guidelines. Investigators identified 37 reporting guidelines that met their inclusion criteria, of which they received 30 survey responses (81 percent response rate). Most (73 percent) reporting guidelines were developed by international, multidisciplinary groups, with most groups including statisticians, journal editors, clinicians, and epidemiologists. The most cited reason for developing a reporting guideline was the poor quality of reporting (87 percent), followed by the influence of other guidelines (30 percent). Survey respondents noted a dearth of funding for the development of guidelines, which may be problematic because guidelines will likely require regular updates. Additional observations from the survey suggested that there is a need to harmonize the methodology used to develop reporting guidelines and to determine their impact on the field.

SOURCES: EQUATOR Network, 2011; Simera et al., 2008; Verbeek, 2008.

REPORTING GUIDELINES AND THE QUALITY OF REPORTING

The evidence that the use of reporting guidelines leads to improvements in reporting is limited.
Plint and colleagues (2006) conducted a systematic review to assess whether the CONSORT checklist improves the quality of RCT reporting. This review suggested that journals that adopted CONSORT had significantly better reporting of method of sequence generation (risk ratio 1.67; 95% confidence interval 1.19-2.33); allocation concealment (risk ratio 1.66; 95% confidence interval 1.37-2.00); and overall number of CONSORT items compared with non-adopters (standardized mean difference 0.83; 95% confidence interval 0.46-1.19). CONSORT adoption was not associated with improved reporting of participant flow, blinding of participants, or data analysis. A study by Smith and colleagues (2008) analyzed the quality of reporting and adherence to a modified

CONSORT statement in four nursing journals. Investigators found no difference in quality of reporting among the four journals, and found that the quality of reporting of RCTs improved significantly in only one journal (Nursing Research) after adoption of the CONSORT guideline (Smith et al., 2008).

Investigators have also studied whether diagnostic accuracy reporting improved after the publication of the STARD reporting guidelines, and found that the mean number of reported STARD items was 11.9 (range 3.5-19.5) in 2000, before publication of STARD, and 13.6 (range 4.0-21.0) in 2004, after the publication of STARD. Investigators reported that this was a significant increase of 1.81 items (95% confidence interval 0.61-3.01) (Smidt et al., 2006). However, a more recent analysis found that the quality of reporting of diagnostic accuracy studies has remained relatively constant (Wilczynski, 2008). This analysis searched six journals adhering to STARD guidelines and six journals that did not for the years 2001, 2002, 2004, and 2005. The change in the mean total score of a modified STARD checklist (which included 13 of 25 STARD items) was assessed using analysis of covariance. The analysis found that the interaction between the two independent factors (STARD or non-STARD journal and year of publication) had no significant effect on the dependent variable (mean total STARD score) (F = 0.664, df = 3, partial eta-squared = 0.009, p = 0.58).

In the IOM report Finding What Works in Health Care: Standards for Systematic Reviews (2011), the committee recommended three related standards for documenting a systematic review process, drawing largely on the PRISMA reporting guideline. However, the committee emphasized that the evidence that reporting guidelines improve the quality of reporting is weak.
The committee noted that the few observational studies that have evaluated this question have serious flaws, and that no controlled studies have been undertaken to assess whether PRISMA has improved the reporting of systematic reviews (IOM, 2011).

Adherence to Reporting Guidelines

There is little agreement on who should be responsible for monitoring researchers' adherence to reporting guidelines and on whether adherence should be voluntary or mandatory (Vandenbroucke, 2009). The developers of the SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines have encouraged studying the implementation of reporting guidelines, stating that "the question of how publication guidelines can be used most effectively appears to us to be an empirical one, and therefore we strongly encourage editors and authors to collect, analyze, and report their experiences" (Davidoff et al., 2008, p. 675).

The 2007 IOM biomarkers committee recommended that federal funding should stipulate that researchers comply with reporting guidelines. Other possible monitors of adherence include journal editors or peer reviewers.

The International Committee of Medical Journal Editors (ICMJE) is an organization of medical journal editors (including those of Annals of Internal Medicine, the New England Journal of Medicine, the Journal of the American Medical Association, and others) who collaborate to produce and update the Uniform Requirements for Manuscripts (ICMJE, 2011). The ICMJE encourages authors to "consult reporting guidelines relevant to their specific research design," noting that "[r]esearch reports frequently omit important information" that reporting guidelines are meant to correct. It refers authors to the EQUATOR Network to find applicable guidelines (ICMJE, 2010).

However, the extent to which journals either encourage or mandate adherence to guidelines varies. An individual journal's policies can be found in its instructions to authors. For example, Science's instructions to authors state, "We encourage compliance with MIBBI guidelines," but do not mandate their use (Science, 2011). The British Medical Journal's (BMJ's) instructions explicitly list the reporting guidelines that its authors must follow and note that authors must submit accompanying checklists and flowcharts in compliance with the guidelines. Its justification for this policy is "to improve BMJ papers' reporting and increase reviewers' understanding" of the elements that must be included in a report of a health research study (BMJ, 2011). In describing journals' roles in publication at a public IOM committee meeting, Dr. Veronique Kiermer of Nature said, "We cannot necessarily dictate how people do research, but we can dictate how they report it, or at least we can play an important role there. Ensuring proper document[ation], based on community-driven standards and guidelines, is very important" (Kiermer, 2011).

REFERENCES

Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012a. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. BMC Medicine 10:51.
Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012b. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. PLoS Medicine 9(5):e1001216.
BMJ (British Medical Journal). 2011. Article Requirements. http://resources.bmj.com/bmj/authors/article-submission/article-requirements (accessed May 25, 2011).
Bossuyt, P. M., J. B. Reitsma, D. E. Bruns, C. A. Gatsonis, P. P. Glasziou, L. M. Irwig, J. G. Lijmer, D. Moher, D. Rennie, and H. C. de Vet. 2004. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative. Family Practice 21(1):4-10.

Brazma, A., P. Hingamp, J. Quackenbush, G. Sherlock, P. Spellman, C. Stoeckert, J. Aach, W. Ansorge, C. A. Ball, H. C. Causton, T. Gaasterland, P. Glenisson, F. C. P. Holstege, I. F. Kim, V. Markowitz, J. C. Matese, H. Parkinson, A. Robinson, U. Sarkans, S. Schulze-Kremer, J. Stewart, R. Taylor, J. Vilo, and M. Vingron. 2001. Minimum information about a microarray experiment (MIAME)—toward standards for microarray data. Nature Genetics 29(4):365-371.
Brundage, M. D., D. Davies, and W. J. Mackillop. 2002. Prognostic factors in non-small cell lung cancer: A decade of progress. Chest 122(3):1037-1057.
Burton, A., and D. G. Altman. 2004. Missing covariate data within cancer prognostic studies: A review of current reporting and proposed guidelines. British Journal of Cancer 91(1):4-8.
Collins, G. 2011. Consensus-based guidelines for transparent reporting. BMJ. http://blogs.bmj.com/bmj/2011/08/03/gary-collins-opening-up-multivariable-prediction-models/ (accessed December 15, 2011).
Davidoff, F., P. Batalden, D. Stevens, G. Ogrinc, S. Mooney, and SQUIRE Development Group. 2008. Publication guidelines for improvement studies in health care: Evolution of the SQUIRE project. Annals of Internal Medicine 149(9):670-676.
EQUATOR Network. 2011. Welcome to the EQUATOR Network Website—The Resource Centre for Good Reporting of Health Research Studies. http://www.equator-network.org/home/ (accessed May 19, 2011).
ICMJE (International Committee of Medical Journal Editors). 2010. Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication. Updated April 2010. http://www.icmje.org/urm_full.pdf (accessed May 25, 2011).
ICMJE. 2011. About the International Committee of Medical Journal Editors. http://www.icmje.org/about.html (accessed May 25, 2011).
IOM (Institute of Medicine). 2007. Cancer Biomarkers: The Promises and Challenges of Improving Detection and Treatment.
Washington, DC: The National Academies Press.
IOM. 2011. Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press.
Kiermer, V. 2011. Publication of Research Involving Large Datasets and "Omics" Technologies. Discussion at a Workshop on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, March 30, Washington, DC.
McShane, L. M., D. G. Altman, W. Sauerbrei, S. E. Taube, M. Gion, and G. M. Clark. 2005. REporting recommendations for tumor MARKer prognostic studies (REMARK). Journal of the National Cancer Institute 97(16):1180-1184.
Moher, D., S. Hopewell, K. F. Schulz, V. Montori, P. C. Gotzsche, P. J. Devereaux, D. Elbourne, M. Egger, and D. G. Altman. 2010a. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology 63(8):e1-e37.
Moher, D., K. Schulz, I. Simera, and D. G. Altman. 2010b. Guidance from developers of health research reporting guidelines. PLoS Medicine 7(2):e1000217.
Moore, H. M., A. B. Kelly, S. D. Jewell, L. M. McShane, D. P. Clark, R. Greenspan, D. F. Hayes, P. Hainaut, P. Kim, E. A. Mansfield, O. Potapova, P. Riegman, Y. Rubinstein, E. Seijo, S. Somiari, P. Watson, H.-U. Weier, C. Zhu, and J. Vaught. 2011. Biospecimen Reporting for Improved Study Quality (BRISQ). Cancer Cytopathology 119(2):92-101.
Plint, A. C., D. Moher, A. Morrison, K. F. Schulz, D. G. Altman, C. Hill, and I. Gaboury. 2006. Does the CONSORT checklist improve the quality of reports of randomized controlled trials: A systematic review. Medical Journal of Australia 185(5):263-267.
Riley, R. D., K. R. Abrams, A. J. Sutton, P. C. Lambert, D. R. Jones, D. Heney, and S. A. Burchill. 2003. Reporting of prognostic markers: Current problems and development of guidelines for evidence-based practice in the future. British Journal of Cancer 88(8):1191-1198.

Schulz, K. F., D. G. Altman, and D. Moher. 2010. CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials. PLoS Medicine 7(3):e1000251.
Science. 2011. General Information for Authors. http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml (accessed May 25, 2011).
Simera, I., D. G. Altman, D. Moher, K. F. Schulz, and J. Hoey. 2008. Guidelines for reporting health research: The EQUATOR Network's survey of guideline authors. PLoS Medicine 5(6):e139.
Smidt, N., A. W. Rutjes, D. A. van der Windt, R. W. Ostelo, P. M. Bossuyt, J. B. Reitsma, L. M. Bouter, and H. C. de Vet. 2006. The quality of diagnostic accuracy studies since the STARD statement: Has it improved? Neurology 67(5):792-797.
Smith, B. A., H. J. Lee, J. H. Lee, M. Choi, D. E. Jones, R. B. Bausell, and M. E. Broome. 2008. Quality of reporting randomized controlled trials (RCTs) in the nursing literature: Application of the consolidated standards of reporting trials (CONSORT). Nursing Outlook 56(1):31-37.
Vandenbroucke, J. P. 2009. STREGA, STROBE, STARD, SQUIRE, MOOSE, PRISMA, GNOSIS, TREND, ORION, COREQ, QUOROM, REMARK, and CONSORT: For whom does the guideline toll? Journal of Clinical Epidemiology 62(6):594-596.
Verbeek, J. 2008. MOOSE CONSORT STROBE and MIAME STARD REMARK or how can we improve the quality of reporting studies. Scandinavian Journal of Work, Environment, and Health 34(3):165-167.
Wilczynski, N. L. 2008. Quality of reporting of diagnostic accuracy studies: No change since STARD statement publication—before-and-after study. Radiology 248(3):817-823.