3
Standards for Finding and Assessing Individual Studies

Abstract: This chapter addresses the identification, screening, data collection, and appraisal of the individual studies that make up a systematic review’s (SR’s) body of evidence. The committee recommends six related standards. The search should be comprehensive and include both published and unpublished research. The potential for bias to enter the selection process is significant and well documented. Without appropriate measures to counter the biased reporting of primary evidence from clinical trials and observational studies, SRs will reflect and possibly exacerbate existing distortions in the biomedical literature. The review team should document the search process and keep track of the decisions that are made for each article. Quality assurance and control are critical during data collection and extraction because of the substantial potential for errors. At least two review team members, working independently, should screen and select studies and extract quantitative and other critical data from included studies. Each eligible study should be systematically appraised for risk of bias; relevance to the study’s populations, interventions, and outcome measures; and fidelity of the implementation of the interventions.

The search for evidence and critical assessment of the individual studies identified are the core of a systematic review (SR).





These SR steps require meticulous execution and documentation to minimize the risk of a biased synthesis of evidence. Current practice falls short of recommended guidance and thus results in a meaningful proportion of reviews that are of poor quality (Golder et al., 2008; Moher et al., 2007a; Yoshii et al., 2009). An extensive literature documents that many SRs provide scant, if any, documentation of their search and screening methods. SRs often fail to acknowledge or address the risk of reporting biases, neglect to appraise the quality of individual studies included in the review, and are subject to errors during data extraction and the meta-analysis (Cooper et al., 2006; Delaney et al., 2007; Edwards et al., 2002; Golder et al., 2008; Gøtzsche et al., 2007; Horton et al., 2010; Jones et al., 2005; Lundh et al., 2009; Moher et al., 2007a; Roundtree et al., 2008; Tramer et al., 1997).

The conduct of the search for and selection of evidence may have serious implications for patients’ and clinicians’ decisions. An SR might lead to the wrong conclusions and, ultimately, the wrong clinical recommendations, if relevant data are missed, errors are uncorrected, or unreliable research is used (Dickersin, 1990; Dwan et al., 2008; Glanville et al., 2006; Gluud, 2006; Kirkham et al., 2010; Turner et al., 2008).

In this chapter, the committee recommends methodological standards for the steps involved in identifying and assessing the individual studies that make up an SR’s body of evidence: planning and conducting the search for studies, screening and selecting studies, managing data collection from eligible studies, and assessing the quality of individual studies. The committee focused on steps to minimize bias and to promote scientifically rigorous SRs based on evidence (when available), expert guidance, and thoughtful reasoning. The recommended standards set a high bar that will be challenging for many SR teams. However, the available evidence does not suggest that it is safe to cut corners if resources are limited. These best practices should be thoughtfully considered by anyone conducting an SR. It is especially important that the SR is transparent in reporting what methods were used and why.

Each standard consists of two parts: first, a brief statement describing the related SR step and, second, one or more elements of performance that are fundamental to carrying out the step. Box 3-1 lists all of the chapter’s recommended standards.

Note that, as throughout this report, the chapter’s references to “expert guidance” refer to the published methodological advice of the Agency for Healthcare Research and Quality (AHRQ) Effective Health Care Program, the Centre for Reviews and Dissemination (CRD) (University of York), and the Cochrane Collaboration.

Appendix E contains a detailed summary of expert guidance on this chapter’s topics.

THE SEARCH PROCESS

When healthcare decision makers turn to SRs to learn the potential benefits and harms of alternative health care therapies, it is with the expectation that the SR will provide a complete picture of all that is known about an intervention. Research is relevant to individual decision making, whether it reveals benefits, harms, or lack of effectiveness of a health intervention. Thus, the overarching objective of the SR search for evidence is to identify all the studies (and all the relevant data from the studies) that may pertain to the research question and analytic framework.

The task is a challenging one. Hundreds of thousands of research articles are indexed in bibliographic databases each year. Yet despite the enormous volume of published research, a substantial proportion of effectiveness data are never published or are not easy to access. For example, approximately 50 percent of studies appearing as conference abstracts are never fully published (Scherer et al., 2007), and some studies are not even reported as conference abstracts. Even when there are published reports of effectiveness studies, the studies often report only a subset of the relevant data. Furthermore, it is well documented that the data reported may not represent all the findings on an intervention’s effectiveness because of pervasive reporting bias in the biomedical literature. Moreover, crucial information from the studies is often difficult to locate because it is kept in researchers’ files, government agency records, or manufacturers’ proprietary records.

The following overview further describes the context for the SR search process: the nature of the reporting bias in the biomedical literature; key sources of information on comparative effectiveness; and expert guidance on how to plan and conduct the search. The committee’s related standards are presented at the end of the section.

Planning the Search

The search strategy should be an integral component of the research protocol¹ that specifies procedures for finding the evidence directly relevant to the SR. Items described in the protocol include,

¹ See Chapter 2 for the committee’s recommended standards for establishing the research protocol.

BOX 3-1
Recommended Standards for Finding and Assessing Individual Studies

Standard 3.1 Conduct a comprehensive systematic search for evidence
Required elements:
3.1.1 Work with a librarian or other information specialist trained in performing systematic reviews (SRs) to plan the search strategy
3.1.2 Design the search strategy to address each key research question
3.1.3 Use an independent librarian or other information specialist to peer review the search strategy
3.1.4 Search bibliographic databases
3.1.5 Search citation indexes
3.1.6 Search literature cited by eligible studies
3.1.7 Update the search at intervals appropriate to the pace of generation of new information for the research question being addressed
3.1.8 Search subject-specific databases if other databases are unlikely to provide all relevant evidence
3.1.9 Search regional bibliographic databases if other databases are unlikely to provide all relevant evidence

Standard 3.2 Take action to address potentially biased reporting of research results
Required elements:
3.2.1 Search grey-literature databases, clinical trial registries, and other sources of unpublished information about studies
3.2.2 Invite researchers to clarify information related to study eligibility, study characteristics, and risk of bias
3.2.3 Invite all study sponsors to submit unpublished data, including unreported outcomes, for possible inclusion in the systematic review
3.2.4 Handsearch selected journals and conference abstracts
3.2.5 Conduct a web search
3.2.6 Search for studies reported in languages other than English if appropriate

Standard 3.3 Screen and select studies
Required elements:
3.3.1 Include or exclude studies based on the protocol’s prespecified criteria

3.3.2 Use observational studies in addition to randomized clinical trials to evaluate harms of interventions
3.3.3 Use two or more members of the review team, working independently, to screen and select studies
3.3.4 Train screeners using written documentation; test and retest screeners to improve accuracy and consistency
3.3.5 Use one of two strategies to select studies: (1) read all full-text articles identified in the search or (2) screen titles and abstracts of all articles and then read the full text of articles identified in initial screening
3.3.6 Taking account of the risk of bias, consider using observational studies to address gaps in the evidence from randomized clinical trials on the benefits of interventions

Standard 3.4 Document the search
Required elements:
3.4.1 Provide a line-by-line description of the search strategy, including the date of every search for each database, web browser, etc.
3.4.2 Document the disposition of each report identified, including reasons for their exclusion if appropriate

Standard 3.5 Manage data collection
Required elements:
3.5.1 At a minimum, use two or more researchers, working independently, to extract quantitative and other critical data from each study. For other types of data, one individual could extract the data while the second individual independently checks for accuracy and completeness. Establish a fair procedure for resolving discrepancies—do not simply give final decision-making power to the senior reviewer
3.5.2 Link publications from the same study to avoid including data from the same study more than once
3.5.3 Use standard data extraction forms developed for the specific systematic review
3.5.4 Pilot-test the data extraction forms and process

Standard 3.6 Critically appraise each study
Required elements:
3.6.1 Systematically assess the risk of bias, using predefined criteria
3.6.2 Assess the relevance of the study’s populations, interventions, and outcome measures
3.6.3 Assess the fidelity of the implementation of interventions
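The dual, independent extraction in Standard 3.5.1 implies a reconciliation step before analysis. The following is a minimal sketch of such a step in Python; the record fields and values are hypothetical, invented for illustration rather than taken from the standards.

```python
def find_discrepancies(extraction_a, extraction_b):
    """Compare two reviewers' independently extracted records for one study
    and return the fields on which they disagree, for later adjudication."""
    fields = set(extraction_a) | set(extraction_b)
    return {f: (extraction_a.get(f), extraction_b.get(f))
            for f in fields
            if extraction_a.get(f) != extraction_b.get(f)}

# Hypothetical records for the same trial, extracted independently.
reviewer_a = {"n_randomized": 240, "primary_outcome": "mortality", "effect": 0.82}
reviewer_b = {"n_randomized": 240, "primary_outcome": "mortality", "effect": 0.85}

print(find_discrepancies(reviewer_a, reviewer_b))  # {'effect': (0.82, 0.85)}
```

Each flagged field would then be resolved by the fair, prespecified procedure that element 3.5.1 calls for, rather than by deferring to the senior reviewer.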

but are not limited to, the study question; the criteria for a study’s inclusion in the review (including language and year of report, publication status, and study design restrictions, if any); the databases, journals, and other sources to be searched for evidence; and the search strategy (e.g., sequence of database thesaurus terms, text words, methods of handsearching).

Expertise in Searching

A librarian or other qualified information specialist with training or experience in conducting SRs should work with the SR team to design the search strategy to ensure appropriate translation of the research question into search concepts, correct choice of Boolean operators and line numbers, appropriate translation of the search strategy for each database, relevant subject headings, and appropriate application and spelling of terms (Sampson and McGowan, 2006). The Cochrane Collaboration includes an Information Retrieval Methods Group² that provides a valuable resource for information specialists seeking a professional group with learning opportunities.

² For more information on the Cochrane Information Retrieval Methods Group, go to http://irmg.cochrane.org/.

Expert guidance recommends that an experienced librarian or information specialist with training in SR search methods should also be involved in performing the search (CRD, 2009; Lefebvre et al., 2008; McGowan and Sampson, 2005; Relevo and Balshem, 2011). Navigating through the various sources of research data and publications is a complex task that requires experience with a wide range of bibliographic databases and electronic information sources, and substantial resources (CRD, 2009; Lefebvre et al., 2008; Relevo and Balshem, 2011).

Ensuring an Accurate Search

An analysis of SRs published in the Cochrane Database of Systematic Reviews found that 90.5 percent of the MEDLINE searches contained at least one search error (Sampson and McGowan, 2006). Errors included spelling errors, the omission of spelling variants and truncations, the use of incorrect Boolean operators and line numbers, inadequate translation of the search strategy for different databases,
misuse of MeSH³ and free-text terms, unwarranted explosion of MeSH terms, and redundancy in search terms. Common sense suggests that these errors affect the accuracy and overall quality of SRs.

AHRQ and CRD SR experts recommend peer review of the electronic search strategy to identify and prevent these errors from occurring (CRD, 2009; Relevo and Balshem, 2011). The peer reviewer should be independent from the review team in order to provide an unbiased and scientifically rigorous review, and should have expertise in information retrieval and SRs. In addition, the peer review process should take place prior to the search process, rather than in conjunction with the peer review of the final report, because the search process will provide the data that are synthesized and analyzed in the SR.

Sampson and colleagues (2009) recently surveyed individuals experienced in SR searching and identified aspects of the search process that experts agree are likely to have a large impact on the sensitivity and precision of a search: accurate translation of each research question into search concepts; correct choice of Boolean and proximity operators; absence of spelling errors; correct line numbers and combination of line numbers; accurate adaptation of the search strategy for each database; and inclusion of relevant subject headings. They then developed practice guidelines for peer review of electronic search strategies. For example, to identify spelling errors in the search they recommended that long strings of terms be broken into discrete search statements in order to make null or misspelled terms more obvious and easier to detect. They also recommended cutting and pasting the search into a spell checker. As these guidelines and others are implemented, future research needs to be conducted to validate that peer review does improve the search quality.
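Part of this checking can be automated in the spirit of the guideline above: break a long OR-ed string into discrete terms so that null or misspelled terms stand out, then flag terms absent from a reference vocabulary. The Python sketch below is illustrative only; its small vocabulary set is a toy stand-in for a real spell checker or database thesaurus.

```python
def split_statements(search_line):
    """Break a long OR-ed search string into discrete terms so that
    null or misspelled terms are easier to spot."""
    return [t.strip() for t in search_line.split(" OR ") if t.strip()]

def flag_unknown_terms(terms, vocabulary):
    """Flag terms absent from a reference vocabulary (here a toy set;
    in practice a spell checker or the database's own thesaurus)."""
    return [t for t in terms if t.lower() not in vocabulary]

vocabulary = {"reboxetine", "fluoxetine", "antidepressant"}
line = "reboxetine OR reboxitine OR antidepressant"
print(flag_unknown_terms(split_statements(line), vocabulary))  # ['reboxitine']
```

A peer reviewer would still need to judge Boolean logic, truncation, and database-specific syntax by hand; this sketch only addresses the spelling-error class of problems.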
³ MeSH (Medical Subject Headings) is the National Library of Medicine’s controlled vocabulary thesaurus.

Reporting Bias

Reporting biases (Song et al., 2010), particularly publication bias (Dickersin, 1990; Hopewell et al., 2009a) and selective reporting of trial outcomes and analyses (Chan et al., 2004a, 2004b; Dwan et al., 2008; Gluud, 2006; Hopewell et al., 2008; Turner et al., 2008; Vedula et al., 2009), present the greatest obstacle to obtaining a complete collection of relevant information on the effectiveness of healthcare interventions. Reporting biases have been identified across many health fields and interventions, including treatment, prevention, and diagnosis. For example, McGauran and colleagues (2010) identified
instances of reporting bias spanning 40 indications and 50 different pharmacological, surgical, diagnostic, and preventive interventions and selective reporting of study data as well as efforts by manufacturers to suppress publication. Furthermore, the potential for reporting bias exists across the entire research continuum—from before completion of the study (e.g., investigators’ decisions to register a trial or to report only a selection of trial outcomes), to reporting in conference abstracts, selection of a journal for submission, and submission of the manuscript to a journal or other resource, to editorial review and acceptance. The following describes the various ways in which reporting of research findings may be biased. Table 3-1 provides definitions of the types of reporting biases.

TABLE 3-1 Types of Reporting Biases

Publication bias: The publication or nonpublication of research findings, depending on the nature and direction of the results
Selective outcome reporting bias: The selective reporting of some outcomes but not others, depending on the nature and direction of the results
Time-lag bias: The rapid or delayed publication of research findings, depending on the nature and direction of the results
Location bias: The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of results
Language bias: The publication of research findings in a particular language, depending on the nature and direction of the results
Multiple (duplicate) publications: The multiple or singular publication of research findings, depending on the nature and direction of the results
Citation bias: The citation or noncitation of research findings, depending on the nature and direction of the results

SOURCE: Sterne et al. (2008).

Publication Bias

The term publication bias refers to the likelihood that publication of research findings depends on the nature and direction of
a study’s results. More than two decades of research have shown that positive findings are more likely to be published than null or negative results. At least four SRs have assessed the association between study results and publication of findings (Song et al., 2009). These investigations plus additional individual studies indicate a strong association between statistically significant or positive results and likelihood of publication (Dickersin and Chalmers, 2010).

Investigators (not journal editors) are believed to be the major reason for failure to publish research findings (Dickersin and Min, 1993; Dickersin et al., 1992). Studies examining the influence of editors on acceptance of submitted manuscripts have not found an association between results and publication (Dickersin et al., 2007; Lynch et al., 2007; Okike et al., 2008; Olson et al., 2002).

Selective Outcome Reporting Bias

To avert problems introduced by post hoc selection of study outcomes, a randomized controlled trial’s (RCT’s) primary outcome should be stated in the research protocol a priori, before the study begins (Kirkham et al., 2010). Statistical testing of the effect of an intervention on multiple possible outcomes in a study can lead to a greater probability of statistically significant results obtained by chance. When primary or other outcomes of a study are selected and reported post hoc (i.e., after statistical testing), the reader should be aware that the published results for the “primary outcome” may be only a subset of relevant findings, and may be selectively reported because they are statistically significant.

Outcome reporting bias refers to the selective reporting of some outcomes but not others because of the nature and direction of the results. This can happen when investigators rely on hypothesis testing to prioritize research based on the statistical significance of an association. In the extreme, if only positive outcomes are selectively reported, we would not know that an intervention is ineffective for an important outcome, even if it had been tested frequently (Chan and Altman, 2005; Chan et al., 2004a,b; Dwan et al., 2008; Turner et al., 2008; Vedula et al., 2009).

Recent research on selective outcome reporting bias has focused on industry-funded trials, in part because internal company documents may be available, and in part because of evidence of biased reporting that favors their test interventions (Golder and Loke, 2008; Jorgensen et al., 2008; Lexchin et al., 2003; Nassir Ghaemi et al., 2008; Ross et al., 2009; Sismondo, 2008; Vedula et al., 2009).
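The multiple-testing problem described above is easy to quantify: if k independent outcomes are each tested at significance level α, the chance that at least one reaches significance by chance alone is 1 − (1 − α)^k. A quick Python check:

```python
def prob_false_positive(k, alpha=0.05):
    """Probability of at least one spuriously 'significant' result when
    k independent outcomes are each tested at level alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 10, 20):
    print(f"{k:>2} outcomes: {prob_false_positive(k):.2f}")
# With 10 outcomes at alpha = 0.05 the chance is roughly 0.40,
# and with 20 outcomes roughly 0.64.
```

This is why a post hoc choice of “primary outcome” after testing many candidates so easily produces a statistically significant, but spurious, published result.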

Mathieu and colleagues (2009) found substantial evidence of selective outcome reporting. The researchers reviewed 323 RCTs with results published in high-impact journals in 2008. They found that only 147 had been registered before the end of the trial with the primary outcome specified. Of these 147, 46 (31 percent) were published with different primary outcomes than were registered, with 22 introducing a new primary outcome. In 23 of the 46 discrepancies, the influence of the discrepancy could not be determined. Among the remaining 23 discrepancies, 19 favored a statistically significant result (i.e., a new statistically significant primary outcome was introduced in the published article, or a nonsignificant primary outcome was omitted or not defined as primary in the published article).

In a study of 100 trials published in high-impact journals between September 2006 and February 2007 and also registered in a trial registry, Ewart and colleagues found that in 34 cases (31 percent) the primary outcome had changed (10 by addition of a new primary outcome; 3 by promotion from a secondary outcome; 20 by deletion of a primary outcome; and 6 by demotion to a secondary outcome), and in 77 cases (70 percent) the secondary outcome changed (54 by addition of a new secondary outcome; 5 by demotion from a primary outcome; 48 by deletion; 3 by promotion to a primary outcome) (Ewart et al., 2009).

Acquiring unpublished data from industry can be challenging. However, when available, unpublished data can change an SR’s conclusions about the benefits and harms of treatment. A review by Eyding and colleagues demonstrates both the challenge of acquiring all relevant data from a manufacturer and how acquisition of those data can change the conclusion of an SR (Eyding et al., 2010). In their SR, which included both published and unpublished data acquired from the drug manufacturer, Eyding and colleagues found that published data overestimated the benefit of the antidepressant reboxetine over placebo by up to 115 percent and over selective serotonin reuptake inhibitors (SSRIs) by up to 23 percent. The addition of unpublished data changed the superiority of reboxetine vs. placebo to a nonsignificant difference, and changed the nonsignificant difference between reboxetine and SSRIs to inferiority for reboxetine. For the proportion of patients with adverse events and for rates of withdrawals due to adverse events, inclusion of unpublished data changed the nonsignificant difference between reboxetine and placebo to inferiority of reboxetine; for rates of withdrawals due to adverse events, it also changed the nonsignificant difference between reboxetine and fluoxetine to inferiority of fluoxetine.
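The mechanism behind such reversals can be illustrated with a toy fixed-effect meta-analysis: adding unpublished null trials pulls the pooled estimate toward the null. All effect sizes and variances below are invented for illustration; they are not the reboxetine data.

```python
import math

def pooled_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect estimates
    (e.g., log odds ratios); returns the pooled effect and its 95% CI."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical published trials: positive log odds ratios.
published = ([0.5, 0.6, 0.4], [0.04, 0.05, 0.04])
# The same trials plus hypothetical unpublished near-null trials.
with_unpublished = ([0.5, 0.6, 0.4, 0.0, -0.1, -0.05],
                    [0.04, 0.05, 0.04, 0.02, 0.02, 0.02])

for label, (effects, variances) in [("published only", published),
                                    ("with unpublished", with_unpublished)]:
    est, (lo, hi) = pooled_fixed_effect(effects, variances)
    print(f"{label}: {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these invented numbers the pooled 95% confidence interval excludes zero when only the “published” trials are pooled, but crosses zero once the null trials are added, mirroring the kind of reversal Eyding and colleagues reported.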

Although there are many studies documenting the problem of publication bias and selective outcome reporting bias, few studies have examined the effect of such bias on SR findings. One recent study by Kirkham and colleagues assessed the impact of outcome reporting bias in individual trials on 81 SRs published in 2006 and 2007 by Cochrane review groups (Kirkham et al., 2010). More than one third of the reviews (34 percent) included at least one RCT with suspected outcome reporting bias. The authors assessed the potential impact of the bias and found that meta-analyses omitting trials with presumed selective outcome reporting for the primary outcome could overestimate the treatment effect. They also concluded that trials should not be excluded from SRs simply because outcome data appear to be missing, when in fact the missing data may be due to selective outcome reporting. The authors suggest that in such cases the trialists should be asked to provide the outcome data that were analyzed but not reported.

Time-lag Bias

In an SR of the literature, Hopewell and her colleagues (2009a) found that trials with positive results (statistically significant in favor of the experimental arm) were published about a year sooner than trials with null or negative results (not statistically significant or statistically significant in favor of the control arm). This has implications for both systematic review teams and patients. If positive findings are more likely to be available during the search process, then SRs may provide a biased view of current knowledge. The limited evidence available implies that publication delays may be caused by the investigator rather than by journal editors (Dickersin et al., 2002b; Ioannidis et al., 1997, 1998).

Location Bias

The location of published research findings in journals with different ease of access or levels of indexing is also correlated with the nature and direction of results. For example, in a Cochrane methodology review, Hopewell and colleagues identified five studies that assessed the impact of including trials published in the grey literature in an SR (Hopewell et al., 2009a). The studies found that trials in the published literature tend to be larger and show an overall larger treatment effect than those trials found in the grey literature (primarily abstracts and unpublished data, such as data from trial registries, “file drawer data,” and data from individual trialists).

REFERENCES

Furlan, A. D., E. Irvin, and C. Bombardier. 2006. Limited search strategies were effective in finding relevant nonrandomized studies. Journal of Clinical Epidemiology 59(12):1303–1311.
Garritty, C., A. C. Tricco, M. Sampson, A. Tsertsvadze, K. Shojania, M. P. Eccles, J. Grimshaw, and D. Moher. 2009. A framework for updating systematic reviews. In Updating systematic reviews: The policies and practices of health care organizations involved in evidence synthesis. Garrity, C. M.Sc. thesis. Toronto, ON: University of Toronto.
Gartlehner, G., S. West, K. N. Lohr, L. Kahwati, J. Johnson, R. Harris, L. Whitener, C. Voisin, and S. Sutton. 2004. Assessing the need to update prevention guidelines: A comparison of two methods. International Journal for Quality in Health Care 16(5):399–406.
Gartlehner, G., R. A. Hansen, P. Thieda, A. M. DeVeaugh-Geiss, B. N. Gaynes, E. E. Krebs, L. J. Lux, L. C. Morgan, J. A. Shumate, L. G. Monroe, and K. N. Lohr. 2007. Comparative effectiveness of second-generation antidepressants in the pharmacologic treatment of adult depression. Rockville, MD: Agency for Healthcare Research and Quality.
Gillen, S., T. Schuster, C. Meyer zum Büschenfelde, H. Friess, and J. Kleeff. 2010. Preoperative/neoadjuvant therapy in pancreatic cancer: A systematic review and meta-analysis of response and resection percentages. PLoS Med 7(4):e1000267.
Glanville, J. M., C. Lefebvre, J. N. V. Miles, and J. Camosso-Stefinovic. 2006. How to identify randomized controlled trials in MEDLINE: Ten years on. Journal of the Medical Library Association 94(2):130–136.
Glasgow, R. E. 2006. RE-AIMing research for application: Ways to improve evidence for family medicine. Journal of the American Board of Family Medicine 19(1):11–19.
Glasgow, R. E., T. M. Vogt, and S. M. Boles. 1999. Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health 89(9):1322–1327.
Glass, G. V., and M. L. Smith. 1979. Meta-analysis of research on class size and achievement. Educational Evaluation and Policy Analysis 1(1):2–16.
Glasziou, P., I. Chalmers, M. Rawlins, and P. McCulloch. 2007. When are randomised trials unnecessary? Picking signal from noise. BMJ 334(7589):349–351.
Glasziou, P., E. Meats, C. Heneghan, and S. Shepperd. 2008. What is missing from descriptions of treatment in trials and reviews? BMJ 336(7659):1472–1474.
Gluud, L. L. 2006. Bias in clinical intervention research. American Journal of Epidemiology 163(6):493–501.
Golder, S., and Y. K. Loke. 2008. Is there evidence for biased reporting of published adverse effects data in pharmaceutical industry-funded studies? British Journal of Clinical Pharmacology 66(6):767–773.
Golder, S., and Y. Loke. 2009. Search strategies to identify information on adverse effects: A systematic review. Journal of the Medical Library Association 97(2):84–92.
Golder, S., and Y. K. Loke. 2010. Sources of information on adverse effects: A systematic review. Health Information & Libraries Journal 27(3):176–190.
Golder, S., Y. Loke, and H. M. McIntosh. 2008. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. Journal of Clinical Epidemiology 61(5):440–448.
Goldsmith, M. R., C. R. Bankhead, and J. Austoker. 2007. Synthesising quantitative and qualitative research in evidence-based patient information. Journal of Epidemiology & Community Health 61(3):262–270.
Gøtzsche, P. C. 1987. Reference bias in reports of drug trials. BMJ (Clinical Research Ed.) 295(6599):654–656.

Gøtzsche, P. C. 1989. Multiple publication of reports of drug trials. European Journal of Clinical Pharmacology 36(5):429–432.
Gøtzsche, P. C., A. Hrobjartsson, K. Maric, and B. Tendal. 2007. Data extraction errors in meta-analyses that use standardized mean differences. JAMA 298(4):430–437.
Green, S., and J. P. T. Higgins. 2008. Preparing a Cochrane review. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: John Wiley & Sons.
Gregoire, G., F. Derderian, and J. Le Lorier. 1995. Selecting the language of the publications included in a meta-analysis: Is there a Tower of Babel bias? Journal of Clinical Epidemiology 48(1):159–163.
Hansen, R. A., G. Gartlehner, D. Kaufer, K. N. Lohr, and T. Carey. 2006. Drug class review of Alzheimer’s drugs: Final report. http://www.ohsu.edu/drugeffectiveness/reports/final.cfm (accessed November 12, 2010).
Harris, R. P., M. Helfand, S. H. Woolf, K. N. Lohr, C. D. Mulrow, S. M. Teutsch, D. Atkins, and Methods Work Group, Third U.S. Preventive Services Task Force. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3 Suppl):21–35.
Hartling, L., F. A. McAlister, B. H. Rowe, J. Ezekowitz, C. Friesen, and T. P. Klassen. 2005. Challenges in systematic reviews of therapeutic devices and procedures. Annals of Internal Medicine 142(12 Pt 2):1100–1111.
Helfand, M., and H. Balshem. 2010. AHRQ series paper 2: Principles for developing guidance: AHRQ and the Effective Health Care Program. Journal of Clinical Epidemiology 63(5):484–490.
Helmer, D., I. Savoie, C. Green, and A. Kazanjian. 2001. Evidence-based practice: Extending the search to find material for the systematic review. Bulletin of the Medical Library Association 89(4):346–352.
Hempell, S., M. Suttorp, J. Miles, Z. Wang, M. Maglione, S. Morton, B. Johnsen, D. Valentine, and P. Shekelle. 2011. Assessing the empirical evidence of associations between internal validity and effect sizes in randomized controlled trials. Evidence Report/Technology Assessment No. HHSA 290 2007 10062 I (prepared by the Southern California Evidence-based Practice Center under Contract No. 290-2007-10062-I). Rockville, MD: AHRQ.
Heres, S., S. Wagenpfeil, J. Hamann, W. Kissling, and S. Leucht. 2004. Language bias in neuroscience: Is the Tower of Babel located in Germany? European Psychiatry 19(4):230–232.
Hernandez, D. A., M. M. El-Masri, and C. A. Hernandez. 2008. Choosing and using citation and bibliographic database software (BDS). Diabetic Education 34(3):457–474.
Higgins, J. P. T., and D. G. Altman. 2008. Assessing risk of bias in included studies. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: The Cochrane Collaboration.
Higgins, J. P. T., and J. J. Deeks. 2008. Selecting studies and collecting data. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: The Cochrane Collaboration.
Higgins, J. P. T., S. Green, and R. Scholten. 2008. Maintaining reviews: Updates, amendments and feedback. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: The Cochrane Collaboration.
Hirsch, L. 2008. Trial registration and results disclosure: Impact of U.S. legislation on sponsors, investigators, and medical journal editors. Current Medical Research and Opinion 24(6):1683–1689.

Hoaglin, D. C., R. L. Light, B. McPeek, F. Mosteller, and M. A. Stoto. 1982. Data for decisions. Cambridge, MA: Abt Books.
Hopewell, S., L. Wolfenden, and M. Clarke. 2008. Reporting of adverse events in systematic reviews can be improved: Survey results. Journal of Clinical Epidemiology 61(6):597–602.
Hopewell, S., K. Loudon, M. J. Clarke, et al. 2009a. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews 1:MR000006.pub3.
Hopewell, S., M. J. Clarke, C. Lefebvre, and R. W. Scherer. 2009b. Handsearching versus electronic searching to identify reports of randomized trials. Cochrane Database of Systematic Reviews 4:MR000001.pub2.
Horton, J., B. Vandermeer, L. Hartling, L. Tjosvold, T. P. Klassen, and N. Buscemi. 2010. Systematic review data extraction: Cross-sectional study showed that experience did not increase accuracy. Journal of Clinical Epidemiology 63(3):289–298.
Humphrey, L., B. K. S. Chan, S. Detlefsen, and M. Helfand. 2002. Screening for breast cancer. Prepared by the Oregon Health & Science University Evidence-based Practice Center under Contract No. 290-97-0018. Rockville, MD: Agency for Healthcare Research and Quality.
Huston, P., and D. Moher. 1996. Redundancy, disaggregation, and the integrity of medical research. Lancet 347(9007):1024–1026.
Huth, E. J. 1986. Irresponsible authorship and wasteful publication. Annals of Internal Medicine 104(2):257–259.
ICMJE (International Committee of Medical Journal Editors). 2010. Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication. http://www.icmje.org/urm_full.pdf (accessed July 8, 2010).
Ioannidis, J. P. A., J. C. Cappelleri, H. S. Sacks, and J. Lau. 1997. The relationship between study design, results, and reporting of randomized clinical trials of HIV infection. Controlled Clinical Trials 18(5):431–444.
Ioannidis, J. P. A. 1998. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 279:281–286.
IOM (Institute of Medicine). 2008. Knowing what works in health care: A roadmap for the nation. Edited by J. Eden, B. Wheatley, B. McNeil, and H. Sox. Washington, DC: The National Academies Press.
IOM. 2009. Initial national priorities for comparative effectiveness research. Washington, DC: The National Academies Press.
ISI Web of Knowledge. 2009. Web of science. http://images.isiknowledge.com/WOKRS49B3/help/WOS/h_database.html (accessed May 28, 2010).
Jones, A. P., T. Remmington, P. R. Williamson, D. Ashby, and R. S. Smyth. 2005. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews. Journal of Clinical Epidemiology 58(7):741–742.
Jorgensen, A. W., K. L. Maric, B. Tendal, A. Faurschou, and P. C. Gøtzsche. 2008. Industry-supported meta-analyses compared with meta-analyses with non-profit or no support: Differences in methodological quality and conclusions. BMC Medical Research Methodology 8:60.
Jüni, P. 1999. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 282:1054–1060.
Jüni, P., M. Egger, D. G. Altman, and G. D. Smith. 2001. Assessing the quality of randomised controlled trials. In Systematic reviews in health care: Meta-analysis in context, edited by M. Egger, G. D. Smith, and D. G. Altman. London, UK: BMJ Publishing Group.

Jüni, P., F. Holenstein, J. Sterne, C. Bartlett, and M. Egger. 2002. Direction and impact of language bias in meta-analyses of controlled trials: Empirical study. International Journal of Epidemiology 31(1):115–123.
Kelley, G., K. Kelley, and Z. Vu Tran. 2004. Retrieval of missing data for meta-analysis: A practical example. International Journal of Technology Assessment in Health Care 20(3):296.
Khan, K. S., and J. Kleijnen. 2001. Stage II, Conducting the review: Phase 4, Selection of studies. In CRD Report No. 4, edited by K. S. Khan, G. ter Riet, H. Glanville, A. J. Sowden, and J. Kleijnen. York, UK: NHS Centre for Reviews and Dissemination, University of York.
Kirkham, J. J., K. M. Dwan, D. G. Altman, C. Gamble, S. Dodd, R. S. Smyth, and P. R. Williamson. 2010. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 340(7747):637–640.
Kjaergard, L. L., and B. Als-Nielsen. 2002. Association between competing interests and authors' conclusions: Epidemiological study of randomised clinical trials published in the BMJ. BMJ 325(7358):249.
Knowledge for Health. 2010. About POPLINE. http://www.popline.org/aboutpl.html (accessed June 1, 2010).
Kuper, H., A. Nicholson, and H. Hemingway. 2006. Searching for observational studies: What does citation tracking add to PubMed? A case study in depression and coronary heart disease. BMC Medical Research Methodology 6:4.
Lee, K., P. Bacchetti, and I. Sim. 2008. Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Medicine 5(9):1348–1356.
Lefebvre, C., E. Manheimer, and J. Glanville. 2008. Searching for studies. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: The Cochrane Collaboration.
Lemeshow, A. R., R. E. Blum, J. A. Berlin, M. A. Stoto, and G. A. Colditz. 2005. Searching one or two databases was insufficient for meta-analysis of observational studies. Journal of Clinical Epidemiology 58(9):867–873.
Lexchin, J., L. A. Bero, B. Djulbegovic, and O. Clark. 2003. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 326(7400):1167–1170.
Li, J., Q. Zhang, M. Zhang, and M. Egger. 2007. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2:CD002755.
Liberati, A., D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gøtzsche, J. P. A. Ioannidis, M. Clarke, P. J. Devereaux, J. Kleijnen, and D. Moher. 2009. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine 151(4):W1–W30.
Light, R. L., and D. Pillemer. 1984. Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Linde, K., and S. N. Willich. 2003. How objective are systematic reviews? Differences between reviews on complementary medicine. Journal of the Royal Society of Medicine 96(1):17–22.
Lohr, K. 1998. Grading articles and evidence: Issues and options. Research Triangle Park, NC: RTI-UNC Evidence-based Practice Center.
Lohr, K. N., and T. S. Carey. 1999. Assessing "best evidence": Issues in grading the quality of studies for systematic reviews. Joint Commission Journal on Quality Improvement 25(9):470–479.

Loudon, K., S. Hopewell, M. Clarke, D. Moher, R. Scholten, A. Eisinga, and S. D. French. 2008. A decision tree and checklist to guide decisions on whether, and when, to update Cochrane reviews. In A decision tool for updating Cochrane reviews. Chichester, UK: The Cochrane Collaboration.
Lundh, A., S. L. Knijnenburg, A. W. Jorgensen, E. C. van Dalen, and L. C. M. Kremer. 2009. Quality of systematic reviews in pediatric oncology: A systematic review. Cancer Treatment Reviews 35(8):645–652.
Lynch, J. R., M. R. A. Cunningham, W. J. Warme, D. C. Schaad, F. M. Wolf, and S. S. Leopold. 2007. Commercially funded and United States-based research is more likely to be published: Good-quality studies with negative outcomes are not. Journal of Bone and Joint Surgery (American Volume) 89A(5):1010–1018.
MacLean, C. H., S. C. Morton, J. J. Ofman, E. A. Roth, P. G. Shekelle, and the Southern California Evidence-Based Practice Center. 2003. How useful are unpublished data from the Food and Drug Administration in meta-analysis? Journal of Clinical Epidemiology 56(1):44–51.
Mathieu, S., I. Boutron, D. Moher, D. G. Altman, and P. Ravaud. 2009. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302(9):977–984.
McAuley, L., B. Pham, P. Tugwell, and D. Moher. 2000. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet 356(9237):1228–1231.
McDonagh, M. S., K. Peterson, S. Carson, R. Fu, and S. Thakurta. 2010. Drug class review: Atypical antipsychotic drugs. Update 3. http://derp.ohsu.edu/final/AAP_final_report_update%203_version%203_JUL_10.pdf (accessed November 4, 2010).
McGauran, N., B. Wieseler, J. Kreis, Y. Schuler, H. Kolsch, and T. Kaiser. 2010. Reporting bias in medical research: A narrative review. Trials 11:37.
McGowan, J., and M. Sampson. 2005. Systematic reviews need systematic searchers. Journal of the Medical Library Association 93(1):74–80.
McKibbon, K. A., N. L. Wilczynski, R. B. Haynes, and the Hedges Team. 2009. Retrieving randomized controlled trials from Medline: A comparison of 38 published search filters. Health Information and Libraries Journal 26(3):187–202.
Miller, J. 2010. Registering clinical trial results: The next step. JAMA 303(8):773–774.
Miller, J. N., G. A. Colditz, and F. Mosteller. 1989. How study design affects outcomes in comparisons of therapy. II: Surgical. Statistics in Medicine 8(4):455–466.
Moher, D., and A. Tsertsvadze. 2006. Systematic reviews: When is an update an update? Lancet 367(9514):881–883.
Moher, D., P. Fortin, A. R. Jadad, P. Jüni, T. Klassen, J. LeLorier, A. Liberati, K. Linde, and A. Penna. 1996. Completeness of reporting of trials published in languages other than English: Implications for conduct and reporting of systematic reviews. Lancet 347(8998):363–366.
Moher, D., B. Pham, A. Jones, D. J. Cook, A. R. Jadad, M. Moher, P. Tugwell, and T. P. Klassen. 1998. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 352(9128):609–613.
Moher, D., D. J. Cook, S. Eastwood, I. Olkin, D. Rennie, and D. F. Stroup. 1999. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Lancet 354(9193):1896–1900.
Moher, D., B. Pham, T. P. Klassen, K. F. Schulz, J. A. Berlin, A. R. Jadad, and A. Liberati. 2000. What contributions do languages other than English make on the results of meta-analyses? Journal of Clinical Epidemiology 53(9):964–972.
Moher, D., B. Pham, M. L. Lawson, and T. P. Klassen. 2003. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health Technology Assessment 7(41):1–90.

Moher, D., J. Tetzlaff, A. C. Tricco, M. Sampson, and D. G. Altman. 2007a. Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine 4(3):447–455.
Moher, D., A. Tsertsvadze, A. C. Tricco, M. Eccles, J. Grimshaw, M. Sampson, and N. Barrowman. 2007b. A systematic review identified few methods and strategies describing when and how to update systematic reviews. Journal of Clinical Epidemiology 60(11):1095–1104.
Moja, L., E. Telaro, R. D'Amico, I. Moschetti, L. Coe, and A. Liberati. 2005. Assessment of methodological quality of primary studies by systematic reviews: Results of the metaquality cross sectional study. BMJ 330:1053.
Mojon-Azzi, S. M., X. Jiang, U. Wagner, and D. S. Mojon. 2004. Redundant publications in scientific ophthalmologic journals: The tip of the iceberg? Ophthalmology 111(5):863–866.
Montori, V. M., P. J. Devereaux, N. K. Adhikari, K. E. A. Burns, C. H. Eggert, M. Briel, C. Lacchetti, T. W. Leung, E. Darling, D. M. Bryant, H. C. Bucher, H. J. Schünemann, M. O. Meade, D. J. Cook, P. J. Erwin, A. Sood, R. Sood, B. Lo, C. A. Thompson, Q. Zhou, E. Mills, and G. Guyatt. 2005. Randomized trials stopped early for benefit: A systematic review. JAMA 294(17):2203–2209.
Moore, T. 1995. Deadly medicine: Why tens of thousands of heart patients died in America's worst drug disaster. New York: Simon & Schuster.
Morrison, A., K. Moulton, M. Clark, J. Polisena, M. Fiander, M. Mierzwinski-Urban, S. Mensinkai, T. Clifford, and B. Hutton. 2009. English-language restriction when conducting systematic review-based meta-analyses: Systematic review of published studies. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health.
Mrkobrada, M., H. Thiessen-Philbrook, R. B. Haynes, A. V. Iansavichus, F. Rehman, and A. X. Garg. 2008. Need for quality improvement in renal systematic reviews. Clinical Journal of the American Society of Nephrology 3(4):1102–1114.
Mulrow, C. D., and A. D. Oxman. 1994. Cochrane Collaboration handbook. The Cochrane Library. Chichester, UK: The Cochrane Collaboration.
Nallamothu, B. K., R. A. Hayward, and E. R. Bates. 2008. Beyond the randomized clinical trial: The role of effectiveness studies in evaluating cardiovascular therapies. Circulation 118(12):1294–1303.
Nassir Ghaemi, S., A. A. Shirzadi, and M. Filkowski. 2008. Publication bias and the pharmaceutical industry: The case of lamotrigine in bipolar disorder. Medscape Journal of Medicine 10(9):211.
National Library of Medicine. 2008. MEDLINE fact sheet. http://www.nlm.nih.gov/pubs/factsheets/medline.html (accessed May 28, 2010).
New York Academy of Medicine. 2010. Grey literature report. http://www.nyam.org/library/pages/grey_literature_report (accessed June 2, 2010).
Nieminen, P., G. Rucker, J. Miettunen, J. Carpenter, and M. Schumacher. 2007. Statistically significant papers in psychiatry were cited more often than others. Journal of Clinical Epidemiology 60(9):939–946.
NLM (National Library of Medicine). 2009. Fact sheet: ClinicalTrials.gov. http://www.nlm.nih.gov/pubs/factsheets/clintrial.html (accessed June 16, 2010).
Norris, S., D. Atkins, W. Bruening, S. Fox, E. Johnson, R. Kane, S. C. Morton, M. Oremus, M. Ospina, G. Randhawa, K. Schoelles, P. Shekelle, and M. Viswanathan. 2010. Selecting observational studies for comparing medical interventions. In Methods guide for comparative effectiveness reviews, edited by Agency for Healthcare Research and Quality. http://www.effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayProduct&productID=454 (accessed January 19, 2011).

Nüesch, E., S. Trelle, S. Reichenbach, A. W. S. Rutjes, B. Tschannen, D. G. Altman, M. Egger, and P. Jüni. 2010. Small study effects in meta-analyses of osteoarthritis trials: Meta-epidemiological study. BMJ 341(7766):241.
O'Connor, A. B. 2009. The need for improved access to FDA reviews. JAMA 302(2):191–193.
Okike, K., M. S. Kocher, C. T. Mehlman, J. D. Heckman, and M. Bhandari. 2008. Publication bias in orthopedic research: An analysis of scientific factors associated with publication in the Journal of Bone and Joint Surgery (American Volume). Journal of Bone and Joint Surgery (American Volume) 90A(3):595–601.
Olson, C. M., D. Rennie, D. Cook, K. Dickersin, A. Flanagin, J. W. Hogan, Q. Zhu, J. Reiling, and B. Pace. 2002. Publication bias in editorial decision making. JAMA 287(21):2825–2828.
Online Computer Library Center. 2010. The OAIster® database. http://www.oclc.org/oaister/ (accessed June 3, 2010).
OpenSIGLE. 2010. OpenSIGLE. http://opensigle.inist.fr/ (accessed June 2, 2010).
Peinemann, F., N. McGauran, S. Sauerland, and S. Lange. 2008. Disagreement in primary study selection between systematic reviews on negative pressure wound therapy. BMC Medical Research Methodology 8:41.
Perlin, J. B., and J. Kupersmith. 2007. Information technology and the inferential gap. Health Affairs 26(2):W192–W194.
Pham, B., T. P. Klassen, M. L. Lawson, and D. Moher. 2005. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. Journal of Clinical Epidemiology 58(8):769–776.
Pildal, J., A. Hrobjartsson, K. J. Jorgensen, J. Hilden, D. G. Altman, and P. C. Gøtzsche. 2007. Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. International Journal of Epidemiology 36(4):847–857.
ProQuest. 2010. ProQuest dissertations & theses database. http://www.proquest.com/en-US/catalogs/databases/detail/pqdt.shtml (accessed June 2, 2010).
Ravnskov, U. 1992. Cholesterol lowering trials in coronary heart disease: Frequency of citation and outcome. BMJ 305(6844):15–19.
Ravnskov, U. 1995. Quotation bias in reviews of the diet–heart idea. Journal of Clinical Epidemiology 48(5):713–719.
RefWorks. 2009. RefWorks. http://refworks.com/content/products/content.asp (accessed July 2, 2010).
Relevo, R., and H. Balshem. 2011. Finding evidence for comparing medical interventions. In Methods guide for comparative effectiveness reviews, edited by Agency for Healthcare Research and Quality. http://www.effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayProduct&productID=605 (accessed January 19, 2011).
Rising, K., P. Bacchetti, and L. Bero. 2008. Reporting bias in drug trials submitted to the Food and Drug Administration: Review of publication and presentation. PLoS Medicine 5(11):1561–1570.
Rosenthal, E. L., J. L. Masdon, C. Buckman, and M. Hawn. 2003. Duplicate publications in the otolaryngology literature. Laryngoscope 113(5):772–774.
Ross, J. S., G. K. Mulvey, E. M. Hines, S. E. Nissen, and H. M. Krumholz. 2009. Trial publication after registration in ClinicalTrials.gov: A cross-sectional analysis. PLoS Medicine 6(9):e1000144.
Rothwell, P. M. 1995. Can overall results of clinical trials be applied to all patients? Lancet 345(8965):1616–1619.

Rothwell, P. M. 2005. External validity of randomised controlled trials: To whom do the results of this trial apply? Lancet 365(9453):82–93.
Rothwell, P. M. 2006. Factors that can affect the external validity of randomised controlled trials. PLoS Clinical Trials 1(1):e9.
Roundtree, A. K., M. A. Kallen, M. A. Lopez-Olivo, B. Kimmel, B. Skidmore, Z. Ortiz, V. Cox, and M. E. Suarez-Almazor. 2008. Poor reporting of search strategy and conflict of interest in over 250 narrative and systematic reviews of two biologic agents in arthritis: A systematic review. Journal of Clinical Epidemiology 62(2):128–137.
Royle, P., and R. Milne. 2003. Literature searching for randomized controlled trials used in Cochrane reviews: Rapid versus exhaustive searches. International Journal of Technology Assessment in Health Care 19(4):591–603.
Royle, P., and N. Waugh. 2003. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technology Assessment 7(34):1–51.
Royle, P., L. Bain, and N. Waugh. 2005. Systematic reviews of epidemiology in diabetes: Finding the evidence. BMC Medical Research Methodology 5(1):2.
Sampson, M., and J. McGowan. 2006. Errors in search strategies were identified by type and frequency. Journal of Clinical Epidemiology 59(10):1057.e1–1057.e9.
Sampson, M., K. G. Shojania, C. Garritty, T. Horsley, M. Ocampo, and D. Moher. 2008. Systematic reviews can be produced and published faster. Journal of Clinical Epidemiology 61(6):531–536.
Sampson, M., J. McGowan, E. Cogo, J. Grimshaw, D. Moher, and C. Lefebvre. 2009. An evidence-based practice guideline for the peer review of electronic search strategies. Journal of Clinical Epidemiology 62(9):944–952.
Sanderson, S., I. D. Tatt, and J. P. Higgins. 2007. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: A systematic review and annotated bibliography. International Journal of Epidemiology 36(3):666–676.
Savoie, I., D. Helmer, C. J. Green, and A. Kazanjian. 2003. Beyond MEDLINE: Reducing bias through extended systematic review search. International Journal of Technology Assessment in Health Care 19(1):168–178.
Schein, M., and R. Paladugu. 2001. Redundant surgical publications: Tip of the iceberg? Surgery 129(6):655–661.
Scherer, R. W., P. Langenberg, and E. von Elm. 2007. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews 2:MR000005.
Schmidt, L. M., and P. C. Gøtzsche. 2005. Of mites and men: Reference bias in narrative review articles: A systematic review. Journal of Family Practice 54(4):334–338.
Schulz, K. F., I. Chalmers, R. J. Hayes, and D. G. Altman. 1995. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 273(5):408–412.
Scopus. 2010. Scopus in detail. http://info.scopus.com/scopus-in-detail/content-coverage-guide/ (accessed May 28, 2010).
Shikata, S., T. Nakayama, Y. Noguchi, Y. Taji, and H. Yamagishi. 2006. Comparison of effects in randomized controlled trials with observational studies in digestive surgery. Annals of Surgery 244(5):668–676.
Shojania, K. G., M. Sampson, M. T. Ansari, J. Ji, S. Doucette, and D. Moher. 2007. How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine 147(4):224–233.
Silagy, C. A., P. Middleton, and S. Hopewell. 2002. Publishing protocols of systematic reviews: Comparing what was done to what was planned. JAMA 287(21):2831–2834.

Sismondo, S. 2008. Pharmaceutical company funding and its consequences: A qualitative systematic review. Contemporary Clinical Trials 29(2):109–113.
Song, F., S. Parekh-Bhurke, L. Hooper, Y. K. Loke, J. J. Ryder, A. J. Sutton, C. B. Hing, and I. Harvey. 2009. Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology 9(1):79–93.
Song, F., S. Parekh, L. Hooper, Y. K. Loke, J. Ryder, A. J. Sutton, C. Hing, C. S. Kwok, C. Pang, and I. Harvey. 2010. Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment 14(8):1–193.
Sterne, J., M. Egger, and D. Moher, eds. 2008. Chapter 10: Addressing reporting biases. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. Chichester, UK: The Cochrane Collaboration.
Sutton, A. J., S. Donegan, Y. Takwoingi, P. Garner, C. Gamble, and A. Donald. 2009. An encouraging assessment of methods to inform priorities for updating systematic reviews. Journal of Clinical Epidemiology 62(3):241–251.
Thompson, R. L., E. V. Bandera, V. J. Burley, J. E. Cade, D. Forman, J. L. Freudenheim, D. Greenwood, D. R. Jacobs, Jr., R. V. Kalliecharan, L. H. Kushi, M. L. McCullough, L. M. Miles, D. F. Moore, J. A. Moreton, T. Rastogi, and M. J. Wiseman. 2008. Reproducibility of systematic literature reviews on food, nutrition, physical activity and endometrial cancer. Public Health Nutrition 11(10):1006–1014.
Thomson Reuters. 2010. EndNote web information. http://endnote.com/enwebinfo.asp (accessed July 2, 2010).
Tramer, M. R., D. J. Reynolds, R. A. Moore, and H. J. McQuay. 1997. Impact of covert duplicate publication on meta-analysis: A case study. BMJ 315(7109):635–640.
Tricco, A. C., J. Tetzlaff, M. Sampson, D. Fergusson, E. Cogo, T. Horsley, and D. Moher. 2008. Few systematic reviews exist documenting the extent of bias: A systematic review. Journal of Clinical Epidemiology 61(5):422–434.
Turner, E. H., A. M. Matthews, E. Linardatos, R. A. Tell, and R. Rosenthal. 2008. Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine 358(3):252–260.
Van de Voorde, C., and C. Leonard. 2007. Search for evidence and critical appraisal. Brussels, Belgium: Belgian Health Care Knowledge Centre.
van Tulder, M. W., M. Suttorp, S. Morton, L. M. Bouter, and P. Shekelle. 2009. Empirical evidence of an association between internal validity and effect size in randomized controlled trials of low-back pain. Spine (Phila PA 1976) 34(16):1685–1692.
Vandenbroucke, J. P. 2004. Benefits and harms of drug treatments: Observational studies and randomised trials should learn from each other. BMJ 329(7456):2–3.
Vedula, S. S., L. Bero, R. W. Scherer, and K. Dickersin. 2009. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. New England Journal of Medicine 361(20):1963–1971.
Voisin, C. E., C. de la Varre, L. Whitener, and G. Gartlehner. 2008. Strategies in assessing the need for updating evidence-based guidelines for six clinical topics: An exploration of two search methodologies. Health Information and Libraries Journal 25(3):198–207.
von Elm, E., G. Poglia, B. Walder, and M. R. Tramer. 2004. Different patterns of duplicate publication: An analysis of articles used in systematic reviews. JAMA 291(8):974–980.
Walker, C. F., K. Kordas, R. J. Stoltzfus, and R. E. Black. 2005. Interactive effects of iron and zinc on biochemical and functional outcomes in supplementation trials. American Journal of Clinical Nutrition 82(1):5–12.

WAME (World Association of Medical Editors). 2010. Publication ethics policies for medical journals. http://www.wame.org/resources/publication-ethics-policies-for-medical-journals (accessed November 10, 2010).
Wennberg, D. E., F. L. Lucas, J. D. Birkmeyer, C. E. Bredenberg, and E. S. Fisher. 1998. Variation in carotid endarterectomy mortality in the Medicare population: Trial hospitals, volume, and patient characteristics. JAMA 279(16):1278–1281.
Whitlock, E. P., E. A. O'Connor, S. B. Williams, T. L. Beil, and K. W. Lutz. 2008. Effectiveness of weight management programs in children and adolescents. Rockville, MD: AHRQ.
WHO (World Health Organization). 2006. African Index Medicus. http://indexmedicus.afro.who.int/ (accessed June 2, 2010).
WHO. 2010. International Clinical Trials Registry Platform. http://www.who.int/ictrp/en/ (accessed June 17, 2010).
Wieland, S., and K. Dickersin. 2005. Selective exposure reporting and Medline indexing limited the search sensitivity for observational studies of the adverse effects of oral contraceptives. Journal of Clinical Epidemiology 58(6):560–567.
Wilczynski, N. L., R. B. Haynes, A. Eady, B. Haynes, S. Marks, A. McKibbon, D. Morgan, C. Walker-Dilks, S. Walter, S. Werre, N. Wilczynski, and S. Wong. 2004. Developing optimal search strategies for detecting clinically sound prognostic studies in MEDLINE: An analytic survey. BMC Medicine 2:23.
Wilt, T. J. 2006. Comparison of endovascular and open surgical repairs for abdominal aortic aneurysm. Rockville, MD: AHRQ.
Wood, A. J. J. 2009. Progress and deficiencies in the registration of clinical trials. New England Journal of Medicine 360(8):824–830.
Wood, A. M., I. R. White, and S. G. Thompson. 2004. Are missing outcome data adequately handled? A review of published randomized controlled trials in major medical journals. Clinical Trials 1(4):368–376.
Wood, L., M. Egger, L. L. Gluud, K. F. Schulz, P. Jüni, D. G. Altman, C. Gluud, R. M. Martin, A. J. Wood, and J. A. Sterne. 2008. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: Meta-epidemiological study. BMJ 336(7644):601–605.
Wortman, P. M., and W. H. Yeaton. 1983. Synthesis of results in controlled trials of coronary bypass graft surgery. In Evaluation studies review annual, edited by R. L. Light. Beverly Hills, CA: Sage.
Yoshii, A., D. A. Plaut, K. A. McGraw, M. J. Anderson, and K. E. Wellik. 2009. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association 97(1):21–29.
Zarin, D. A. 2005. Clinical trial registration. New England Journal of Medicine 352(15):1611.
Zarin, D. A., T. Tse, and N. C. Ide. 2005. Trial registration at ClinicalTrials.gov between May and October 2005. New England Journal of Medicine 353(26):2779–2787.
