• Greater rigor is needed in the use of observational methods; even the best methods used incorrectly will yield bad results. (Goodman)
• No one method is infallible; multiple methods are needed to answer a question and inform decision making. (Slutsky)
• Validating methods for exploring relationships in high-quality databases is critical to understanding how large databases might best contribute to evidence generation. (Ryan)
• The right question must be asked, and the right analytic methods applied, in order to validly compare observational studies and randomized controlled trials (RCTs). (Hernán and Kaizar)
• A better understanding is needed of the fundamental biologic differences that drive treatment effect heterogeneity. (Horwitz)
• Funder requirements for parallel observational studies and RCTs assessing the same question can aid in evaluating the validity and generalizability of the results of both approaches. (Goodman)
• Registration of observational studies in databases such as clinicaltrials.gov may counteract the selective reporting of studies and eliminate needless repetition of studies. (Santanello)
• Greater transparency in the publication of approaches and methods used, particularly in how observational studies and RCTs are evaluated in systematic reviews, can help the interpretation of results to inform decision making. (Mulrow and Slutsky)
• Stakeholder engagement, particularly of patients and providers, is critical for studies to be maximally useful to decision makers. Studies should be pegged to stakeholder interests and evidence needs and be paired with the most appropriate methods to deliver actionable results. (Slutsky)
• Challenges such as innumeracy and a lack of understanding of uncertainty will require consideration when approaching the use and dissemination of observational studies results. (Slutsky)
A number of common themes emerged from the workshop presentations and discussions. These themes both touched on the role of observational studies as contributors to clinical evidence and identified priorities for innovation in the methods and conduct of clinical trials on the basis of current gaps and shortfalls. Participants and speakers shared their thoughts on changes in policies, particularly those of funders and journal editors, that can move research toward producing practical evidence useful to health care decision making. Finally, the engagement of stakeholders (patients, clinicians, researchers, and health systems) was a common subject of discussion.
Workshop speakers and participants cautioned against oversimplifying the discussion of the use of appropriate methods to inform decision making. Steven N. Goodman called for greater rigor in the use of observational methods, reminding everyone that even the best methods, used incorrectly, will yield bad results. The need to use multiple methods to answer a question was highlighted, and it was noted that few studies are robust enough to stand on their own and that no one method is infallible.
To get a better understanding of how large databases might best contribute to the generation of evidence, the validation of methods for exploring relationships in high-quality databases was highlighted. Patrick Ryan suggested that the creation of a reference set of positive and negative controls in the context of comparative effectiveness research, as the Observational Medical Outcomes Partnership has done for safety, would be one approach to aid the validation effort. At the same time, validation of novel approaches to experimental methodologies, such as adaptive trial designs and methods, to bolster both observational and experimental approaches was mentioned. The specific development of observational analogs of intent-to-treat and per-protocol-effect methods to allow broader application of innovative statistical methods was highlighted.
In making comparisons between observational studies and randomized controlled trials (RCTs), Miguel A. Hernán and Eloise E. Kaizar emphasized the importance of ensuring that the questions asked be the same and that the confounders be well understood so that comparisons are of apples to apples. Better understanding of the quality of the data available from observational studies, the implications of the quality of the data for use of the evidence, and approaches to improve those data and mitigate current problems were all suggested to be crucial to ensuring that the use of innovative methods leads to the production of credible, useable evidence. The increased capture and use of patient-centered outcomes and patient-contributed data were suggested to be priorities in this area.
Several workshop participants called for moving beyond discussions of whether treatment effect heterogeneity is real toward a better understanding of the fundamental biologic differences that are likely its main source. Kent highlighted the limitations of current approaches, including analytical approaches that cannot contend with studies that are underpowered to detect subgroup effects and the one-variable-at-a-time approach to subgroup analysis. Instead, approaches that assess combinations of variables, such as multivariate risk models, were suggested.
Workshop presentations and discussions reinforced the importance of observational studies as a component of a robust clinical research enterprise. Speakers emphasized the complementarity of observational studies to other methods in supporting health care decision making and highlighted the importance of continued support from funders and of recognition by tenure and promotion committees of the valuable contributions such studies make. Encouraging greater collection and use of patient-contributed information in observational research was also noted to be a priority for funder engagement.
Several participants made specific suggestions on how the complementarity of observational studies and RCTs could be improved, including the suggestion that observational studies enrolling patients excluded from an RCT be run in parallel. Participants who spoke suggested that this could be done through mechanisms such as registries and could be encouraged by funder and journal requirements. In addition, Steven N. Goodman pointed out, this would be a tremendous opportunity for the development of methods for examining the factors that make RCTs and observational studies equivalent or not.
Jean R. Slutsky noted that the field of clinical research needs to start thinking in terms of a body of evidence and the quality of the evidence that contributes to that body of evidence. One of the challenges to this perspective that participants cited is the lack of transparency in observational study methods and the reporting of their results. Selective reporting is a major hindrance to being able to consider the full body of evidence around a research question and can lead to unnecessary repetition of studies and wasted resources. Registration of observational studies, either as part of the clinicaltrials.gov database or through the creation of a new database, was suggested to be one way to mitigate this issue.
In thinking about the dissemination of research results to inform decision making, Slutsky called for greater transparency in published studies, particularly in how observational studies and RCTs are evaluated in systematic reviews. The development of a framework to organize the evidence and evaluate when the evidence is good enough to inform decision making, depending on the methods used and the data source, was suggested to be a practical approach to address this.
Greater stakeholder engagement around observational studies and their role in supporting health care decisions was a theme of many of the discussions throughout the workshop.
Participants who spoke repeatedly noted that for studies to be maximally useful to stakeholders, they must be pegged to stakeholder interests and evidence needs and be paired with the methods most appropriate to delivering actionable results. The engagement of stakeholders, particularly patients, clinicians, and health care delivery systems, in research priority setting was cited as an important step in realizing this goal. The engagement of clinicians, as both the collectors of data and the consumers of the evidence generated by clinical research, was highlighted as critical to carrying out high-quality studies and to ensuring that study findings have an impact on clinical practice. Similarly, observational studies must be valuable to health care delivery systems if those systems are to dedicate staff time and resources to them. Several speakers and workshop participants
who spoke suggested that patients, the ultimate stakeholders in health care decisions, should be at the center of the research questions for the results to be maximally useful in informing their decisions and in building a foundation for their participation in clinical research.
Workshop participants who spoke highlighted several challenges to a broader approach to stakeholder engagement. The issue of innumeracy, or a lack of familiarity with mathematical concepts, was highlighted as a challenge of particular import for both patients and clinicians.
With this issue in mind, the use of targeted communication strategies was suggested as an approach to communicating more effectively about the value of observational studies and to disseminating their results. Several workshop participants who spoke highlighted similar challenges in communicating uncertainty to individual patients, clinicians, and the broader population. In addition to the use of targeted communication strategies, the incorporation of uncertainty into robust tools for decision making was suggested.