KEY SPEAKER THEMES
• Issues with the quality of available data sources, such as misclassification, misdiagnosis, or missing data, should be taken into account when prioritizing the funding of observational studies.
• Observational studies should be used to evaluate diagnosis, prognosis, and evaluation strategies as well as the benefits and harms of therapy.
• Clear identification of questions and outcomes of interest is important in funding observational studies and publishing their results. This can help ensure transparency and combat selective reporting.
• Certain study designs are appropriate for answering specific types of questions, and these questions should address the needs of decision makers.
• Few studies are robust enough to stand on their own. The field needs to start thinking in terms of a body of evidence and the quality of evidence that contributes to that body.
• Translation and dissemination of study results, and the communication and incorporation of uncertainty into decision tools are areas in need of much work in order for research to inform decision making.
• Funding agencies such as the Patient-Centered Outcomes Research Institute (PCORI) could mandate or strongly encourage the conduct of companion randomized controlled trial (RCT) and observational studies, following patients excluded from RCTs and aligning information collection between the two methods, in order to look at the commonalities between methods and better characterize biases.
• Funding agencies, including PCORI, should fund efforts to validate different methods for exploring relationships in high-quality databases. This would be key to giving the field confidence about these methods.
• PCORI and other funders should set standards regarding the full range of expertise needed to carry out the clinical research they fund.
In the workshop’s final session, three panelists aimed to capture the lessons learned over the previous day and a half of presentations and discussions. The session’s three panelists—Cynthia D. Mulrow, senior deputy editor of the Annals of Internal Medicine; Jean R. Slutsky, director of the Center for Outcomes and Evidence, Agency for Healthcare Research and Quality (AHRQ); and Steven N. Goodman—were asked to reflect on the take-home message from the workshop and identify potential strategies to move forward.
Speaking from the perspective of an editor at a journal (the Annals of Internal Medicine) that publishes work that can be used to inform patient and provider decisions, Cynthia D. Mulrow said that over the previous year alone she had seen more than 1,000 papers describing observational studies but had published only about 5 percent of those papers, which she said makes her a skeptic about the value of such studies. She said that she worries that this situation is a threat to validity and to the success of the Patient-Centered Outcomes Research Institute (PCORI) and that the field may be setting overly high or overly broad expectations for many different
stakeholders. She said that she worries that PCORI might “fund research that addresses nebulous questions with multiple different types of outcomes that really do not provide good information that can be used for patient decision making.”
To avoid those threats, she said that PCORI needs to spend a significant amount of time identifying and prioritizing the research questions that it wants to have addressed and the research that it wants to fund and that it needs to do so with cross talk among different types of experts, whether they are clinical, patient, or methodological experts. She recommended that PCORI look at where data that measure what the field wants to have measured are available or easily collectible to ensure that the research funded not only matches a clinical question but also can supply the data needed to answer that question. Mulrow added, “I don’t think we should just be amassing information for information’s sake without paying any attention to things like misclassification, misdiagnosis, or large amounts of missing data that are likely to be occurring in a dataset.”
Another issue that the workshop presentations raised for her was the almost exclusive focus on the benefits and harms of therapy. “Medical care focuses on things other than therapy, whether that is diagnosis or prognosis or evaluation strategies that begin a management strategy, so I would like to see PCORI spend some time and some money on funding work other than just therapy,” said Mulrow.
She also said that she would have liked to have heard more about transparent reporting and selective reporting, which she believes are particular problems with observational research. She recommended that any observational studies that PCORI funds should start with clearly identified questions and outcomes of interest so that those who read the resulting papers can be assured that selective reporting has not occurred.
With respect to the last issue, Michael McGinnis asked Mulrow for suggestions on how to better capture the array of observational data or studies that are under way to improve transparency. Mulrow said that she would not recommend spending much money setting up a registry similar to clinicaltrials.gov for observational studies. “I think at this point that it behooves groups that are funding observational research to make clear that they have well-designed protocols that are available to all and that can be used throughout the process of that research,” she said.
Jean R. Slutsky commended the workshop presenters and participants for avoiding the observational study-versus-randomized controlled trial (RCT) conundrum and instead acknowledging the challenge inherent in combining multiple approaches to study different aspects of clinical care.
She also noted that the workshop participants paid particular attention to the fact that certain study designs are appropriate for answering specific types of questions and that these questions need to address the needs of decision makers. The latter point is often something that the field avoids discussing, she observed. She added that the important clinical questions can differ among decision makers. As an example, she cited work that AHRQ did translating the results of Mark A. Hlatky’s work on percutaneous coronary intervention versus coronary artery bypass grafting to patients and clinicians. Clinicians, Slutsky said, were “obsessed with mortality as an endpoint, but patients were obsessed with angina that was disabling and got quite angry when we tried to put the dissemination document in the context of mortality.”
She then listed six ideas that she gleaned from the workshop and that she thought were important for PCORI and other funding agencies such as hers, as well as for researchers. The first idea is that empirical testing is not infallible but is self-correcting. Put another way, few studies are robust enough to stand on their own. The workshop presentations were largely about individual studies and study designs, she stated, and the field needs to start thinking in terms of a body of evidence and the quality of the evidence that contributes to that body of evidence.
Slutsky’s next point was that many studies are designed without translational intelligence. “Sometimes I think clinical studies are not designed with how their application will be used in decision making,” she said. Next, she raised the issue that the field seldom refers to the existing body of evidence as a living, dynamic resource and instead advocates for what she referred to as “individual acronym-based studies.” As a result, the field often waits on the results of some exciting clinical trial and then tries to determine how the data fit into the existing literature. Instead, the goal should be to look at the whole body of evidence and think ahead of time about the type of data that would fit with and increase the usefulness of the existing data.
Slutsky’s fourth idea was that although the point was raised during the workshop that patients have numeracy problems, so does the medical profession. She said that she has often been appalled at the level of misunderstanding among physicians, even those just out of medical school, about such basic constructs as relative risk versus absolute risk and progression-free survival versus mortality. “We cannot assume that people understand the level at which a discussion like this takes place,” she stated. “If we are not the right people to do this translation or dissemination of what this means, we have to create a body of science to make sure that happens.” Along the same lines, her next idea was that the field does not place enough emphasis on the science of how best to communicate uncertainty in the evidence, both for populations and for individuals. Nor, she said, is the field good at incorporating uncertainty into robust tools for decision making.
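Slutsky’s relative-versus-absolute-risk distinction can be made concrete with a small calculation. The event rates below are hypothetical, chosen only for illustration and not drawn from any study discussed at the workshop; they show how a “50 percent relative risk reduction” can correspond to an absolute reduction of only 1 percentage point.

```python
# Hypothetical event rates illustrating relative vs. absolute risk.
control_risk = 0.02    # 2% of control patients experience the event
treated_risk = 0.01    # 1% of treated patients experience the event

# Relative risk reduction: proportional drop from the control rate.
relative_risk_reduction = (control_risk - treated_risk) / control_risk
# Absolute risk reduction: the difference in rates themselves.
absolute_risk_reduction = control_risk - treated_risk
# Number needed to treat to prevent one event.
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1.0%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100
```

Framed as a halving of risk, the treatment sounds dramatic; framed as one event prevented per 100 patients treated, it may read quite differently—which is the communication gap Slutsky describes.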
For her final idea, she agreed with Mulrow that the field must demand a level of transparency in the studies that are published, particularly in how observational studies and RCTs are evaluated in systematic reviews. AHRQ, she noted, is funding the creation of a review database of studies that have been evaluated for quality, but the same level of transparency regarding inputs, algorithms, protocols, and patient-reported outcomes is needed. Transparency, she said as a final comment, is essential when one is talking about decision modeling and when one is choosing the outcomes to be measured, as those outcomes must be the ones that are important to patients.
Steven N. Goodman, speaking from the perspective of a member of PCORI’s Methodology Committee, said that the workshop provided two important lessons that can affect how the Institute acts. The first lesson, he said, pertains to the fact that PCORI is unique in being a research-funding agency with its own legislatively mandated Methodology Committee, something that, he noted, is extraordinary. The lesson is that PCORI has huge opportunities to change its review process so that it can improve the studies that it funds in a way that furthers the many goals that have been mentioned at the workshop. Instead of being a passive recipient of proposals, PCORI can play an active role in changing the culture of clinical trial design.
For example, PCORI could mandate or strongly encourage that any RCT for which a request for funding is submitted include a parallel observational study that would, at a minimum, follow patients who would not agree to be in the RCT but would agree to be followed. It would represent a tremendous opportunity for the field to conduct such studies in a systematic way that would provide an ongoing opportunity to develop methods for examining the factors that make RCTs and observational studies equivalent or not.
Similarly, PCORI could mandate or strongly encourage that observational studies for which a request for funding is submitted collect the same information that is gathered in related RCTs. By pushing from both sides, he said, PCORI would create opportunities to look at the commonalities between methods and characterize biases in the observational sphere. In addition, PCORI could mandate that both types of studies include more active solicitation of patient-volunteered information through the use of patient portals into the electronic health record (EHR).
The second lesson that Goodman discussed concerned the development of methods. He said that PCORI has the opportunity to fund efforts to validate different methods for exploring the relationships among data
in high-quality databases. Although this type of validation work is rarely funded because it does not discover new relationships, it goes a long way, Goodman said, in giving the field confidence about the methods that it uses. He also noted the importance of development and validation of both adaptive trial designs and methods for sequential decision making in which agents are rerandomized at every key decision-making point.
As a final comment, Goodman said that the workshop clearly demonstrated the value of building a critical mass of methodologists who think deeply about the foundations of both methodology and clinical research and that he would like to see PCORI set standards for the expertise that needs to be assembled within the clinical research teams that it funds. Doing so would “institutionalize the richness of the cross talk that we saw here to make sure that the best wisdom of the best thinkers on the methodological side and on the clinical research and clinical side, as well as the patients, is brought to bear in everything that PCORI does.”
Nancy Santanello said that most large pharmaceutical companies do register their observational studies on clinicaltrials.gov, though she said that doing so was difficult. She noted, too, that the European Medicines Agency now requires all observational safety studies conducted in the European Union to register with the agency using a registry that provides a more user-friendly interface for observational studies. She, and then Richard Platt, seconded the idea that everyone doing observational studies should register their studies, with Platt suggesting that funding agencies and journals make registration a condition of funding or publication. Slutsky added that the American Recovery and Reinvestment Act of 2009 funded a patient registry to be integrated into clinicaltrials.gov, though linking of observational studies to the registry is voluntary.
Platt then commented that, in his view, the value of claims data combined with the ability to conduct a full review of the text records for selected patients is underappreciated. An advantage to the use of claims data, he said, is that the population is well-defined and coverage is reasonably complete over the defined time period. At the same time, he believed that the field may be overestimating the ease with which EHR data can be used. In a final remark, Platt suggested that the National Institutes of Health Collaboratory would be a good partner for PCORI if PCORI wants to follow the recommendations to take a more active role in trial design.
Sean Hennessy said that although data-mining approaches have good sensitivity when it comes to identifying adverse drug events, the real issue is specificity, and thus, good methods for evaluating specificity are needed. Following up on Slutsky’s observation that the results of one study are
rarely a sufficient basis to change clinical practice widely, he said that both AHRQ and PCORI are asking researchers to disseminate their findings into practice as soon as they have them. He voiced concern that PCORI could run into trouble over the implementation of results that should not be implemented yet. Slutsky replied that AHRQ has always emphasized considering the audience for dissemination. With the results of a single study, unless they are remarkably definitive, the audience should be other researchers and funding agencies and not the general public.
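Hennessy’s sensitivity-versus-specificity point can be sketched with a hypothetical confusion matrix. The counts below are invented purely for illustration: a data-mining screen can catch nearly every true adverse drug event (high sensitivity) while still generating many false alarms (low specificity), which is why he argues that good methods for evaluating specificity are needed.

```python
# Invented counts for a hypothetical adverse-event screening method.
true_positives  = 95    # real adverse events the screen flags
false_negatives = 5     # real adverse events the screen misses
false_positives = 400   # spurious signals the screen flags
true_negatives  = 600   # non-events correctly left unflagged

# Sensitivity: share of real events the screen detects.
sensitivity = true_positives / (true_positives + false_negatives)
# Specificity: share of non-events the screen correctly ignores.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.95
print(f"Specificity: {specificity:.2f}")  # 0.60
```

In this sketch the screen detects 95 percent of real events, yet 400 of the 495 signals it raises are false positives—good sensitivity does not by itself make the output actionable.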
Joe V. Selby added that PCORI does not ask its applicants even to plan for dissemination. What PCORI does do is ask its applicants to assess the potential for dissemination and to outline what they perceive to be opportunities, should the findings warrant dissemination. He noted that PCORI is working with AHRQ to develop a broad-based dissemination plan. Mulrow remarked that consideration should also be given to how evidence from observational studies should be synthesized before being disseminated.
Sheldon Greenfield asked if it would be possible to create a family of studies with the goal of having the combined findings of these studies in, say, 4 years provide clinical practice guideline makers and systematic reviewers with enough of the right kind of data to make some kind of statement. Goodman added that although it is important to keep a balance between allowing individual investigators to develop their own ideas and maintaining central control, it should be possible for PCORI to bring together investigators who work in the same field to standardize measures in their studies and perhaps provide an overview that would enable meta-analysis at the end of the studies. “PCORI, as well as NIH [National Institutes of Health], can take the initiative by convening conferences to help standardize these measures across bodies of research,” Goodman said in support of Greenfield’s suggestion. “I think PCORI and other funding agencies can do it without being too dictatorial and start establishing the infrastructure, both intellectual and substantive, for research in high-priority areas to go forward efficiently.”
Miguel A. Hernán commented on the importance of data quality and the selective reporting of multiple comparisons. He said that these are at least as important as confounding, and he encouraged funding agencies to pay attention to research on the biases caused both by poor data quality and by the selective reporting of multiple comparisons. He disagreed with the idea of registering observational studies because of the challenge of selective reporting of multiple comparisons and said that methods for quantifying the magnitude of the problem of selective reporting are needed. Santanello argued that registration is important because it forces investigators to write a protocol, ask a scientific question, and develop a plan to analyze the data.
Making the session’s final comment, Patrick Ryan noted that although data quality is important, the key point that Mulrow raised is that the analysis of the data being performed must be valid. “If job number one is generating the highest-quality evidence possible,” he said, “we need to figure out a framework for how to evaluate how much we can believe the result that is generated when a particular method is applied to a particular data source.”
In his closing remarks, Ralph I. Horwitz said that one conclusion that he drew from the workshop is that it is not enough to document when conflicts exist among randomized trials and among observational studies or between them, nor is it enough to document when conflicts exist around treatment heterogeneity. In his opinion, observational research would be advanced enormously if more attention was paid to understanding the sources of conflict in results both within observational studies and between observational studies and randomized trials. Understanding why studies give differing results will be key to helping improve them.
He also said that he hoped that the workshop would help expand the scope of information that contributes to both observational research and RCTs and that investigators would pay more attention to collecting data on the patient experience and using those data to better inform decision making to benefit the patient.
In his closing remarks, Selby summarized some of what he learned from written comments that workshop participants submitted during the lunch break. Many of the comments called for more complete data, higher-quality data, more observations, and more data per observation. They also highlighted the need for more granular data and different types of data, particularly data from the patient’s perspective. He acknowledged the suggestion that PCORI fund empirical studies on the impact of differences in the quality of data and on validation of the results of studies found in one setting with those of studies found in other settings.
Other comments noted the importance of conducting simple, clustered RCTs and of considering informed consent at both the patient and the institutional levels. The role of biology in disease and of biomarkers in treatment effectiveness was raised, as was the role of socioeconomic status.
As a final comment, Selby addressed the subject of the learning health care system and said, “there is not any doubt that if we want research that reflects real-world practice and if we want research that changes real-world practice, the research is going to have to be done in that real world.” That real world is composed of many types of health care systems, and it is going to take the active consent of those systems to host the activities that will