Introduction and Background
Randomized clinical trials (RCTs) currently occupy a central role in assessing the effectiveness of proposed interventions to prevent and treat disease. Clinical trials are sponsored by the pharmaceutical and device industries, by government organizations such as the National Institutes of Health (NIH), by academic organizations, and by private organizations. The design and conduct of clinical trials and the analysis of the resulting data are carried out and/or overseen by the trial sponsor. However, for new drugs and devices, oversight, approval, and ultimate decision authority, in the form of regulation, is the purview of the U.S. Food and Drug Administration (FDA). Currently, more than $7 billion (Drennan, 2003) is spent annually on clinical trials by U.S. pharmaceutical and device companies to evaluate the safety and effectiveness of new drugs, devices, and biologics. (Given the date of this estimate, it is reasonable to assume that the current total is higher.) An NIH panel estimated that clinical trials represented one-third of NIH’s expenditures for clinical research (see Nathan and Wilson, 2003).
At the request of FDA, the National Research Council convened the Panel on the Handling of Missing Data in Clinical Trials, under the Committee on National Statistics, to prepare “a report with recommendations that would be useful for FDA’s development of a guidance for clinical trials on appropriate study designs and follow-up methods to reduce missing data and appropriate statistical methods to address missing data for analysis of results.” The charge further specified:
[t]he panel will use as its main information-gathering resource a workshop that will include participation from multiple stakeholders, including clinical trialists, statistical researchers, appropriate experts from the National Institutes of Health and the pharmaceutical industry, regulators from FDA, and participants in the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH).
In both the workshop and report, the panel will strive to identify ways in which FDA guidance should be augmented to facilitate the cost-effective use of appropriate methods for missingness by the designers and implementers of clinical trials. Such guidance would usefully distinguish between types of clinical trials and missingness situations. For example, it could be useful to provide guidance on such questions as:
When missingness is likely to result in an appreciable bias such that sophisticated methods for reducing bias would be needed, and, conversely, under what circumstances simple methods such as case deletion could be an acceptable practice, and
How to use the leading techniques for variance estimation for each primary estimation method, along with suggestions for implementing these often complex techniques in software packages.
RANDOMIZATION AND MISSING DATA
A key feature of a randomized clinical trial is comparison with a control group, with assignment to either the control or the treatment group carried out by a random process. Randomization prevents intentional or unintentional bias from affecting the treatment assignment. It also (probabilistically) balances the control and treatment groups on known and, more importantly, unknown factors that could be associated with the response or outcome of interest, making the comparison between the treatment and control groups as fair as possible. Thus, randomization provides a basis for inference in assessing whether the observed average outcome for the treatment group is sufficiently different from that for the control group to assert that the measured difference is or is not due to random variation. That is, randomization permits generalizations about outcomes.
Unfortunately, this key advantage, derived from the random assignment of participants to treatment and control groups, is jeopardized when some of the outcome measurements are missing. By missing data we mean outcome values that are meaningful for analysis but were not collected. For example, a quality-of-life measure after death is not meaningful for analysis and should not be referred to as a missing outcome. Since whether or not data are missing can be related to the assigned treatment and to the
response, the absence of these data can bias the estimate of the treatment effect and weaken the resulting inference.
A common taxonomy for missing data, which is defined more rigorously in Chapter 4, distinguishes between missing data that are missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR):
In the case of MCAR, the missing data are unrelated to the study variables: thus, the participants with completely observed data are in effect a random sample of all the participants assigned a particular intervention. With MCAR, the random assignment of treatments is assumed to be preserved, but that is usually an unrealistically strong assumption in practice.
In the case of MAR, whether or not data are missing may depend on the values of the observed study variables. However, after conditioning on this information, whether or not data are missing does not depend on the values of the missing data.
In the case of MNAR, whether or not data are missing depends on the values of the missing data.
If MAR or MNAR holds, appropriate analysis methods must be used to reduce bias. It is important to note that simply increasing the number of participants does not reduce this bias.
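To make the taxonomy concrete, the following sketch (ours, not drawn from any trial; all data are simulated) illustrates how each mechanism affects a naive complete-case estimate of a mean outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical trial data: outcome y is correlated with a baseline covariate x.
x = rng.normal(size=n)                 # observed baseline covariate
y = 0.5 * x + rng.normal(size=n)       # outcome of interest

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Probability that y is MISSING under each mechanism:
p_mcar = np.full(n, 0.3)               # unrelated to any study variable
p_mar = sigmoid(x)                     # depends only on the observed x
p_mnar = sigmoid(y)                    # depends on the missing value itself

biases = {}
for label, p in [("MCAR", p_mcar), ("MAR", p_mar), ("MNAR", p_mnar)]:
    observed = rng.random(n) > p       # True where y is observed
    biases[label] = y[observed].mean() - y.mean()
    print(f"{label}: complete-case mean bias = {biases[label]:+.3f}")
```

Under MCAR the complete cases remain, in effect, a random sample, so the bias is negligible; under MAR the bias in the unadjusted mean can be removed by conditioning on the observed covariate; under MNAR it cannot be removed without assumptions about the missingness mechanism.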
There are a number of choices for trial outcomes, trial designs, and trial implementation that can substantially increase or decrease the frequency of missing data. Some of the aspects of clinical trials that can affect the amount of missing data include whether data collection continues for participants who discontinue study treatment, the use of outcomes that are at risk of being undefined for some patients, the rate of attrition, and the use of composite outcomes.
Missing Data Due to Discontinuation of Study Treatment It is common for some participants in a clinical trial to discontinue study treatment because of adverse events or lack of efficacy. (There may also be more than one reason for any specific outcome to be missing, for example, a combination of an adverse effect and a lack of efficacy.) Some trial protocols stipulate that data collection stop or be abbreviated following discontinuation of study treatment. For example, in some trials, data collection continues for only a short period (e.g., 14 days) following treatment discontinuation, on the assumption that adverse events after that point are unlikely to be attributable to the randomly assigned study treatment. Moreover, in some trials, participants who discontinue are offered an alternative treatment that is not part of the study. As a result, subsequent data collection may be considered uninformative for comparing the randomly assigned treatments. A positive outcome, such as symptom relief, recovery, or cure, may also lead to discontinuation of treatment.
Since treatment discontinuation often arises from changes in a participant’s health status, the data that are not collected after treatment discontinuation are likely to be informative about that change in health status. Another relevant factor is that in nonblinded studies, such as device trials, a participant who knows whether or not he or she is receiving the study treatment might be more or less likely to report adverse events or perceived efficacy, and such knowledge could be related to whether the participant discontinues treatment.
Use of Outcomes That Are at Risk of Being Undefined for Some Patients Some clinical trials use outcomes that may not be ascertainable for all participants. Examples include: (1) a quality-of-life assessment that cannot be obtained due to the death of the participant, (2) a measurement (e.g., a 6-minute walk test) from a procedure that some participants cannot complete because of their health status, and (3) assessment of renal progression for participants, some of whom undergo kidney transplant during the course of the study. Since all of these situations involve health status, it is likely that whether or not data are missing is related to changes in health status and hence that the missing data are MNAR.
Although it is important to define clinical endpoints that are measurable for as many participants as possible in order to reduce the impact of missing data, in doing so one must also consider the impact on the relevance of the primary endpoint. So, for example, forming composite outcomes to include events such as “discontinuation of treatment” or “exposure to rescue treatment,” while useful in reducing the frequency of missing data, may lessen the clinical relevance of the outcome of interest.
Missing Data Because of Attrition in the Course of the Study The longer the planned length of a clinical trial, the greater the chance that participants will drop out of the trial because they move out of the area or otherwise experience changes in their lives that preclude or complicate further participation. If dropping out for these reasons is known to be unrelated to changes in health status, an MAR assumption for the missing values seems justified; however, if dropping out is related to health status (e.g., a move to live with and be cared for by a parent or offspring), then the MAR assumption is not justified and the missing data are likely MNAR.
Missing Data in Composite Outcomes Outcomes that are composites of a variety of variables, such as health indices, or combined measures that address the multidimensional nature of the benefit from an intervention, may not be defined when any of the variables being combined is missing (although there are composite outcome measures for which this is not necessarily the case).
Missing Data Due to Death The treatment of death in the context of missing data is complicated. There are three kinds of approaches, each suited to a different situation. One approach is to consider cause-specific death as a primary endpoint (e.g., death related to a cardiovascular event). In this case, death from other causes (e.g., causes not related to the clinical study) may properly be treated as a censoring event. For example, death due to an auto accident could be considered a censoring event. Particular care must be taken in this situation to ensure that censoring due to death from other causes can be grouped together with general censoring patterns. It may very well be the case that censoring due to death from other causes is dependent on the primary endpoint itself, in which case the censoring would be a missing not at random process.
A second approach is to fold death into another outcome to form a composite outcome: for example, time to AIDS-defining illness or death.
The third situation, and the main complication for a clinical trial, arises when death is related to the outcome of interest, as with AIDS-related death in a study in which CD4 count is the primary outcome. In this case, the estimand must be carefully defined, possibly as the CD4 count among those who would remain alive on either treatment. This approach is related to principal stratification on a postrandomization event (see Frangakis and Rubin, 2002). Inverse probability weighting can also be used in this case. The key consideration is that the estimand must represent a causal contrast. A nontrivial complication in interpreting the estimand is that it applies to a subgroup that cannot necessarily be identified, namely, those who would have survived in either treatment group.
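As a simple illustration of the weighting idea behind inverse probability weighting (shown here in the more basic setting of MAR missingness with a known observation model, not the truncation-by-death problem above; all data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical data: outcome y depends on an observed covariate x;
# missingness is MAR given x.
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)         # true mean of y is 1.0

p_obs = 1.0 / (1.0 + np.exp(-(0.5 - x)))       # P(y observed | x), known here;
obs = rng.random(n) < p_obs                    # in practice it would be estimated,
                                               # e.g., by logistic regression

naive = y[obs].mean()                          # complete-case mean: biased
# Weight each complete case by 1/P(observed | x) (normalized, Hajek-style):
ipw = np.sum(y[obs] / p_obs[obs]) / np.sum(1.0 / p_obs[obs])

print(f"naive complete-case mean: {naive:.3f}")
print(f"IPW estimate: {ipw:.3f} (true mean = 1.000)")
```

The complete-case mean is pulled away from the true value because participants with larger x are more likely to be missing, while weighting the complete cases by the inverse of their observation probabilities restores an approximately unbiased estimate.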
Two general lines of attack have been employed to address the problem of missing values in clinical trials. The first is simply to design and carry out the clinical trial in a manner that limits the amount of missing data. As discussed in Chapters 2 and 3, there are a variety of techniques for doing this, and these techniques are not used as much as one would hope. One reason for this is that the designs for limiting missing data may involve tradeoffs against other considerations, such as generalizability or relevance of outcome measure. However, many of these techniques incur relatively minor costs. In any case, we believe that if the benefits of these methods were better appreciated and these methods were more widely implemented, the frequency of missing values could be substantially reduced in many clinical trials.
The second line of attack for the treatment of missing data is to apply analysis methods that exploit partial information in the observed data about the missing data to reduce the potential bias created by the missing
data. Many of the techniques currently used for this purpose are simplistic and rely on relatively extreme assumptions. Superior analysis techniques are often not applied, for several reasons. First, the expense of developing new interventions leads naturally to risk-averse behavior when drug or device developers are faced with the regulatory process. Second, FDA may at times prefer the use of older analysis methods that are better understood. Third, the need to prespecify analyses in the study protocol inhibits the use of complex analysis methods. Fourth, until recently, some of the newer techniques have lacked readily available and tested software. Finally, there seems to be a need for more training of biostatisticians both in industry and at FDA in the use of state-of-the-art missing data methods. The lack of experience with the new methods results in a lack of consensus about how and when these methods should be used in clinical trials.
Improvements in trial design, trial conduct, and the analysis of trial data in the presence of missing data are not adequately recognized in current U.S. and international guidelines for clinical trials. Although these official documents have provided some very useful guidance, overall they are too general and therefore fail to be sufficiently prescriptive: they lack detailed suggestions as to when and how specific methods can be implemented. In this report, we provide some guiding principles and specific methods for handling missing data in clinical trials. Our goal is to improve the quality of estimates of treatment effects and their associated estimates of uncertainty in randomized clinical trials.
THREE KINDS OF TRIALS AS CASE STUDIES
In this report, we use three types of trials to illustrate how clinical trial design and other aspects of trial conduct can be modified to limit the impact of missing data on regulatory decisions: trials for chronic pain, trials for the treatment of HIV, and trials of mechanical circulatory devices for severe symptomatic heart failure. These examples were chosen both because they are important in their own right and because they share many characteristics with a wide variety of other types of clinical trials. In this section, we describe the usual analytic approaches for these examples and their deficiencies.
Trials for Chronic Pain
Clinical trials are used to assess the ability of an intervention to provide symptomatic relief from conditions, such as osteoarthritis, that cause
chronic pain. These trials are typically conducted over 12 weeks, and they are subject to very high rates of treatment discontinuation. The reasons for treatment discontinuation usually differ between the treatment and the control groups. For example, in placebo-controlled trials, discontinuation in the placebo group often stems from inadequate efficacy (i.e., lack of pain relief), while discontinuation in the treatment group more often arises because of poor tolerability (of the medication being tested). Trial designs that involve fixed doses leave few treatment options for patients who experience inadequate efficacy or poor tolerability. Patients who stop study treatment usually switch to a proven (approved) effective therapy, and the trial sponsors typically stop collecting pain response data on those patients who discontinue study treatment.
In current practice, the data from these types of clinical trials have been analyzed by using (single-value) imputation to fill in the missing outcome values. In particular, it has been common to use the last observation carried forward (LOCF) technique to impute missing values. LOCF implicitly assumes that a participant who had good pain control in the short term and then dropped out would have had good pain control in the long term. This assumption seems questionable in many settings. Another frequently used, although somewhat less traditional, imputation technique is baseline observation carried forward (BOCF), which assumes that a participant’s pain control at the end of the trial is the same as that measured at the beginning. Since most patients in chronic pain studies, including those on placebo, improve substantially from baseline over time, BOCF is likely to underestimate the effectiveness of any treatment. Furthermore, the use of such imputation schemes, in conjunction with complete-data techniques, can result in estimated standard errors for treatment effects that fail to properly reflect the uncertainty due to missing data.
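The two imputation rules can be stated precisely in a few lines. The following sketch uses hypothetical weekly pain scores and is purely illustrative:

```python
import numpy as np
import pandas as pd

# Hypothetical weekly pain scores (0-10, lower is better) for four participants;
# NaN marks visits after treatment discontinuation.
scores = pd.DataFrame(
    {
        "baseline": [8.0, 7.0, 9.0, 6.0],
        "week4":    [5.0, 6.0, 4.0, 5.0],
        "week8":    [4.0, np.nan, 3.0, np.nan],
        "week12":   [3.0, np.nan, np.nan, np.nan],
    },
    index=["p1", "p2", "p3", "p4"],
)

# LOCF: carry the last observed value forward to the final (week-12) visit.
locf = scores.ffill(axis=1)["week12"]

# BOCF: replace a missing final value with the baseline value.
bocf = scores["week12"].fillna(scores["baseline"])

print("LOCF week-12 scores:", locf.tolist())   # [3.0, 6.0, 3.0, 5.0]
print("BOCF week-12 scores:", bocf.tolist())   # [3.0, 7.0, 9.0, 6.0]
```

Note how LOCF freezes each dropout at an intermediate, partially improved value, while BOCF erases all improvement for dropouts; neither rule reflects the uncertainty about what would actually have been observed.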
Trials for the Treatment of HIV
The goal of many HIV trials is to determine whether a new drug has safety and efficacy comparable with those of an approved drug used for initial antiretroviral treatment (ART). The studies involve samples of ART-naïve participants and use noninferiority designs (U.S. Food and Drug Administration, 2002). The focus for current purposes is on the primary efficacy outcome, which is the percentage of participants with a sufficiently low viral load at the end of the reference period. (Other considerations, such as choice of control, noninferiority margin, and blinding, are therefore ignored.) Since combination treatment is the norm for HIV, the typical design in this setting compares new drug A plus background treatment with current drug B plus the same background treatment, measured over a period of 24 or 48 weeks.
The primary efficacy outcome for these trials is typically based on plasma RNA measurements (see, e.g., U.S. Food and Drug Administration, 2002, Appendix B), with the primary outcome defined as the success rate among all participants randomized into the trial. Treatment failures are defined to include (1) study participants who die or switch away from the study drug before 48 weeks, (2) study participants who do not attend the 48-week visit, and (3) study participants who remain on the study drug but have an HIV RNA level equal to or greater than 50 copies/mL at 48 weeks. This definition can be viewed as a composite outcome in which failure may be due to treatment discontinuation, to missing data, or to not meeting the “success” level.
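The composite classification can be written down directly. In the following sketch, the record structure and field names are our own illustrative assumptions, not taken from the FDA guidance:

```python
# Hypothetical per-participant records; field names are illustrative only.
participants = [
    {"id": 1, "on_drug_at_48wk": True,  "attended_48wk": True,  "rna_copies_ml": 30},
    {"id": 2, "on_drug_at_48wk": True,  "attended_48wk": True,  "rna_copies_ml": 120},
    {"id": 3, "on_drug_at_48wk": False, "attended_48wk": True,  "rna_copies_ml": 25},
    {"id": 4, "on_drug_at_48wk": True,  "attended_48wk": False, "rna_copies_ml": None},
]

def is_failure(p):
    """Composite failure: discontinued the study drug, missed the
    48-week visit, or HIV RNA >= 50 copies/mL at 48 weeks."""
    if not p["on_drug_at_48wk"]:          # died or switched off the study drug
        return True
    if not p["attended_48wk"]:            # missing the 48-week assessment
        return True
    return p["rna_copies_ml"] >= 50       # on drug but not virologically suppressed

success_rate = sum(not is_failure(p) for p in participants) / len(participants)
print(f"composite success rate: {success_rate:.2f}")   # 0.25 here
```

Note that participant 3 is counted as a failure despite a suppressed viral load, and participant 4 despite an unknown one: the composite gives treatment discontinuation, missing data, and virologic failure equal weight.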
Many such trials have moderate to large numbers of patients who either discontinue treatment before 48 weeks or who do not attend the 48-week visit. Participants are typically not followed after discontinuing treatment, and there are probably various reasons for discontinuation.
One problem with the current approach is that an analysis of the percentage of participants with a viral load of less than 50 copies/mL at 48 weeks for all randomized participants, according to their initially assigned treatment, is not possible because data collection is discontinued following failure. With this approach, different reasons for “failure” (e.g., losses to follow-up and lack of efficacy due to virologic failure) are given equal weight, which may complicate the interpretation of the results. Furthermore, discontinuation of data collection after failure limits the analyses that can be performed on the separate components of the composite outcome. Moreover, it can result in failure to capture critical long-term effects of a discontinued study drug whose use may have increased the probability of resistance to alternative therapies.
Trials for Mechanical Circulatory Devices for Severe Symptomatic Heart Failure
For patients with advanced heart failure, implantable left ventricular assist devices (LVADs) have been shown to be effective when used as a bridge to heart transplantation. Furthermore, because many patients are not eligible candidates for transplantation, LVADs have been shown to be effective as destination therapy, and that use is increasing.
Over time, it has been possible to make the devices smaller and more durable. With the new devices, thrombogenic (tendency to clot) and infection risks are more easily managed, and morbidity and mortality have been reduced during the periods before, during, and immediately after the operation. Thus, there is interest in using these devices as destination therapy in patients with symptomatic heart failure who are severely impaired in spite of optimal medical therapy but who are less sick than the patients studied in earlier destination therapy trials. As a case study, we consider a trial with a superiority design in which the goal is to determine whether an LVAD is superior to optimal medical management for prolonging patients’ survival and optimizing their health status.
In many device trials, blinding of patients and investigators is not possible. A successful trial is one in which the LVAD substantially improves functional status without negatively affecting survival.
In such a clinical trial, survival status will be ascertained for nearly all patients. However, functional status during follow-up may be missing because of early death (for example, as a consequence of the implantation procedure), failure to attend examinations, or inability to perform functional tests such as a 6-minute walk. Also, because several LVADs are already approved for use in patients with more advanced disease, it is expected that some control patients will receive an implant during the course of the study as their disease progresses. In addition, some patients will receive heart transplants, and this, too, will complicate the interpretation of functional measures. Finally, some patients may have the LVAD removed for other reasons.
Some of the design and data analysis considerations for such a trial would also apply to a trial that compared two LVAD devices (e.g., an unapproved newer design and an approved one with an older design) as destination devices among patients ineligible for transplantation.
A key issue in the design is the definition of the major outcomes related to survival and health status. For destination trials among patients who are not eligible for transplantation, FDA has accepted a composite outcome of death or disabling stroke after 2 years as the outcome. In some trials, a second operation to replace the device is also included in the primary outcome.
Measures of functional status and a patient’s health-related quality of life have been key secondary outcomes in trials that studied patients who are ineligible for transplantation. In a target population that is not as sick as those eligible for transplantation, a measure of health status might be considered as a coprimary outcome because that outcome will be an important consideration in device approval and use. In addition, because the superiority of LVADs to medical therapy has been established in patients ineligible for transplantation, when the above criteria are met by patients in a control group that receives optimal medical management, use of an LVAD will have to be permitted. Thus, those criteria may also have to be considered as a component of a composite outcome.
In the LVAD trials done to date, a major problem has been missing health status data. In addition, obtaining objective assessment of health status (e.g., health-related quality of life, or being able to do a 6-minute walk) is complicated by the fact that it is not possible to have blinding in such trials. In these trials, missing data occur because of death either as a result of the implantation procedure or underlying disease (see discussion above), failure to attend examinations, inability to perform functional tests (e.g., a 6-minute walk), or “questionnaire fatigue” for self-administered quality-of-life instruments. Also, many health status measures include a collection of responses to multiple items that comprise different domains, and item nonresponse is also a problem (i.e., some, but not all items, on a quality-of-life instrument are missing). Analyses have typically used methods that assumed the data were missing at random, but this assumption is clearly not appropriate given the reasons for the missing data. To determine whether the degree of missing data has had a sufficient impact on the analysis to substantially affect the study findings, a sensitivity analysis is required. We discuss how to carry out a sensitivity analysis in this context in Chapter 5.
CLINICAL TRIALS IN A REGULATORY SETTING
This report focuses primarily on issues concerning the treatment of missing data in randomized controlled clinical trials that are intended to support regulatory applications for drugs, medical devices, and biologics. Several aspects of the regulatory setting have particular bearing on how missing data issues are handled. In particular:
Regulators generally must render yes or no decisions rather than just describing the data and possible interpretations.
Clinical trial sponsors, who must make substantial investment decisions in pursuit of regulatory approval, seek predictability regarding what findings would support a favorable decision; and regulators, eager to ensure common requirements across all sponsors and to enable quality development, also prefer to improve predictability where feasible.
Regulators generally require a high level of confidence before reaching a conclusion of safety and efficacy, preferring in close or ambiguous cases to be “conservative,” that is, to err on the side of withholding approval. This conservatism results, in part, from the fact that a regulatory approval may make further studies unlikely (due to lack of feasibility or funding).
In most cases, clinical trials in the regulatory process are focused on determining the effects of a specific product. Effects that occur after
switching to rescue therapy in patients who did not tolerate or respond well to the study therapy are sometimes disregarded because those effects may well not be attributable to the study therapy in question.
In the regulatory environment, a strong premium is placed on specification of analytic methods prior to a trial. Such specification serves not only to help preserve the type I error (i.e., the error of asserting that a treatment is more effective than the control when it is not), but also to improve the predictability of the decision process. Pretrial specification of the planned primary analyses is of particular importance and therefore receives the greatest attention. However, secondary and sensitivity analyses can also play a key role in the decision process, and they are certainly more valuable than post-hoc, exploratory analyses. Therefore, there is also a need to specify the secondary analyses prior to a trial and to specify in advance the approach for analyzing the sensitivity of the primary analysis to divergences from the statistical models used to accommodate missing data in that analysis (and the sensitivity to other divergences, such as outlying values).
We believe that the need for a dichotomous decision and the tendency for conservatism should create particularly strong incentives on the part of sponsors to minimize the quantity and effects of missing data and to use statistical models, when analyzing the resulting data, that are based on assumptions that are plausible and, when possible, validated. As there are many potential approaches to handling missing data, pretrial specification of an approach to be used in the primary analysis is particularly important to help ensure predictability. However, because the assumptions underlying any one approach to handling missing data may be invalid, prospective definition of sensitivity analyses with different underlying assumptions will help assess the robustness of the conclusions and help support effective decision making.
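One simple form of prespecifiable sensitivity analysis is a “delta adjustment,” which assumes that dropouts would have fared systematically worse than completers by an amount delta and traces how the estimate changes as delta varies. The following sketch is purely illustrative, with simulated numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical completer outcomes in one treatment arm (higher = better),
# with 30 of the 100 randomized participants missing the final assessment.
observed = rng.normal(loc=2.0, scale=1.0, size=70)
n_missing = 30
n_total = len(observed) + n_missing

# Delta adjustment: assume dropouts would have averaged `delta` units worse
# than completers, and recompute the arm mean under each assumption.
arm_means = {}
for delta in [0.0, 0.5, 1.0, 1.5]:
    imputed_mean = observed.mean() - delta
    arm_means[delta] = (observed.sum() + n_missing * imputed_mean) / n_total
    print(f"delta = {delta:3.1f}  ->  arm mean = {arm_means[delta]:.2f}")
```

Here delta = 0 reproduces the MAR-like complete-case assumption, and increasing delta shows how far the conclusion can move under progressively less favorable assumptions about the dropouts; a prespecified range of deltas makes the robustness assessment itself predictable.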
Obtaining regulatory approval for a therapy involves generating information on many aspects of its effects, often including, but not limited to, short-term effects, long-term effects, effects at various fixed doses, effects in various clinical settings, and effects with various concomitant therapies. In some cases, attempts to address many of these aspects in the same trial may lead to problems with missing data, particularly in assessing long-term effects. Such problems may be avoidable by designing each trial specifically to address fewer aspects, though this would raise the development costs.
Although the above considerations regarding missing data may be particularly applicable to trials in the regulatory setting, many are also relevant in other clinical trial settings. Therefore, we believe that most of the recommendations and discussion in this report are also applicable to trials outside the regulatory setting.
DOMESTIC AND INTERNATIONAL GUIDELINES ON MISSING DATA IN CLINICAL TRIALS
There have been several recent documents that lay out a set of general principles and techniques for addressing the problems raised by missing data in clinical trials. These documents include
Draft Guidance on Important Considerations for When Participation of Human Subjects in Research Is Discontinued, from the Office for Human Research Protections in the U.S. Department of Health and Human Services (2008).
Guidance for Sponsors, Clinical Investigators, and IRBs: Data Retention When Subjects Withdraw from FDA-Regulated Clinical Trials, from the U.S. Food and Drug Administration (2008).
Statistical Principles for Clinical Trials; Step 5: Note for Guidance on Statistical Principles for Clinical Trials, from the European Medicines Evaluation Agency (EMEA) International Conference on Harmonisation (ICH) (1998) Topic E9.
Guideline on Missing Data in Confirmatory Clinical Trials, Committee for Medicinal Products for Human Use (CHMP) from the European Medicines Evaluation Agency (2009).
The first three documents are currently in use; the fourth, Guideline on Missing Data in Confirmatory Clinical Trials, had been issued only in draft form at the time of this writing.
In this section, we summarize the main points in these documents. They agree on several points:
There is a need to anticipate the amount and nature of missing data during the design of the study and when making analysis plans. Careful planning will help specify a reasonable approach to handling missing data and will also help to specify a range of sensitivity analyses that could explore the impact of departures from the expected missing data pattern.
It is important to collect complete data for all randomized participants, including those who discontinue treatment. This point motivates the important distinction between continuation of treatment and continuation of follow-up for major outcomes.
The CONSORT (consolidated standards of reporting trials) guidelines for reporting the results of trials should be adhered to. Given that there will almost always be some missing data, a trial may still be regarded as providing valid results if the methods of dealing with missing values are sensible.
The use of various single imputation methods is criticized, including the LOCF method.
Given that no universally applicable methods of handling missing values can be recommended, an investigation should be made concerning the sensitivity of the results of analysis to the method of handling missing values, especially if the percentage of missing values is substantial.
The panel believes that the need for conservative methods receives too much emphasis in these guidelines. However, in general, this report can be seen as reinforcing and expanding on many of the suggestions and recommendations found in the four documents. We support and refine many of the basic principles proposed regarding the treatment of missing data in clinical trials, and we provide more detailed suggestions on specific techniques for avoiding missing data in the design and conduct of clinical trials and on appropriate analysis methods when there are missing data.
Recently, O’Neill (2009) stated that the issue of how best to handle missing data in clinical trials was a long-standing problem, especially in regulatory submissions for trials intended to support efficacy, safety, and marketing approval, and he called for the development of a consensus on appropriate methods. He added that more information was needed on why subjects withdraw from their assigned therapies and, when they do withdraw, on how much bias is introduced into the resulting estimates by the various methods. In addition, he pointed out that a key question is how to specify, in a trial protocol, the primary strategy for dealing with missing data when one has yet to observe the patterns of missing values. Finally, FDA’s critical path initiative has identified the issue of missing data as a priority topic.
REPORT SCOPE AND STRUCTURE
The panel believes it is important to provide a consensus on good practice in trial design, trial conduct, and the treatment of missing outcome values in the analysis of trial data. That is the goal of this report. More particularly, the focus of this report is the treatment of missing data in confirmatory randomized controlled trials of drugs, devices, and biologics, although, as noted above, we believe the material is also relevant for other types of clinical trials, including those carried out by academics and NIH-funded trials, and more generally for various biostatistical investigations. We make no further mention in this report of methods for the treatment of missing data for biologics because they raise no issues that are not already raised in drug trials.
While the main context for our report is randomized trials, regulatory agencies such as FDA also evaluate evidence from trials in which randomization of interventions is considered impractical, as for example in some trials of devices or surgical procedures. These trials do not possess the balancing property of randomization with respect to the distribution of observed or unmeasured covariates, and hence are subject to potential bias if there are important differences in these distributions across intervention groups.
The threat to validity from missing data is similar for nonrandomized and randomized trials—in fact the threat is potentially greater given the inability to mask the treatments—so the principles of missing data analysis described in this report apply in a similar fashion to nonrandomized trials. These include the need to design and conduct trials to minimize the amount of missing data, the need to use principled missing data adjustments based on scientifically plausible assumptions, the need to conduct sensitivity analyses for potential deviations from the primary assumed mechanisms of missing data, and the need to collect covariate information that is predictive of missingness and the study outcomes. The need for good covariate information is, if anything, even greater for nonrandomized trials, since this information can also be used to reduce differences in intervention groups arising from the nonrandomized allocation of interventions.
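One widely used form of the sensitivity analysis the panel calls for is a delta adjustment: imputed values for dropouts are shifted by a range of offsets (deltas), and the analyst examines how the treatment-group estimate responds. The sketch below is a hypothetical illustration of that idea only, not the report's prescribed method; the function name, the base imputation (the completers' mean), and all numbers are assumptions made for the example.

```python
# Hypothetical delta-adjustment sensitivity sketch. Each missing
# outcome is imputed as a base value plus an offset delta; delta = 0
# recovers the base imputation, and nonzero deltas probe departures
# from it (e.g., dropouts doing systematically worse or better).

def delta_adjusted_mean(observed, n_missing, imputed_value, delta):
    """Group mean when every missing outcome is imputed as
    imputed_value + delta."""
    total = sum(observed) + n_missing * (imputed_value + delta)
    return total / (len(observed) + n_missing)

observed = [4.0, 5.0, 6.0, 5.5]       # completers' outcomes (hypothetical)
base = sum(observed) / len(observed)  # base imputation: completers' mean

for delta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    m = delta_adjusted_mean(observed, n_missing=2,
                            imputed_value=base, delta=delta)
    print(f"delta={delta:+.1f} -> group mean={m:.2f}")
```

Tabulating the estimate over a grid of deltas (a "tipping-point" display) shows how large a departure from the assumed missing-data mechanism would be needed to change the trial's conclusion.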
This study included only four panel meetings, one of which was a workshop, and therefore cannot be comprehensive. The focus was on identifying principles that could be applied in a wide variety of settings. We recognize that there are a wide variety of types of clinical trials for a wide variety of health issues and that there will always be idiosyncratic situations that will require specialized techniques not directly covered here. Also, it is important to point out that we focus on the assessment of various forms of intervention efficacy: this report does not do any more than touch on the assessment of the safety of medical interventions.
The next two chapters provide details and recommendations on trial designs and trial conduct that are useful for reducing the frequency of missing data. Chapters 4 and 5 describe methods of analysis for data from clinical trials in which some of the values for the outcome or outcomes of interest are missing: Chapter 4 considers drawing inferences when there are missing data, and Chapter 5 considers sensitivity analyses. The final chapter presents the panel’s recommendations.