The Prevention and Treatment of Missing Data in Clinical Trials

Panel on Handling Missing Data in Clinical Trials

Committee on National Statistics

Division of Behavioral and Social Sciences and Education

NATIONAL RESEARCH COUNCIL
OF THE NATIONAL ACADEMIES

THE NATIONAL ACADEMIES PRESS

Washington, D.C.
www.nap.edu








THE NATIONAL ACADEMIES PRESS   500 Fifth Street, N.W.   Washington, DC 20001

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance.

This study was supported by contract number HHSF223200810020I, TO #1 between the National Academy of Sciences and the U.S. Food and Drug Administration. Support for the work of the Committee on National Statistics is provided by a consortium of federal agencies through a grant from the National Science Foundation (award number SES-0453930). Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the organizations or agencies that provided support for this project.

International Standard Book Number-13: 978-0-309-15814-5
International Standard Book Number-10: 0-309-15814-1

Additional copies of this report are available from the National Academies Press, 500 Fifth Street, N.W., Lockbox 285, Washington, DC 20055; (800) 624-6242 or (202) 334-3313 (in the Washington metropolitan area); Internet, http://www.nap.edu.

Copyright 2010 by the National Academy of Sciences. All rights reserved.

Printed in the United States of America

Suggested citation: National Research Council. (2010). The Prevention and Treatment of Missing Data in Clinical Trials. Panel on Handling Missing Data in Clinical Trials. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Ralph J. Cicerone is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Charles M. Vest is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Harvey V. Fineberg is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy’s purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Ralph J. Cicerone and Dr. Charles M. Vest are chair and vice chair, respectively, of the National Research Council.

www.national-academies.org

PANEL ON HANDLING MISSING DATA IN CLINICAL TRIALS

RODERICK J.A. LITTLE (Chair), Department of Biostatistics, University of Michigan, Ann Arbor
RALPH D’AGOSTINO, Department of Mathematics and Statistics, Boston University
KAY DICKERSIN, Department of Epidemiology, Johns Hopkins University
SCOTT S. EMERSON, Department of Biostatistics, University of Washington, Seattle
JOHN T. FARRAR, Department of Biostatistics and Epidemiology, University of Pennsylvania School of Medicine
CONSTANTINE FRANGAKIS, Department of Biostatistics, Johns Hopkins University
JOSEPH W. HOGAN, Center for Statistical Sciences, Program in Public Health, Brown University
GEERT MOLENBERGHS, International Institute for Biostatistics and Statistical Bioinformatics, Universiteit Hasselt and Katholieke Universiteit Leuven, Belgium
SUSAN A. MURPHY, Department of Statistics, University of Michigan, Ann Arbor
JAMES D. NEATON, School of Public Health, University of Minnesota
ANDREA ROTNITZKY, Departamento de Economia, Universidad Torcuato Di Tella, Buenos Aires, Argentina
DANIEL SCHARFSTEIN, Department of Biostatistics, Johns Hopkins University
WEICHUNG (JOE) SHIH, Department of Biostatistics, University of Medicine and Dentistry of New Jersey School of Public Health
JAY P. SIEGEL, Johnson & Johnson, Radnor, Pennsylvania
HAL STERN, Department of Statistics, University of California, Irvine

MICHAEL L. COHEN, Study Director
AGNES GASKIN, Administrative Assistant

COMMITTEE ON NATIONAL STATISTICS 2009-2010

WILLIAM F. EDDY (Chair), Department of Statistics, Carnegie Mellon University
KATHARINE G. ABRAHAM, Department of Economics and Joint Program in Survey Methodology, University of Maryland
ALICIA CARRIQUIRY, Department of Statistics, Iowa State University
WILLIAM DuMOUCHEL, Phase Forward, Inc., Waltham, Massachusetts
JOHN HALTIWANGER, Department of Economics, University of Maryland
V. JOSEPH HOTZ, Department of Economics, Duke University
KAREN KAFADAR, Department of Statistics, Indiana University
SALLIE KELLER, George R. Brown School of Engineering, Rice University
LISA LYNCH, Heller School for Social Policy and Management, Brandeis University
DOUGLAS MASSEY, Department of Sociology, Princeton University
SALLY C. MORTON, Biostatistics Department, University of Pittsburgh
JOSEPH NEWHOUSE, Division of Health Policy Research and Education, Harvard University
SAMUEL H. PRESTON, Population Studies Center, University of Pennsylvania
HAL STERN, Department of Statistics, University of California, Irvine
ROGER TOURANGEAU, Joint Program in Survey Methodology, University of Maryland, and Survey Research Center, University of Michigan
ALAN ZASLAVSKY, Department of Health Care Policy, Harvard Medical School

CONSTANCE F. CITRO, Director

Acknowledgments

I would like to express appreciation to the following individuals who provided valuable assistance in producing this report. Particular thanks to Robert O’Neill and Tom Permutt at the U.S. Food and Drug Administration (FDA) for initiating the project, providing excellent presentations at the first meeting of the panel, and continuing support in providing timely information. We also thank Frances Gipson, FDA’s technical representative, who assisted greatly in arranging the panel’s first meeting at FDA and acquiring FDA documents throughout the study.

The following FDA staff members presented invaluable information to the panel at its first meeting: Sharon Hertz, Henry Hsu, Robert O’Neill, Tom Permutt, Bruce Schneider, Norman Stockbridge, Robert Temple, Steve Winitsky, Lilly Yue, and Bram Zuckerman. At the panel’s workshop on September 9, 2009, we benefited very much from the presentations of the following knowledgeable experts: Abdel Babiker, Don Berry, James Carpenter, Christy Chuang-Stein, Susan Ellenberg, Thomas Fleming, Dean Follmann, Joseph Ibrahim, John Lachin, Andrew Leon, Craig Mallinckrodt, Devan Mehrotra, Jerry Menikoff, David Ohlssen, and Edward Vonesh.

I am particularly indebted to the members of the Panel on Handling Missing Data in Clinical Trials. They worked extremely hard and were always open to other perspectives on the complicated questions posed by missing data in clinical trials. It was a real pleasure collaborating with all of them on this project.

I also thank the staff, especially our study director, Michael L. Cohen, who converted the musings of the panel into intelligible prose, arbitrated differences in opinion with good humor, and worked very hard on writing

and improving the report. I also thank Agnes Gaskin, who performed her usual exemplary service on all administrative matters. Eugenia Grohman provided extremely useful advice on presenting the material in this report, along with careful technical editing.

This report has been reviewed in draft form by individuals chosen for their diverse perspectives and technical expertise, in accordance with procedures approved by the Report Review Committee of the National Research Council (NRC). The purpose of this independent review is to provide candid and critical comments that will assist the institution in making its published report as sound as possible and to ensure that the report meets institutional standards for objectivity, evidence, and responsiveness to the study charge. The review comments and draft manuscript remain confidential to protect the integrity of the deliberative process.

We wish to thank the following individuals for their review of this report: Christy J. Chuang-Stein, Statistical Research and Consulting Center, Pfizer, Inc.; Shein-Chung Chow, Biostatistics and Bioinformatics, Duke University School of Medicine; Susan S. Ellenberg, Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania School of Medicine; Thomas Fleming, Department of Biostatistics, School of Public Health and Community Medicine, University of Washington; Yulei He, Department of Health Care Policy, Harvard Medical School; Robin Henderson, School of Mathematics and Statistics, University of Newcastle; Devan V. Mehrotra, Clinical Biostatistics, Merck Research Laboratories; Donald B. Rubin, Department of Statistics, Harvard University; and Steve Snapinn, Global Biostatistics and Epidemiology, Amgen, Inc.

Although the reviewers listed above have provided many constructive comments and suggestions, they were not asked to endorse the conclusions or recommendations nor did they see the final draft of the report before its release.
The review of this report was overseen by Gilbert S. Omenn, Center for Computational Medicine and Biology, University of Michigan Medical School, and Joel B. Greenhouse, Department of Statistics, Carnegie Mellon University. Appointed by the NRC’s Report Review Committee, they were responsible for making certain that an independent examination of this report was carried out in accordance with institutional procedures and that all review comments were carefully considered. Responsibility for the final content of this report rests entirely with the authoring panel and the institution.

Finally, the panel recognizes the many federal agencies that support the Committee on National Statistics directly and through a grant from the National Science Foundation. Without their support and their commitment to improving the national statistical system, the work that is the basis of this report would not have been possible.

Roderick J.A. Little, Chair
Panel on Handling Missing Data in Clinical Trials

Contents

GLOSSARY xiii

SUMMARY 1

1 INTRODUCTION AND BACKGROUND 7
   Randomization and Missing Data, 8
   Three Kinds of Trials as Case Studies, 12
      Trials for Chronic Pain, 12
      Trials for the Treatment of HIV, 13
      Trials for Mechanical Circulatory Devices for Severe Symptomatic Heart Failure, 14
   Clinical Trials in a Regulatory Setting, 16
   Domestic and International Guidelines on Missing Data in Clinical Trials, 18
   Report Scope and Structure, 19

2 TRIAL DESIGNS TO REDUCE THE FREQUENCY OF MISSING DATA 21
   Trial Outcomes and Estimands, 22
   Minimizing Dropouts in Trial Design, 27
   Continuing Data Collection for Dropouts, 30
   Reflecting Loss of Power from Missing Data, 31
   Design Issues in the Case Studies, 32
      Trials for Chronic Pain, 32
      Trials for Treatment of HIV, 34

      Trials for Mechanical Circulatory Devices for Severe Symptomatic Heart Failure, 36

3 TRIAL STRATEGIES TO REDUCE THE FREQUENCY OF MISSING DATA 39
   Reasons for Dropouts, 39
   Actions for Design and Management Teams, 40
   Actions for Investigators and Site Personnel, 41
   Targets for Acceptable Rates of Missing Data, 43

4 DRAWING INFERENCES FROM INCOMPLETE DATA 47
   Principles, 48
   Notation, 49
   Assumptions About Missing Data and Missing Data Mechanisms, 50
      Missing Data Patterns and Missing Data Mechanisms, 50
      Missing Completely at Random, 51
      Missing at Random, 51
      MAR for Monotone Missing Data Patterns, 52
      Missing Not at Random, 53
      Example: Hypertension Trial with Planned and Unplanned Missing Data, 54
      Summary, 54
   Commonly Used Analytic Methods Under MAR, 54
      Deletion of Cases with Missing Data, 55
      Inverse Probability Weighting, 56
      Likelihood Methods, 59
      Imputation-Based Approaches, 65
      Event Time Analyses, 70
   Analytic Methods Under MNAR, 70
      Definitions: Full Data, Full Response Data, and Observed Data, 71
      Selection Models, 72
      Pattern Mixture Models, 73
      Advantages and Disadvantages of Selection and Pattern Mixture Models, 74
      Recommendations, 76
   Instrumental Variable Methods for Estimating Treatment Effects Among Compliers, 78
   Missing Data in Auxiliary Variables, 81

5 PRINCIPLES AND METHODS OF SENSITIVITY ANALYSES 83
   Background, 83
   Framework, 85

   Example: Single Outcome, No Auxiliary Data, 86
      Pattern Mixture Model Approach, 88
      Selection Model Approach, 89
   Example: Single Outcome with Auxiliary Data, 91
      Pattern Mixture Model Approach, 91
      Selection Model Approach, 94
   Example: General Repeated Measures Setting, 96
      Monotone Missing Data, 98
      Nonmonotone Missing Data, 103
   Comparing Pattern Mixture and Selection Approaches, 103
   Time-to-Event Data, 104
   Decision Making, 105
   Recommendation, 106

6 CONCLUSIONS AND RECOMMENDATIONS 107
   Trial Objectives, 108
   Reducing Dropouts Through Trial Design, 108
   Reducing Dropouts Through Trial Conduct, 109
   Treating Missing Data, 110
   Understanding the Causes and Degree of Dropouts in Clinical Trials, 111

REFERENCES 115

APPENDIXES
A Clinical Trials: Overview and Terminology 123
B Biographical Sketches of Panel Members and Staff 139

Glossary

Active Control: In situations where the experimental therapy is to be an alternative to some existing standard of care, ethical or logistical constraints may dictate that the experimental therapy be tested against that “active” therapy that has previously shown evidence in an adequate and well-controlled clinical trial as an effective therapy. The ideal would be that patients would be randomized in a double-blind fashion to either the experimental therapy or the active control, though the logistical difficulties of producing placebos for each treatment sometimes preclude a double-blind study structure.

Contrasted with Placebo Control: In situations where the experimental therapy is to be added to some existing standard of care, it is best to randomize subjects in a double-blind fashion to either the experimental therapy or a placebo control that is similar in appearance.

Common Analysis Estimands:

Per Protocol: In a per-protocol analysis, the analysis may be restricted to participants who had some minimum exposure to the study treatments, who met inclusion/exclusion criteria, and for whom there were no major protocol violations. The specific reasons for excluding randomized participants from a per-protocol analysis should be specified in advance of unblinding the data.

Intention to Treat: In an intention-to-treat analysis, all participants who satisfy the inclusion criteria are analyzed as belonging to the treatment arms to which they were randomized, regardless of whether they received or adhered to the allocated intervention for the full duration of the trial.
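The contrast between the intention-to-treat and per-protocol estimands defined above can be made concrete with a small sketch. The data frame, column names, and outcome values below are entirely hypothetical and serve only to show how the two analysis sets differ.

```python
# Illustrative (hypothetical) contrast of intention-to-treat vs. per-protocol
# analysis sets. Column names and values are invented for this sketch.
import pandas as pd

trial = pd.DataFrame({
    "subject": [1, 2, 3, 4, 5, 6],
    "arm_assigned": ["drug", "placebo", "drug", "placebo", "drug", "placebo"],
    "major_violation": [False, False, True, False, False, True],
    "outcome": [1.2, 0.4, 0.9, 0.5, 1.5, 0.3],
})

# Intention to treat: every randomized participant, grouped by the arm
# to which they were randomized.
itt = trial.groupby("arm_assigned")["outcome"].mean()

# Per protocol: restricted to participants without major protocol
# violations; the exclusion rules must be pre-specified before unblinding.
pp = trial[~trial["major_violation"]].groupby("arm_assigned")["outcome"].mean()

print(itt)
print(pp)
```

Note how the per-protocol means are computed on a subset of the randomized participants, which is exactly why the report stresses pre-specifying the exclusion rules: post hoc exclusions can bias the comparison.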

As Treated: In an as-treated analysis, the participants are grouped according to the treatment regimen that they received, which is not necessarily the treatment to which they were initially assigned.

Complier-Averaged Causal Effect (CACE): A parameter used to estimate the average effect of the treatment in the subpopulation of individuals that could remain on study or control treatments for the full length of the study.

Dropout: Treatment dropout is the result of a participant in a clinical trial discontinuing treatment; analysis dropout is the result of the failure to measure the outcome of interest for a trial participant.

Enrichment: Treatments are often only tolerated by, or are only efficacious for, a subset of the population. To avoid problems associated with treatment discontinuation, and to test a treatment on the subpopulation that can most benefit from it, it can be advantageous to determine whether a potential trial participant is a member of the subpopulation that can either tolerate or benefit from a treatment. This pretesting and selection of participants for trial participation prior to randomization into the treatment and control arms is called enrichment, and can include (1) selecting people with potentially responsive disease, (2) selecting people likely to have an event whose occurrence is the outcome of interest, (3) selecting people likely to adhere to the study protocol, and (4) selecting people who show an early response to the test drug.

Last Observation Carried Forward (LOCF): A single imputation technique that imputes the last measured outcome value for participants who either drop out of a clinical trial or for whom the final outcome measurement is missing.

Baseline Observation Carried Forward (BOCF): A single imputation technique that imputes the baseline outcome value for participants who either drop out of a clinical trial or for whom the final outcome measurement is missing.

Noninferiority vs. Superiority Trials: A noninferiority clinical trial compares the experimental therapy to some active control with the aim of establishing that the experimental therapy is not unacceptably worse than an active control that showed evidence as an effective treatment in previously conducted adequate and well-controlled clinical trials. A noninferiority trial is often conducted in a setting in which (1) the experimental therapy, if approved, would be used in place of some existing treatment that was previously found to show evidence of effect, (2) it is not ethical or feasible to conduct a placebo-controlled trial, (3) it would be clinically appropriate to approve a new treatment that is only approximately equivalent to a current standard therapy with respect to some primary clinical outcome, and (4) the new experimental therapy might have other advantages, such as a better adverse event profile, ease of administration, etc. Rather than rejecting a null hypothesis of equality between the experimental therapy and control treatment, a noninferiority clinical trial is designed to reject a null hypothesis that the experimental therapy is some specified amount (“the noninferiority margin”) worse than the active control. Selection of the noninferiority margin must consider such issues as the magnitude of effect estimated for the active control in prior clinical trials, any bias that might be present in those previous trials relative to the effect of the active control in the population and setting used in the noninferiority trials, the proportion of effect that must be preserved for any approved treatment, etc. A superiority clinical trial is one in which an experimental therapy would be approved only if that therapy showed statistically credible evidence of superiority over a clinically relevant control therapy in an adequate and well-controlled clinical trial. The superiority trial is designed to reject a null hypothesis of equality between the experimental and control therapies.

Randomized Withdrawal: A clinical trial design in which all participants are initially provided the study treatment. Then, participants who have a positive response to the study treatment are randomly selected either to remain on the study treatment or to be switched to a placebo. A positive indication is when those who continue on study treatment are observed to have better outcomes than those who are switched to the placebo.
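The two single-imputation techniques defined above, LOCF and BOCF, can be sketched on a toy longitudinal dataset. The visit labels and outcome values below are invented for illustration; the report itself takes no position here on whether these imputations are advisable.

```python
# Hypothetical sketch of LOCF and BOCF single imputation.
# Rows = participants, columns = scheduled visits; NaN marks values
# missing after a participant dropped out.
import numpy as np
import pandas as pd

visits = pd.DataFrame(
    [[10.0, 8.0, 7.0, 6.0],            # completer
     [12.0, 9.0, np.nan, np.nan],      # dropped out after visit 1
     [11.0, np.nan, np.nan, np.nan]],  # dropped out after baseline
    columns=["baseline", "v1", "v2", "v3"],
)

# LOCF: carry the last observed value forward along each participant's row.
locf = visits.ffill(axis=1)

# BOCF: fill each participant's missing values with that participant's
# baseline value.
bocf = visits.apply(lambda row: row.fillna(row["baseline"]), axis=1)

print(locf["v3"].tolist())  # final visit under LOCF: [6.0, 9.0, 11.0]
print(bocf["v3"].tolist())  # final visit under BOCF: [6.0, 12.0, 11.0]
```

The second participant illustrates the difference: LOCF uses the last on-study value (9.0), while BOCF reverts to the baseline value (12.0).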
Run-In Design: Similar to an enrichment design, a run-in design incorporates an initial period in which a subset of the participants are selected given indications as to their likelihood of compliance or the magnitude of their placebo effect. The key difference between a run-in design and an enrichment design is that the active treatment is not used to identify the subset of participants for study.

Titration: In contrast to a fixed-dose protocol, titration is the adjustment of dosage to increase the treatment benefit and tolerability for participants during the course of a clinical trial.

Washout: (Placebo) washout is a period of time without active treatment that is scheduled before the beginning of use of study treatment, often used to eliminate any residual effects that might remain after a previous period on active treatment.
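The noninferiority decision rule described in the glossary entry above reduces to a confidence-bound comparison against the pre-specified margin. The sketch below uses made-up numbers and a normal-approximation confidence bound; it is an illustration of the logic, not a recommended analysis.

```python
# Hedged sketch of the noninferiority decision rule: reject the null that
# the experimental therapy is worse than the active control by more than a
# pre-specified margin. All numbers here are hypothetical.

def noninferior(diff_hat, se, margin, z_crit=1.96):
    """Declare noninferiority if the lower confidence bound for
    (experimental - control) lies above -margin."""
    lower = diff_hat - z_crit * se
    return lower > -margin

# Estimated difference -0.5 with SE 0.4 against a margin of 2.0:
# lower bound = -0.5 - 1.96 * 0.4 = -1.284 > -2.0, so noninferiority holds.
print(noninferior(-0.5, 0.4, 2.0))  # True

# A larger deficit of -1.5 gives lower bound -2.284 < -2.0: not noninferior.
print(noninferior(-1.5, 0.4, 2.0))  # False
```

Contrast this with a superiority test, which instead asks whether the lower confidence bound exceeds zero; the margin is what distinguishes the two null hypotheses.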
