
Integrating Clinical Research into Epidemic Response: The Ebola Experience (2017)

Chapter: Appendix B: Clinical Trial Designs

Suggested Citation:"Appendix B: Clinical Trial Designs." National Academies of Sciences, Engineering, and Medicine. 2017. Integrating Clinical Research into Epidemic Response: The Ebola Experience. Washington, DC: The National Academies Press. doi: 10.17226/24739.

Appendix B

Clinical Trial Designs

TABLE B-1 Brief Summary of Some Advantages and Disadvantages of Various Clinical Trial Designs

Traditional RCT

(Evans, 2010; Glasziou et al., 2007; Suresh, 2011)

Structure:
  • A group of subjects with the target disease is identified and randomized to two or more treatments (e.g., active treatment versus placebo).
  • Each randomized participant receives only one treatment (or treatment strategy) for the duration of the trial.
  • Participants are then followed over time, and responses are compared between groups.

Advantages:
  • Allows valid treatment-group comparisons.
  • Provides an unbiased, consistent estimate of the treatment effect.

Disadvantages:
  • Can require large sample sizes because both within- and between-subject variation are present.
  • Sample sizes can also be large when the effect size to be detected is small.
  • Can be expensive and lengthy.
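The sample-size point above can be made concrete with the standard two-sample normal approximation for a parallel-group RCT (a sketch only; the values of sigma and delta below are illustrative, not from the report):

```python
# Sketch: per-arm sample size for a two-arm parallel RCT with a
# continuous endpoint, via the usual normal approximation:
#     n per arm ~= 2 * (z_{alpha/2} + z_beta)^2 * sigma^2 / delta^2
# Halving the detectable effect delta quadruples the required n.
import math

def n_per_arm(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate per-arm n for 80% power at two-sided alpha = 0.05."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Illustrative standardized effects (sigma = 1):
print(n_per_arm(delta=0.5, sigma=1.0))   # 63 per arm
print(n_per_arm(delta=0.25, sigma=1.0))  # 251 per arm
```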
Cluster Randomized Trials

(Campbell et al., 2004; Donner and Klar, 2004; Edwards et al., 1999)

Structure:
  • Intact groups of individuals are randomized to receive different interventions.

Advantages:
  • Allows study of interventions that cannot be directed toward selected individuals.
  • Avoids treatment-group contamination.
  • Enhances subject compliance.

Disadvantages:
  • More complex to design.
  • Requires more participants to obtain equivalent statistical power.
  • Requires more complex analysis.
  • Observations on individuals in the same cluster tend to be correlated (nonindependent), so the effective sample size is less than the total number of individual participants.
  • After randomization, individuals in the clusters may be approached for consent, which raises the possibility of post-randomization selection bias, or they may not be, which raises ethical concerns.
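The correlation penalty described above is conventionally quantified by the "design effect," 1 + (m − 1) × ICC, for clusters of size m with intracluster correlation ICC. A minimal sketch (the cluster size and ICC below are illustrative, not from the report):

```python
# Sketch: effective sample size in a cluster randomized trial.
# Correlated observations within a cluster carry less information than
# the same number of independent observations; the standard variance
# inflation factor is the design effect DE = 1 + (m - 1) * rho.

def design_effect(m: int, rho: float) -> float:
    """Variance inflation for clusters of size m with ICC rho."""
    return 1 + (m - 1) * rho

def effective_sample_size(n_total: int, m: int, rho: float) -> float:
    """Independent-observation equivalent of n_total clustered ones."""
    return n_total / design_effect(m, rho)

# 1,000 participants in clusters of 50 with a modest ICC of 0.05:
print(design_effect(50, 0.05))                      # 3.45
print(round(effective_sample_size(1000, 50, 0.05)))  # ~290
```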
Stepped Wedge

(Brown and Lilford, 2006; Hughes, 2007)

Structure:
  • Sequential roll-out of an intervention to participants (individuals or clusters) over a number of time periods.
  • The order in which individuals or clusters receive the intervention is determined at random; by the end of the random allocation, all individuals or groups will have received the intervention.
  • Data are collected at each point where a new group (step) receives the intervention.

Advantages:
  • Particularly useful when it is not feasible to provide the intervention to everyone or every community at once.
  • Well suited to evaluating the effectiveness of interventions that have been shown to be efficacious in a more limited research setting and are now being scaled up to the community level.
  • Also useful for evaluating temporal changes in the intervention effect.
  • Two key (nonexclusive) situations in which a stepped-wedge design is considered advantageous:
    1. If there is a prior belief that the intervention will do more good than harm, rather than a prior belief of equipoise, it may be unethical to withhold the intervention from a proportion of the participants or to withdraw it, as would occur in a crossover design.
    2. Logistical, practical, or financial constraints may mean the intervention can only be implemented in stages.

Disadvantages:
  • Likely to lead to a longer trial duration than a traditional parallel design, particularly when effectiveness is measured immediately after implementation.
  • Imposes practical implementation challenges, such as preventing contamination between intervention participants and those awaiting the intervention, and ensuring that outcome assessors are blinded to participants' intervention or control status in order to guard against information bias.
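The randomized roll-out described above can be sketched as a schedule generator (a toy illustration; the cluster names and one-cluster-per-step structure are illustrative assumptions, not from the report):

```python
# Sketch: a stepped-wedge allocation schedule. Clusters cross from
# control (0) to intervention (1) in randomized order, one per period,
# until every cluster has received the intervention.
import random

def stepped_wedge_schedule(clusters, seed=0):
    """Return {cluster: list of 0/1 per period}; period 0 is all-control."""
    rng = random.Random(seed)
    order = clusters[:]            # randomize the order of crossover
    rng.shuffle(order)
    n_periods = len(clusters) + 1  # baseline period + one step per cluster
    schedule = {}
    for step, cluster in enumerate(order, start=1):
        # control before its step, intervention from its step onward
        schedule[cluster] = [0] * step + [1] * (n_periods - step)
    return schedule

sched = stepped_wedge_schedule(["A", "B", "C"])
# Every cluster starts in control and ends in intervention:
assert all(row[0] == 0 and row[-1] == 1 for row in sched.values())
```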
Multiarm, Multistage Trial with a Common Control

(Jaki, 2015; Wason et al., 2016)

Structure:
  • Several experimental treatments are tested simultaneously against a common control.
  • Interim analyses are used to decide which treatments should continue.

Advantages:
  • Advantages over running separate controlled trials for each experimental treatment are
    1. A shared control group can be used instead of a separate control group for each treatment;
    2. A direct head-to-head comparison of treatments is conducted, minimizing the biases that can be introduced by comparing treatments tested in separate trials;
    3. Interim analyses allow ineffective treatments to be dropped early, or the trial to be stopped early if one treatment is clearly superior (although this advantage also applies to separate trials of each treatment through the use of group-sequential designs).

Disadvantages:
  • Different trials comparing a single treatment against control are often initiated and conducted by different centers. As a result, they have different inclusion and exclusion criteria and may use different primary and secondary endpoints and possibly different comparator treatments. All of these must be standardized for a multiarm trial, which requires negotiation and compromise among investigators.
  • Care is needed to ensure that no bias is introduced in multicenter multiarm studies through imbalances in treatment allocation across centers or regions. It is therefore paramount that randomization to all arms (including the control arm) be stratified by center or region to minimize the risk of bias.
  • Standard analysis methods applied to a treatment selected at an interim analysis yield an overly optimistic (upward-biased) estimate of its effect. Specialized methods that produce unbiased estimators, or that reduce the bias, are therefore necessary.
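The center-stratified randomization recommended above can be sketched with permuted blocks within each center (a minimal illustration; the arm names, block size of one-slot-per-arm, and site labels are assumptions, not from the report):

```python
# Sketch: stratified permuted-block randomization for a multiarm trial.
# Within each center (stratum), arms -- including the shared control --
# are allocated from shuffled blocks, so allocations stay balanced
# across arms at every center.
import random

def make_allocator(arms, seed=0):
    rng = random.Random(seed)
    blocks = {}  # center -> remaining assignments in the current block

    def allocate(center):
        if not blocks.get(center):
            block = arms[:]        # one slot per arm per block
            rng.shuffle(block)
            blocks[center] = block
        return blocks[center].pop()

    return allocate

allocate = make_allocator(["control", "drug_A", "drug_B"])
assignments = [allocate("site_1") for _ in range(6)]
# Two complete blocks at site_1, so exact balance across the three arms:
assert sorted(assignments).count("control") == 2
```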
Delayed Start

(D'Agostino, 2009; Velengtas et al., 2012)

Structure:
  • One group receives active treatment and another group receives placebo during the first period of the trial.
  • Both groups receive active treatment during the second period of the trial.

Advantages:
  • Separates the disease-modifying effects of a treatment from its short-term beneficial effects on symptoms.
  • Addresses ethical concerns raised with respect to RCTs: more patients receive the active intervention than in a traditional trial, and all participants eventually receive the potentially beneficial intervention, while a control group is maintained in the initial phase.

Disadvantages:
  • Requires sufficient understanding of the study design and the clinical progression of the disease to define adequate durations for the two phases, and of the statistical methodology to address analytical considerations.
  • Only the first half of the study is double blind; the second half is open label, a limitation that may introduce bias through unblinding.
  • May encounter enrollment problems, since it must recruit patients willing to forgo symptomatic therapy for the first half of the study if they are randomized to the control arm.
  • Only patients with mild, early, and more slowly progressive disease may be eligible for this type of study.
  • Susceptible to high dropout rates and patient discontinuation in the first-phase placebo group, because these patients do not experience any treatment effect. Differences in baseline characteristics between patients who continue into the second phase and those who discontinue may introduce confounding and compromise results.
Adaptive Platform

(Quinlan et al., 2010; Saville and Berry, 2016)

Structure:
  • A clinical trial with a single master protocol in which multiple treatments are evaluated simultaneously.
  • Offers flexible features such as dropping treatments for futility, declaring one or more treatments superior, or adding new treatments to be tested during the course of the trial.

Advantages:
  • Provides the flexibility to redesign the trial at interim stages.
  • Enables faster, cheaper drug development through real-time learning: terminating the trial or individual treatment arms at the earliest possible time point, choosing the correct dose(s) for Phase III, and selecting the population that responds best to treatment.

Disadvantages:
  • Requires additional work and effort during planning, implementation, execution, and reporting.
  • Barriers to implementation include
    • Technical concerns
    • Perceptions of regulatory risk
    • Challenges related to change management
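The futility-dropping feature above can be illustrated with a deliberately simplified interim rule (a toy sketch only; real platform trials use prespecified, often Bayesian, decision rules, and the counts below are illustrative):

```python
# Sketch: a toy interim futility check for a platform-trial arm. The
# arm is dropped when even the upper bound of a crude confidence
# interval for its response-rate advantage over the shared control
# falls below zero, i.e., the optimistic case still shows no benefit.
import math

def drop_for_futility(arm_successes, arm_n, ctrl_successes, ctrl_n, z=1.96):
    p1, p0 = arm_successes / arm_n, ctrl_successes / ctrl_n
    se = math.sqrt(p1 * (1 - p1) / arm_n + p0 * (1 - p0) / ctrl_n)
    upper = (p1 - p0) + z * se
    return upper < 0

# Arm at 10/50 responders vs. control at 30/50: clearly futile.
print(drop_for_futility(10, 50, 30, 50))   # True
# Arm at 28/50 vs. control at 25/50: keep enrolling.
print(drop_for_futility(28, 50, 25, 50))   # False
```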
Single Arm with Comparisons to Historical Controls

(Evans, 2010)

Structure:
  • A sample of individuals is given the experimental therapy and followed over time.

Advantages:
  • May be desirable when the patient pool is limited.
  • Used to obtain preliminary (not confirmatory) evidence of efficacy.
  • Best used when the natural history of the disease is well understood, when placebo effects are minimal or nonexistent, and when a placebo control is not ethically desirable.
  • May be the only (or one of few) options for trials evaluating therapies for which placebos are not ethical and options for controlled trials are limited.

Disadvantages:
  • Cannot distinguish among the effect of the treatment, a placebo effect, and the natural history of the disease.
  • The response is difficult to interpret without a frame of reference for comparison.
Uncontrolled Case Series

(Ford, 2010; Kempen, 2011)

Structure:
  • A group or series of case reports involving patients who were given similar treatment. Reports of case series usually contain detailed information about the individual patients before and after an intervention, but there is no control group.
  • Should have clear definitions of the phenomena being studied, applied equally to all individuals in the series.
  • All observations should be reliable and reproducible (consider blinding).

Advantages:
  • Informs patients and physicians about natural history and prognostic factors.
  • Easy and inexpensive to conduct in hospital settings.
  • Helpful in hypothesis formation.
  • Appropriate settings for the case series design include:
    • Proof (or disproof) of concept for a new hypothesis
    • Reporting of sentinel events, such as toxicities of therapies, recognition of epidemics, and initial identification of previously unrecognized syndromes
    • Studying outcomes of rare diseases or new treatments (limited usefulness)

Disadvantages:
  • Cases may not be representative.
  • The outcome may be a chance finding, not characteristic of the disease.
  • Cannot easily examine disease etiology.
  • Exposure reflects the underlying population, not the outcome.
  • Begs the question "Compared to what?"

REFERENCES

Brown, C. A., and R. J. Lilford. 2006. The stepped wedge trial design: A systematic review. BMC Medical Research Methodology 6(1):54.

Campbell, M. K., D. R. Elbourne, and D. G. Altman. 2004. CONSORT statement: Extension to cluster randomised trials. BMJ 328(7441):702–708.

D’Agostino, R. B. 2009. The delayed-start study design. New England Journal of Medicine 361(13):1304–1306.

Donner, A., and N. Klar. 2004. Pitfalls of and controversies in cluster randomization trials. American Journal of Public Health 94(3):416–422.

Edwards, S. J. L., D. A. Braunholtz, R. J. Lilford, and A. J. Stevens. 1999. Ethical issues in the design and conduct of cluster randomised controlled trials. BMJ 318(7195):1407–1409.

Evans, S. R. 2010. Clinical trial structures. Journal of Experimental Stroke & Translational Medicine 3(1):8–18.

Ford, D. E. 2010. Study design case series and cross-sectional. Johns Hopkins Institute for Clinical and Translational Research. http://ictr.johnshopkins.edu/wp-content/uploads/import/1274-Ford%20Final%20Rev%20Cross%20Sectional%20July%2012%202011.pdf (accessed January 25, 2017).

Glasziou, P., I. Chalmers, M. Rawlins, and P. McCulloch. 2007. When are randomised trials unnecessary? Picking signal from noise. BMJ 334(7589):349–351.

Hughes, J. P. 2007. Stepped wedge design. In Wiley Encyclopedia of Clinical Trials: 1–8. Hoboken, NJ: John Wiley & Sons, Inc.

Jaki, T. 2015. Multi-arm clinical trials with treatment selection: What can be gained and at what price? Clinical Investigation 5(4):393–399.

Kempen, J. H. 2011. Appropriate use and reporting of uncontrolled case series in the medical literature. American Journal of Ophthalmology 151(1):7–10.e11.

Quinlan, J., B. Gaydos, J. Maca, and M. Krams. 2010. Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development. Clinical Trials 7(2):167–173.

Saville, B. R., and S. M. Berry. 2016. Efficiencies of platform clinical trials: A vision of the future. Clinical Trials 13(3):358–366.

Suresh, K. P. 2011. An overview of randomization techniques: An unbiased assessment of outcome in clinical research. Journal of Human Reproductive Sciences 4(1):8–11.

Velengtas, P., P. Mohr, and D. A. Messner. 2012. Making informed decisions: Assessing the strengths and weaknesses of study designs and analytic methods for comparative effectiveness research. Washington, DC: National Pharmaceutical Council. http://www.npcnow.org/publication/making-informed-decisions-assessing-strengths-and-weaknessesstudy-designs-and-analytic (accessed January 25, 2017).

Wason, J., D. Magirr, M. Law, and T. Jaki. 2016. Some recommendations for multi-arm multistage trials. Statistical Methods in Medical Research 25(2):716–727.
