Methodological Challenges in Biomedical HIV Prevention Trials (2008)

Chapter: Appendix D: Methods for Analyzing Adherence

Suggested Citation:"Appendix D: Methods for Analyzing Adherence." Institute of Medicine. 2008. Methodological Challenges in Biomedical HIV Prevention Trials. Washington, DC: The National Academies Press. doi: 10.17226/12056.


Appendix D

Methods for Analyzing Adherence

Adherence to product use and to recommendations for safe sexual behavior is an important outcome of any HIV prevention intervention, and it also affects the effectiveness of a new product. In Chapter 5, the committee explored the types of questions that analyses of adherence can answer. This section outlines methods for conducting these analyses, and their limitations, using two relatively simple examples: (1) comparing adherence patterns between two study arms, and (2) relating the effect of the intervention to adherence. Investigators can use similar approaches to compare behavior patterns among study arms or to analyze adherence and behavior in more sophisticated ways.

Comparing Adherence Patterns between Study Arms

In this example, assume first that study staff record a summary adherence measure Y for each individual at every clinic visit, and that visits are scheduled at regular intervals, such as monthly. Let the value of Y for individual i at visit t be denoted Yit. Our goal is to compare the longitudinal patterns of these measures between study arms, as a means of comparing the arms with respect to adherence.

The adherence measure obtained at each time point could be a count, such as the number of exposures to microbicide gel in the past week, or the number of days in the past week on which a participant took a PrEP dose. The adherence measure could also be an average, such as the percentage of coital acts during the past three days in which a participant used gel, or some other measure deemed relevant for the specific product. (For a discussion of summary measures used in randomized trials, see Vrijens and Goetghebeur, 1997.)

By modeling the expected adherence E(Yit) as a function of time on study t, and possibly some baseline characteristics, say xi, and a subject's randomized intervention, say ri, investigators can perform an intent-to-treat analysis of repeated adherence measures, and learn how these measures differ between subpopulations identified through baseline covariates. For example, in the absence of dropouts (Vrijens and Goetghebeur, 1997), analysts could compare daily measures of compliance (as measured by electronic drug monitoring) between study arms using a marginal model.

If some subjects never start their randomized intervention, or abandon it at some point during the study, investigators may first compare persistence between intervention arms, such as the percentage of "never takers," and the time from randomization to dropout among those who initiate the intervention. If all randomized participants actually start their assigned intervention, this amounts to applying standard time-to-event analysis methods to the time at which participants discontinue the intervention. These event times are right-censored at the time of HIV infection or at the end of the study (following staggered entry), and possibly, but not necessarily, at pregnancy.

Under the "strong null hypothesis" that the randomized interventions are exchangeable (meaning that subjects in both study arms are comparable in all respects), these censored times are equally distributed between arms. When persistence is an issue, investigators could begin their analysis of compliance patterns by modeling adherence measures from the actual start of product use until the observed time of discontinuation. Interpretation of the results would need to consider both analyses jointly.

Now consider the analysis of compliance, or product execution.
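The persistence analysis just described (time from initiation to discontinuation, right-censored at HIV infection, pregnancy, or study exit) can be sketched with a hand-rolled Kaplan-Meier estimator. The data layout below (one follow-up time and one event indicator per participant) is an illustrative assumption, and a real analysis would use a vetted survival package:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier curve for time to product discontinuation.

    times  -- follow-up time for each participant (e.g., months)
    events -- 1 if discontinuation was observed, 0 if right-censored
              (e.g., at HIV infection, pregnancy, or study end)
    Returns a list of (time, survival probability) pairs at event times.
    """
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # discontinuations at t
        n = sum(1 for tt, _ in data if tt >= t)    # at risk just before t
        if d > 0:
            surv *= 1.0 - d / n
            curve.append((t, surv))
        while i < len(data) and data[i][0] == t:   # skip past ties at t
            i += 1
    return curve

# Toy data: six participants; events[i] == 0 marks a censored time
times = [2, 3, 3, 5, 7, 8]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Under the strong null hypothesis, curves computed this way for each arm should look similar; a log-rank test can formalize the comparison.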
Under the strong null hypothesis (that is, that the arms are indistinguishable), pregnancies, failure times, censoring times, and patterns of adherence are distributed equally between the study arms. Censoring and failure times are important here, because they determine how long we can observe compliance, and because compliance patterns may change over time. Indeed, more frequent sexual activity typically increases the risk of HIV, and hence can decrease the amount of time during which we observe compliance. Similarly, pregnancy is an indicator of unprotected sex, which will also have an impact on "compliance," because subjects found to be pregnant are commonly taken off product. Whether compliance analyses censor pregnant women at this time, insert zero measures, or carry the last observation forward, the resulting patterns should remain similar between study arms under the strong null hypothesis. However, for both modeling and interpretation, it is usually preferable to censor compliance measures at the time the intervention is discontinued because of pregnancy.

Now consider a situation where participants' adherence over time is the same in the intervention and control arms, but where the intervention delays the time to HIV infection. Because good adherers in the intervention arm would tend to remain uninfected longer, a simple cross-sectional comparison of adherence rates at a specific time point would tend to show higher average adherence levels in the intervention arm. Indeed, the good adherers would tend to drop out sooner from the control arm owing to HIV infection, and thus would no longer be observed for adherence. To retain an unbiased evaluation of adherence levels in each study arm, we would need to adjust for "dropout" related to adherence and sexual behavior. This is possible when the study collects information on time-varying confounders: that is, on subject-specific covariates that vary over time and predict both adherence and the risk of HIV infection. When clinic visits occur irregularly, investigators must similarly account for that in their analysis, as a time trend in adherence and behavior may well exist. (Hernan et al. [2002] explain how to account for dropout in a population-averaged marginal analysis. For other approaches, see, for instance, Little, 1995; Robins et al., 1995; and Molenberghs et al., 2004.)

Analyzing the Effect of Adherence Patterns on HIV Incidence

Participants' adherence to an intervention and their sexual behavior may evolve over time in response to the perceived effects of the intervention. These two factors are naturally correlated with the risk of HIV infection, even in the absence of any direct biological effect of the intervention. In a blinded trial, participants' level of adherence should not depend on the randomized study arm. Thus a comparison between arms of times to HIV infection, adjusted for a subject-specific adherence level, would yield an unbiased measure of the effectiveness of the intervention.
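To see why such adjustment matters, the survivor-selection effect described above can be reproduced with a small simulation. Everything here is invented for illustration: true adherence is identical in both arms by construction, the hypothetical product cuts the infection hazard in proportion to adherence, and yet a cross-sectional comparison among those still uninfected at month 12 shows higher average adherence in the intervention arm.

```python
import random

random.seed(1)

def surviving_adherence(n, protective):
    """Adherence levels of participants still uninfected at month 12."""
    out = []
    for _ in range(n):
        a = random.random()                         # true adherence, U(0, 1)
        # Hypothetical hazards: the active product reduces the hazard in
        # proportion to adherence; the control product does nothing.
        hazard = 0.25 * (1.0 - 0.8 * a) if protective else 0.25
        if random.expovariate(hazard) > 12:         # uninfected at month 12
            out.append(a)
    return out

control = surviving_adherence(20000, protective=False)
active = surviving_adherence(20000, protective=True)

mean = lambda xs: sum(xs) / len(xs)
# Despite identical true adherence, the cross-sectional means differ:
# survivors in the active arm are disproportionately good adherers.
```

This is exactly the "dropout" related to adherence that a cross-sectional comparison cannot see, and that time-varying-confounder methods are designed to adjust for.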
Investigators can stratify the analysis on the summary adherence measure, or use it as a baseline covariate in a Cox regression analysis, to obtain adherence-adjusted hazard ratios. However, investigators need to use caution with summary measures that average adherence over different time periods for different subjects, as these could introduce confounding, especially when the intervention has a causal effect.

More generally, investigators may need to allow for an intervention effect that could change a person's adherence level. For instance, the COL-1492 study observed a higher incidence of genital lesions in the experimental arm. Such lesions could change future compliance in that arm. In that case, causal models of the effect of the intervention can enable randomization-based inference that allows for confounding between compliance and potential intervention-free response. (For examples of such methods, see Mark and Robins, 1993; White et al., 1999.)
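One such randomization-based approach, under an assumed structural accelerated failure time model T = T0 × exp(βA) with a constant adherence level A, back-transforms treatment-arm times and searches for the β that makes them match the control arm in distribution. The sketch below is illustrative only: the data are simulated, censoring is ignored, and a two-sample Kolmogorov-Smirnov distance stands in for a formal log-rank test.

```python
import math
import random

random.seed(7)
TRUE_BETA = 0.7

def draw_subject():
    """One subject's adherence A and intervention-free infection time T0.

    A and T0 are deliberately confounded in this toy mechanism:
    high adherers have a higher intervention-free hazard.
    """
    a = random.random()
    t0 = random.expovariate(0.2 + 0.4 * a)
    return a, t0

# Control arm: we observe T0 directly.
control = [draw_subject()[1] for _ in range(4000)]
# Treatment arm: we observe A and T = T0 * exp(beta * A).
treated = [(a, t0 * math.exp(TRUE_BETA * a))
           for a, t0 in (draw_subject() for _ in range(4000))]

def ks_stat(x, y):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    xs, ys = sorted(x), sorted(y)
    i = j = 0
    d = 0.0
    for v in sorted(xs + ys):
        while i < len(xs) and xs[i] <= v:
            i += 1
        while j < len(ys) and ys[j] <= v:
            j += 1
        d = max(d, abs(i / len(xs) - j / len(ys)))
    return d

# Back-transform treated times at each candidate beta and keep the beta
# whose back-transformed distribution best matches the control arm.
grid = [b / 20 for b in range(31)]          # beta from 0.00 to 1.50
best_beta = min(grid, key=lambda b: ks_stat(
    [t * math.exp(-b * a) for a, t in treated], control))
```

At the true β the back-transformed treated times have the same distribution as the control times, so the grid search recovers β despite the built-in confounding between A and T0; β = 0 (no transformation) corresponds to the null hypothesis of no causal effect.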

Randomization-based causal analysis is conceptually simple. Investigators start by proposing a causal model for how a participant's intervention history influences the residual time to HIV infection. For example, for each woman in the intervention arm, let T0 denote her unobserved potential time to HIV infection in the control arm. Consider further that her observed (constant) adherence level A to the experimental treatment is a driver of the causal effect of treatment.

Investigators could then postulate that her observed time to HIV infection in the treatment arm, T, relates to her potential treatment-free time T0 through a function of the adherence level A. For instance, T = T0 × exp(βA). This model states that at an adherence level of A = 100 percent in the treatment arm, the time to HIV infection is multiplied by a factor exp(β). Models following this principle are called "structural accelerated failure time models." They can also handle adherence levels that change over time (Greenland and Robins, 1994; Vandebosch et al., 2005). Similar models work on the hazard scale (Loeys et al., 2005).

The principle of randomization-based estimation works backward from this model, to allow for correlation between compliance and intervention-free response. In the intervention arm, investigators transform the observed time to HIV infection T, via the observed adherence A, to the latent intervention-free time T0 using a postulated parameter value β. The estimated value of β is the one for which the back-transformed times in the treatment arm coincide in distribution with the observed times T0 in the control arm. This equality in distribution can be tested, for instance using a log-rank test. Under the null hypothesis of no causal effect, β = 0 corresponds to no transformation of the observed times and yields equal distributions.

REFERENCES

Greenland, S., and J. Robins. 1994. Invited commentary: Ecologic studies: Biases, misconceptions, and counterexamples. American Journal of Epidemiology 139(8):747-760.

Hernan, M. A., B. A. Brumback, and J. M. Robins. 2002. Estimating the causal effect of zidovudine on CD4 count with a marginal structural model for repeated measures. Statistics in Medicine 21(12):1689-1709.

Little, R. 1995. Modeling the drop-out mechanism in repeated-measures studies. Journal of the American Statistical Association 90(431):1112-1121.

Loeys, T., E. Goetghebeur, and A. Vandebosch. 2005. Causal proportional hazards models and time-constant exposure in randomized clinical trials. Lifetime Data Analysis 11(4):435-449.

Mark, S. D., and J. M. Robins. 1993. A method for the analysis of randomized trials with compliance information: An application to the multiple risk factor intervention trial. Controlled Clinical Trials 14(2):79-97.

Molenberghs, G., H. Thijs, I. Jansen, C. Beunckens, M. G. Kenward, C. Mallinckrodt, and R. J. Carroll. 2004. Analyzing incomplete longitudinal clinical trial data. Biostatistics 5(3):445-464.

Robins, J., A. Rotnitzky, and L. Zhao. 1995. Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association 90(429):106-121.

Vandebosch, A., E. Goetghebeur, and L. Van Damme. 2005. Structural accelerated failure time models for the effects of observed exposures on repeated events in a clinical trial. Statistics in Medicine 24(7):1029-1046.

Vrijens, B., and E. Goetghebeur. 1997. Comparing compliance patterns between randomized treatments. Controlled Clinical Trials 18(3):187-203.

White, I. R., A. G. Babiker, S. Walker, and J. H. Darbyshire. 1999. Randomization-based methods for correcting for treatment changes: Examples from the Concorde trial. Statistics in Medicine 18(19):2617-2634.
