Appendix D

Methods for Analyzing Adherence

Adherence to product use and recommendations for safe sexual behavior form an important outcome of any HIV prevention intervention, and also impact the effectiveness of a new product. In Chapter 5, the committee explored the types of questions that analyses of adherence can answer. This section outlines methods for conducting these analyses, and their limitations, using two relatively simple examples: (1) comparing adherence patterns between two study arms, and (2) relating the effect of the intervention to adherence. Investigators can use similar approaches to compare behavior patterns among study arms or analyze adherence and behavior in more sophisticated ways.

COMPARING ADHERENCE PATTERNS BETWEEN STUDY ARMS

In this example, assume first that study staff record a summary adherence measure *Y* for each individual at every clinic visit, and that visits are scheduled at regular intervals, such as monthly. Let the value of *Y* for individual *i* at visit *t* be denoted *Y*_{it}. Our goal is to compare the longitudinal patterns of these measures between study arms, as a means of comparing the arms with respect to adherence.

The adherence measure obtained at each time point could be a count, such as the number of exposures to microbicide gel in the past week, or the number of days in the past week on which a participant took a PrEP dose. The adherence measure could also be an average, such as the percentage of coital acts during the past three days in which a participant used gel, or some other measure deemed relevant for the specific product. (For a discussion of summary measures used in randomized trials, see Vrijens and Goetghebeur, 1997.)

By modeling the expected adherence *E*(*Y*_{it}) as a function of time on study *t*, and possibly some baseline characteristics, say *x*_{i}, and a subject's randomized intervention, say *r*_{i}, investigators can perform an intent-to-treat analysis of repeated adherence measures, and learn how these measures differ between subpopulations identified through baseline covariates. For example, in the absence of dropouts (Vrijens and Goetghebeur, 1997), analysts could compare daily measures of compliance (as measured by electronic drug monitoring) between study arms using a marginal model.

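
As a sketch of such a marginal, intent-to-treat comparison, the fragment below simulates weekly pill-count adherence in two arms and compares subject-level mean adherence under working independence. The sample sizes, visit schedule, and per-day adherence probabilities are hypothetical, chosen only to illustrate the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial: 200 participants per arm, 6 monthly visits.
# Y_it = number of days (out of 7) a PrEP dose was taken in the past week.
n, visits = 200, 6
p_active, p_control = 0.78, 0.70      # assumed per-day adherence probabilities
y_active = rng.binomial(7, p_active, size=(n, visits))
y_control = rng.binomial(7, p_control, size=(n, visits))

# Working-independence marginal analysis: collapse each participant's
# repeated measures to a subject-level mean, then compare arms.  This
# respects within-subject correlation, which a naive t-test pooling all
# n * visits observations would ignore.
m_active = y_active.mean(axis=1)
m_control = y_control.mean(axis=1)

diff = m_active.mean() - m_control.mean()
se = np.sqrt(m_active.var(ddof=1) / n + m_control.var(ddof=1) / n)
z = diff / se
print(f"mean difference = {diff:.3f} days/week, z = {z:.2f}")
```

In practice one would fit a generalized estimating equation with time and baseline covariates rather than collapsing to subject means, but the subject-mean comparison is the simplest member of that family.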
If some subjects never start their randomized intervention, or abandon it at some point during the study, investigators may first compare persistence between intervention arms, such as the percentage of "never takers," and the time from randomization to dropout for those who initiate the intervention. If all randomized participants actually start their assigned intervention, this amounts to applying standard time-to-event analysis methods to the time at which participants discontinue the intervention. These event times are right-censored by the time of HIV infection, by the end of the study (following staggered entry), and possibly, but not necessarily, by pregnancy.

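
A minimal version of this persistence analysis, on simulated discontinuation times with administrative and event-driven censoring (all rates hypothetical), computes the two-sample log-rank statistic from first principles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical persistence data: time (months) from product start to
# discontinuation, right-censored at 12 months (administrative) or
# earlier by HIV infection or pregnancy.
def simulate_arm(n, rate):
    t_disc = rng.exponential(1 / rate, n)              # time to discontinuation
    t_cens = np.minimum(rng.exponential(18, n), 12.0)  # censoring time
    time = np.minimum(t_disc, t_cens)
    event = (t_disc <= t_cens).astype(int)             # 1 = discontinued
    return time, event

t1, e1 = simulate_arm(300, rate=0.10)   # intervention: slower dropout (assumed)
t0, e0 = simulate_arm(300, rate=0.20)   # control: faster dropout (assumed)

def logrank(ta, ea, tb, eb):
    """Two-sample log-rank z-statistic, computed directly."""
    time = np.concatenate([ta, tb])
    event = np.concatenate([ea, eb])
    group = np.concatenate([np.zeros(len(ta)), np.ones(len(tb))])
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n_tot = at_risk.sum()
        n_b = (at_risk & (group == 1)).sum()
        d_tot = ((time == t) & (event == 1)).sum()
        d_b = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d_b - d_tot * n_b / n_tot          # observed - expected
        if n_tot > 1:
            var += d_tot * (n_b / n_tot) * (1 - n_b / n_tot) \
                   * (n_tot - d_tot) / (n_tot - 1)
    return o_minus_e / np.sqrt(var)                     # ~ N(0,1) under the null

z = logrank(t1, e1, t0, e0)
print(f"log-rank z = {z:.2f}")
```

A positive z here reflects more discontinuations than expected in the control arm, consistent with the simulated hazards.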
Under the "strong null hypothesis" that the randomized interventions are exchangeable (which means that subjects in both study arms are comparable in all respects), these censored times are equally distributed between arms. When persistence is an issue, investigators could begin their analysis of compliance patterns by modeling adherence measures from the actual start of product use until the observed time of discontinuation. Interpretation of the results would need to consider both analyses jointly.

Now consider the analysis of compliance, or product execution. Under the strong null hypothesis—that is, that the arms are indistinguishable—pregnancies, failure times, censoring times, and patterns of adherence are distributed equally between the study arms. Censoring and failure time are important here, because they determine how long we can observe compliance, and because compliance patterns may change over time. Indeed, more frequent sexual activity typically increases the risk of HIV, and hence can decrease the amount of time during which we observe compliance.

Similarly, pregnancy is an indicator of unprotected sex, which will also have an impact on "compliance," because subjects found to be pregnant are commonly taken off product. Whether compliance analyses censor pregnant women at this time, or rather insert zero measures or carry the last observation forward, the resulting patterns should remain similar between study arms under the strong null hypothesis. However, for both modeling and interpretation, it is usually preferable to censor compliance measures at the time the intervention is discontinued because of pregnancy.

Now consider a situation where participants' adherence over time is the same in the intervention and control arms, but where the intervention delays the time to HIV infection. Because good adherers in the intervention arm would tend to remain uninfected longer, a simple cross-sectional comparison of adherence rates at a specific time point would tend to show higher average adherence levels in the intervention arm. Indeed, the good adherers would tend to drop out sooner from the control arm owing to HIV infection, and thus would no longer be observed for adherence.

To retain an unbiased evaluation of adherence levels in each study arm, we would need to adjust for "dropout" related to adherence and sexual behavior. This is possible when the study collects information on time-varying confounders: that is, on subject-specific covariates that vary over time and predict both adherence and the risk of HIV infection. When clinic visits occur irregularly, investigators must similarly account for that in their analysis, as a time trend in adherence and behavior may well exist. (Hernan et al. [2002] explain how to account for dropout in a population-averaged marginal analysis. For other approaches, see, for instance, Little, 1995; Robins et al., 1995; and Molenberghs et al., 2004.)

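
To illustrate the kind of adjustment such confounder data make possible, the toy simulation below compares a naive mean adherence among subjects still under observation with an inverse-probability-of-censoring-weighted mean. A single time-fixed binary covariate stands in for the time-varying confounders discussed above; the covariate, adherence levels, and observation probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy cohort: a binary covariate x (e.g., reported partner risk) predicts
# both adherence and the chance of remaining under observation, so it acts
# as the confounder driving informative "dropout".  All values hypothetical.
n = 5000
x = rng.integers(0, 2, n)                       # 1 = lower-risk subgroup
adherence = np.where(x == 1, 0.9, 0.5) + rng.normal(0, 0.05, n)
p_observed = np.where(x == 1, 0.9, 0.5)         # P(still in follow-up at visit t)
observed = rng.random(n) < p_observed

true_mean = adherence.mean()                    # full-cohort target
naive_mean = adherence[observed].mean()         # biased: high adherers over-represented

# Inverse-probability-of-censoring weighting: weight each observed subject
# by 1 / Phat(observed | x), with Phat estimated empirically within strata of x.
p_hat = np.array([observed[x == k].mean() for k in (0, 1)])
w = 1.0 / p_hat[x[observed]]
ipcw_mean = np.average(adherence[observed], weights=w)

print(f"true {true_mean:.3f}  naive {naive_mean:.3f}  IPCW {ipcw_mean:.3f}")
```

The naive estimate overstates adherence because the low-adherence subgroup is under-observed; reweighting restores the full-cohort mean. With genuinely time-varying confounders, the weights would be products of visit-specific probabilities, as in the marginal structural models of Hernan et al. (2002).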
ANALYZING THE EFFECT OF ADHERENCE PATTERNS ON HIV INCIDENCE

Participants' adherence to an intervention and their sexual behavior may evolve over time in response to the perceived effects of the intervention. These two factors are naturally correlated with the risk of HIV infection, even in the absence of any direct biological effect of the intervention.

In a blinded trial, participants' level of adherence should not depend on the randomized study arm. Thus a comparison between arms of times to HIV infection, adjusted for a subject-specific adherence level, would yield an unbiased measure of the effectiveness of the intervention. Investigators can stratify the analysis on the summary adherence measure, or use it as a baseline covariate in a Cox regression analysis, to obtain adherence-adjusted hazard ratios. However, investigators need to use caution with summary measures that average adherence over different time periods for different subjects, as these could introduce confounding, especially when the intervention has a causal effect.

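
The following sketch illustrates such a stratified, adherence-adjusted comparison on simulated data: infection rate ratios are computed within strata of a binary subject-level adherence summary. The hazards, stratum split, and effect size are hypothetical, and the simulation builds in the blinded-trial assumption that adherence does not depend on arm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical blinded trial: the product halves the infection hazard,
# but only among high adherers; adherence is arm-independent by design.
def simulate(n, arm):
    high = rng.random(n) < 0.5                # binary adherence summary
    base = 0.05                               # assumed monthly infection hazard
    hr = np.where(high & (arm == 1), 0.5, 1.0)
    t = rng.exponential(1 / (base * hr))
    time = np.minimum(t, 24.0)                # 24 months of follow-up
    event = (t <= 24.0).astype(int)
    return high, time, event

h1, t1, e1 = simulate(2000, arm=1)
h0, t0, e0 = simulate(2000, arm=0)

# Stratum-specific incidence rate ratios (events per unit person-time),
# intervention arm relative to control, within each adherence stratum.
irr = {}
for level, name in ((0, "low"), (1, "high")):
    rate1 = e1[h1 == level].sum() / t1[h1 == level].sum()
    rate0 = e0[h0 == level].sum() / t0[h0 == level].sum()
    irr[level] = rate1 / rate0
    print(f"{name}-adherence stratum: rate ratio = {irr[level]:.2f}")
```

The high-adherence stratum recovers the built-in protective effect while the low-adherence stratum shows none. Note the caution in the text: this interpretation relies on adherence being comparable between arms, which holds here by construction.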
More generally, investigators may need to allow for an intervention effect that could change a person's adherence level. For instance, the COL-1492 study observed a larger incidence of genital lesions in the experimental arm. Such lesions could change future compliance in that arm. In that case, causal models of the effect of the intervention can enable randomization-based inference that allows for confounding between compliance and potential intervention-free response. (For examples of such methods, see Mark and Robins, 1993; White et al., 1999.)

Randomization-based causal analysis is conceptually simple. Investigators start by proposing a causal model for how a participant's intervention history influences the residual time to HIV infection. For example, for each woman in the intervention arm, let *T*_{0} denote her unobserved potential time to HIV infection in the control arm. Consider further that her observed (constant) adherence level *A* to the experimental treatment is a driver of the causal effect of treatment.

Investigators could then postulate that her observed time to HIV infection in the treatment arm *T* relates to her potential treatment-free time *T*_{0} through a function of treatment level *A*. For instance,

*T* = *T*_{0} × exp(β*A*).

This model states that for an adherence level *A* = 100 percent in the treatment arm, the time to HIV infection is multiplied by a factor exp(β). Models following this principle are called "structural accelerated failure time models." They can also handle adherence levels that change over time (Greenland and Robins, 1994; Vandebosch et al., 2005). Similar models work on the hazard scale (Loeys et al., 2005).

The principle of randomization-based estimation works backward from this model, to allow for correlation between compliance and intervention-free response. In the intervention arm, investigators transform the observed time to HIV infection *T* via the observed adherence *A* to the latent intervention-free time *T*_{0} using a postulated parameter value β. The estimated value of β is found when the back-transformed times in the treatment arm coincide in distribution with the observed times *T*_{0} in the control arm. This equality in distribution can be tested, for instance using a log-rank test. Under the null hypothesis of no causal effect, β = 0 corresponds to no transformation of observed times and yields equal distributions.

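
This back-transformation scheme can be sketched in a few lines. The toy example below generates treatment-arm times from the structural model *T* = *T*_{0} × exp(β*A*), back-transforms them over a grid of candidate β, and takes the value whose transformed times best match the control arm in distribution. Since the simulated data are uncensored, a rank-sum test stands in for the log-rank test mentioned above; sample sizes and the true β are invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

# Toy g-estimation for the structural AFT model T = T0 * exp(beta * A).
# The control arm reveals T0 directly; in the treatment arm T0 is latent
# and is recovered by back-transforming T with a candidate beta.
n, beta_true = 1500, 0.8
t0_ctrl = rng.exponential(10.0, n)            # observed control-arm times

a = rng.uniform(0.2, 1.0, n)                  # adherence levels (assumed constant)
t0_latent = rng.exponential(10.0, n)          # unobserved treatment-free times
t_trt = t0_latent * np.exp(beta_true * a)     # observed treatment-arm times

# For each candidate beta, back-transform treatment-arm times and test
# equality in distribution against the control arm; the estimate is the
# beta at which the two samples are least distinguishable.
grid = np.linspace(0.0, 1.6, 81)
pvals = [mannwhitneyu(t_trt * np.exp(-b * a), t0_ctrl).pvalue for b in grid]
beta_hat = grid[int(np.argmax(pvals))]
print(f"estimated beta = {beta_hat:.2f} (true {beta_true})")
```

With censored times, each candidate β also requires re-censoring the transformed times before applying the log-rank test, which is where the method becomes more delicate in practice (Vandebosch et al., 2005).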
REFERENCES

Greenland, S., and J. Robins. 1994. Invited commentary: Ecologic studies: Biases, misconceptions, and counterexamples. American Journal of Epidemiology 139(8):747-760.

Hernan, M. A., B. A. Brumback, and J. M. Robins. 2002. Estimating the causal effect of zidovudine on CD4 count with a marginal structural model for repeated measures. Statistics in Medicine 21(12):1689-1709.

Little, R. 1995. Modeling the drop-out mechanism in repeated-measures studies. Journal of the American Statistical Association 90(431):1112-1121.

Loeys, T., E. Goetghebeur, and A. Vandebosch. 2005. Causal proportional hazards models and time-constant exposure in randomized clinical trials. Lifetime Data Analysis 11(4):435-449.

Mark, S. D., and J. M. Robins. 1993. A method for the analysis of randomized trials with compliance information: An application to the multiple risk factor intervention trial. Controlled Clinical Trials 14(2):79-97.

Molenberghs, G., H. Thijs, I. Jansen, C. Beunckens, M. G. Kenward, C. Mallinckrodt, and R. J. Carroll. 2004. Analyzing incomplete longitudinal clinical trial data. Biostatistics 5(3):445-464.

Robins, J., A. Rotnitzky, and L. Zhao. 1995. Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association 90(429):106-121.

Vandebosch, A., E. Goetghebeur, and L. Van Damme. 2005. Structural accelerated failure time models for the effects of observed exposures on repeated events in a clinical trial. Statistics in Medicine 24(7):1029-1046.

Vrijens, B., and E. Goetghebeur. 1997. Comparing compliance patterns between randomized treatments. Controlled Clinical Trials 18(3):187-203.

White, I. R., A. G. Babiker, S. Walker, and J. H. Darbyshire. 1999. Randomization-based methods for correcting for treatment changes: Examples from the Concorde trial. Statistics in Medicine 18(19):2617-2634.