
PART VI

Evaluating Results

Evaluating social science interventions for the prevention of HIV infection—that is, interventions that focus on levels of social organization beyond the individual—is fraught with difficulty, and much remains to be done in the development of basic methodologies.

The social science interventions that were presented in Section IV are theoretical models. The development of these models is an important part of basic research, but one cannot go directly from models to evaluation. The next step must be to translate the models into concrete operational forms that can be evaluated. This has already been done in some cases, such as the diffusion theory and leadership-focused models (Watters et al., 1990; Kelly et al., 1990). Only when concrete protocols for interventions have been specified can the evaluation of their impact be worked out in detail. Which models of evaluation to use for which social science interventions remains an open question.

CLASSICAL DESIGNS

It is possible to construct random-assignment evaluation designs for social science interventions, and there are some examples to use for guidance. Randomized controlled trials should be attempted, but it is important to avoid forming unrealistic expectations about what evaluation methodology can accomplish in these situations. Alternative evaluation methods are unlikely to meet the rigorous standards of evidence obtained through random assignment of individuals to treatment and control groups. Funders should not expect that evaluations of these types of interventions can reach the standards regarding consistency and lack of bias that are obtainable with randomized clinical trials.

Workshop participants discussed some possibilities for the use of classical experimental designs. First, social interventions on the level of the egocentric network can, in theory, be evaluated using experimental designs in which some egocentric networks are randomly assigned to one experimental condition and others to a different condition. Second, classical experimental designs can, in theory, be applied to the evaluation of social interventions focused on communities, neighborhoods, the patrons of sets of commercial establishments such as gay bars, or other such aggregations of large numbers of at-risk persons. To achieve adequate power, such experiments require multiple units of intervention, which may be economically problematic (Friedman and Wypijewska, 1995).
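To make the power concern concrete, the following sketch is a small Monte Carlo exercise in Python for a community-randomized design. All of the numbers (the assumed prevalence, the intervention effect, and the between-community variation) are hypothetical illustrations rather than figures from the workshop; the point is only that when the community is the unit of randomization and analysis, a realistic effect is easy to miss unless many communities are enrolled.

```python
# A minimal Monte Carlo sketch of statistical power in a community-randomized
# design; the unit of randomization and analysis is the community (e.g., the
# mean risk-behavior prevalence in each community). All parameter values here
# are hypothetical and for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n_communities_per_arm, effect=0.08, sd_between=0.08,
                    n_sims=2000, alpha=0.05):
    """Estimate the probability of detecting a true intervention effect."""
    rejections = 0
    for _ in range(n_sims):
        # Community-level outcomes for the control and intervention arms.
        control = rng.normal(0.30, sd_between, n_communities_per_arm)
        treated = rng.normal(0.30 - effect, sd_between, n_communities_per_arm)
        _, p_value = stats.ttest_ind(control, treated)
        rejections += p_value < alpha
    return rejections / n_sims

for n in (3, 6, 12, 24):
    print(f"{n:2d} communities per arm -> power ~ {simulated_power(n):.2f}")
```

With only a handful of communities per arm, even a genuine reduction in risk behavior is detected only rarely, which is why adequate power requires multiple units of intervention and why such designs can be economically problematic.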

Measurement of intervention efficacy would use either or both of two models (a brief illustrative sketch contrasting them follows the list):

  1. Prior to the intervention, recruit a representative sample of at-risk persons in each community and assess their HIV status; their norms, beliefs, and values; and their behaviors and networks. Perform the intervention on the entire community (without using the evaluation sample as a resource, since this would make it nonrepresentative for follow-up assessments of intervention effects). Reassess these same variables in follow-up surveys of a representative sample of research subjects.

  2. Use serial cross-sections of representative samples of at-risk persons in each community to assess changes in the above variables. This avoids design problems such as an interaction between the nature of the intervention and differential follow-up, and it also avoids the problem that the initial assessment may lead research participants to react differently from other at-risk persons.
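The sketch below contrasts the two measurement models on invented data: a single cohort recruited at baseline and reassessed (model 1) versus independent cross-sections drawn at each wave (model 2). The prevalence figures and transition probabilities are hypothetical; the sketch shows only that both designs estimate the same community-level change while differing in what can bias them.

```python
# A toy contrast of the two measurement models on invented data. Model 1
# follows one baseline sample over time; model 2 draws a fresh representative
# cross-section at each wave. The prevalences and individual transition
# probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def draw_sample(prevalence, n=400):
    """1 = sampled person reports the risk behavior, 0 = does not."""
    return rng.binomial(1, prevalence, n)

# Model 1: cohort design -- reassess the same individuals at follow-up.
# (For simplicity everyone is re-interviewed; differential attrition is
# precisely the bias this design must guard against.)
cohort_baseline = draw_sample(0.40)
keeps_behavior = rng.binomial(1, 0.70, cohort_baseline.size)   # 30% quit
starts_behavior = rng.binomial(1, 0.05, cohort_baseline.size)  # 5% take it up
cohort_follow_up = np.where(cohort_baseline == 1, keeps_behavior, starts_behavior)

# Model 2: serial cross-sections -- an independent sample at each wave, drawn
# from a community whose true prevalence has fallen from 0.40 to about 0.31.
cross_section_baseline = draw_sample(0.40)
cross_section_follow_up = draw_sample(0.31)

print("cohort estimate of change        :",
      round(cohort_follow_up.mean() - cohort_baseline.mean(), 3))
print("cross-section estimate of change :",
      round(cross_section_follow_up.mean() - cross_section_baseline.mean(), 3))
```

The cohort version is vulnerable to differential attrition and to the baseline interview itself changing behavior, whereas the serial cross-sections sacrifice information about individual-level change; this trade-off is why the two models can usefully be combined.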

Both models are greatly strengthened by including ethnographic assessment of social structure, interaction patterns, and networks, as well as more quantitative evaluation of changes in behaviors and infections. For example, the comparative efficacy of leadership-focused interventions and of social mobilization models in changing the norms and behaviors of gay men was evaluated by assigning communities randomly to one or the other intervention model and some to a no-intervention control (Kelly et al., 1992). The same method could be used with injection drug users (or other sets of at-risk persons) and social mobilization models. In such studies, if the number of communities is large enough, the randomization will control for preexisting differences in community and network structures, levels of risk behaviors, norms, seroprevalence levels, and the like. However, the randomization will not control for interactions between these community characteristics, the intervention, and historical events, or, in designs that follow an initial sample over time, for differential rates of follow-up attrition. Thus, it is necessary to record and control for data on other interventions that are set up in the community, for other relevant historical events (such as widespread flooding of a river into streets and homes in one set of communities being studied), and for differences in attrition rates or differential biases in which groups of research participants are followed up successfully.

To undertake such a large-scale multicommunity study, of course, can be extremely difficult. It can require large budgets to conduct interventions in many communities and to collect data on relevant outcome measures from representative samples of the persons at risk in each community. Furthermore, in some instances, those in positions of authority or private citizens acting on their own initiative might not accept the intervention assignments and might implement alternative interventions with their own resources (or disrupt the project). Thus, it may not be feasible to use random experimental designs to test promising interventions, and post-hoc adjustments using historical and ethnographic data may be needed to evaluate the experimental outcomes.

QUASI-EXPERIMENTAL DESIGNS

Quasi-experimental designs, including the analysis of fortuitously occurring “natural” experiments, may be more feasible for evaluating social interventions. They will be particularly appropriate for evaluating “spontaneous” action (such as the rise of community groups to take some action against the spread of HIV), which can only be studied as emergent phenomena. The results of such studies can affect whether later spontaneous actions will be encouraged, advised, or opposed. Furthermore, a well-conceived quasi-experimental design can sometimes adjust for many of the problems that are entailed when classical experimental techniques are not used. For example, the staged introduction of leadership-focused interventions into gay communities can be used to examine whether the observed reductions in risk behavior were effects of the intervention rather than of other causes.
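One way to see how a staged-introduction design supports that kind of inference is the hypothetical analysis sketched below in Python, using pandas and statsmodels. The communities, periods, and effect sizes are all invented; the sketch simply shows that when the intervention starts in different communities at different times, a mixed-effects model can separate the intervention effect from a citywide secular trend.

```python
# A hypothetical analysis sketch for a staged-introduction design: each
# community begins receiving the intervention in a different period, and a
# mixed-effects model separates the intervention effect from the secular
# trend. Communities, periods, and effect sizes are all invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
start_period = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}  # staggered starts
records = []
for community, start in start_period.items():
    community_level = rng.normal(0.0, 0.03)       # stable baseline differences
    for period in range(7):
        treated = int(period >= start)
        risk = (0.35 - 0.01 * period              # citywide secular trend
                - 0.06 * treated                  # hypothetical intervention effect
                + community_level
                + rng.normal(0.0, 0.02))          # measurement noise
        records.append({"community": community, "period": period,
                        "treated": treated, "risk": risk})
df = pd.DataFrame(records)

# Random intercept per community; "period" absorbs the secular trend, so the
# coefficient on "treated" reflects the change at each staged introduction.
model = smf.mixedlm("risk ~ treated + period", data=df, groups=df["community"])
print(model.fit().summary())
```

Because each community eventually serves as its own control, comparisons of this kind help distinguish intervention effects from general trends, although they remain vulnerable to events that happen to coincide with the staged introduction.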

Other intervention models also may be amenable to evaluation. It is possible to conduct a staged-introduction design to evaluate how changes in urban policies or policing practices affect HIV-risk behavior, networks, and infection rates. It may be hard to evaluate the impact of the independent development of drug users' unions or of gay AIDS action groups because they are not the result of interventions (although it may nonetheless be important to evaluate them in order to decide whether to provide them with financial support and/or legitimacy). In all of these cases, comparative urban studies techniques and the measurement of behaviors and networks as soon as possible after these events (or even before them, to construct a baseline) provide some tools for evaluation.

ALTERNATIVE STANDARDS AND METHODOLOGIES

Little work appears to have been done to develop alternative standards of evidence regarding the effectiveness of social science interventions. Nor has much work been done to link such alternative standards to specific evaluation strategies and methodologies. One attempt along this line has been made by Carol Weiss, who has advocated what she calls a “theory-driven evaluation.” This involves an exercise in which the paths from the beginning of the intervention to its outcomes are described in detail. The evaluation then measures progress at as many points along those paths as possible. Alternatives such as this may not be as readily accepted by other researchers or policymakers as more classical designs would be, but they are worth serious attention.

SPECIFIC RESEARCH NEEDS

Workshop participants discussed the need for greater evaluation of HIV/AIDS preventive interventions in a number of areas. Group-level and social-level outcomes have largely gone unstudied, and many important individual-level outcome measures have been ignored. The programs, their intended audiences, and their field implementation are often described so generally and briefly that it is difficult to grasp the essence of the campaigns, much less their effects or impact. Fewer than half of all HIV prevention media campaigns are evaluated at all, and many fewer meet any rigorous evaluation standards. Flora and colleagues believe that future campaigns need a much more solid research base, including knowledge regarding formative evaluation of messages, summative evaluation of campaign effects on individuals, groups, and communities, and cost and cost-effectiveness analysis (see Appendix C). Research on media campaigns from other fields, such as smoking cessation and prevention, may provide some guidance.

A possible research agenda related to the evaluation of social science interventions was also discussed by workshop participants. For example, research on the evaluation of sociometric social network interventions is a high priority on two levels: first, to see if changing network structures can reduce risk behavior; and second, to see if changing these structures can reduce the speed at which HIV spreads (even if risk behaviors remain constant) by changing the extent and patterns of risk linkages among persons at risk.
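To illustrate the second point, the sketch below simulates a simple susceptible-infected process in Python on two hypothetical risk networks that have the same number of people and a comparable number of risk linkages but different structures. All parameters are invented, and the model is far simpler than real transmission dynamics; it is meant only to show that structure can change the speed of spread even when the volume of risk behavior is held roughly constant.

```python
# A toy susceptible-infected simulation (not a realistic transmission model):
# the same per-contact transmission probability spreads at different speeds on
# two networks with a comparable number of risk linkages but different
# structures. All parameter values are invented.
import random
import networkx as nx

random.seed(0)

def si_epidemic(graph, p_transmit=0.05, steps=30, n_seeds=3):
    """Return the cumulative number infected after each time step."""
    infected = set(random.sample(list(graph.nodes), n_seeds))
    counts = [len(infected)]
    for _ in range(steps):
        new_cases = set()
        for node in infected:
            for neighbor in graph.neighbors(node):
                if neighbor not in infected and random.random() < p_transmit:
                    new_cases.add(neighbor)
        infected |= new_cases
        counts.append(len(infected))
    return counts

n = 500
# A clustered "small-world" risk network versus a randomly mixed network with
# the same number of people and the same number of linkages.
clustered = nx.connected_watts_strogatz_graph(n, 6, 0.05, seed=1)
random_mix = nx.gnm_random_graph(n, clustered.number_of_edges(), seed=1)

print("clustered network :", si_epidemic(clustered)[::10])
print("randomly mixed    :", si_epidemic(random_mix)[::10])
```

In a toy model of this kind the randomly mixed network typically spreads infection faster, because clustering in the small-world network wastes transmissions on contacts who are already infected; the substantive point is only that the pattern of linkages, not just their number, shapes the speed of spread.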

Another issue in need of study involves the validity of the measures used to assess outcomes and the links between short-term measures (proximal outcomes) and longer-term measures (distal outcomes). There is concern about the validity of self-reports of actual behaviors, such as sexual practices and drug use. If an intervention targets changing such behaviors, it is critical to be as clear as possible about the extent to which self-reports can indeed be relied upon.

There appears to be a growing consensus that the use of ethnographic methods in the design and evaluation of social science research could strengthen the research endeavor. Attempts to join large-scale quantitative evaluation methods with ethnographic methods should be encouraged, along with additional research on the advantages and disadvantages of such efforts.

Multiple evaluation criteria are required when preventive interventions are complex and genuinely interdisciplinary. For example, if an intervention draws on analyses from sociology, psychology, and biomedical technology, it may be important to determine which aspects of the intervention are effective. Alternatively, if the intervention fails or produces only minimal positive outcomes, it is important to know whether the problems lie in the analyses of social or psychological phenomena, in technological failure, or in the way in which these may interact.

Finally, the social epidemiology research needs addressed in Section I of this report directly affect the quality of evaluation outcomes. A national program of serial, cross-sectional measurement of risk behavior, seroprevalence levels, and network and normative characteristics of representative samples of major at-risk groups in a large number of cities could help evaluators determine whether interventions actually were delivered and whether they had the desired impact.
