
Summary

In early 1989, the Public Health Service requested that a panel be established to recommend strategies for evaluating three of the major AIDS prevention programs sponsored by the Centers for Disease Control (CDC). To that end, the Panel on the Evaluation of AIDS Interventions was formed and delivered the first of two reports, Evaluating AIDS Prevention Programs, in August 1989. In the initial report, the panel recommended randomized experiments as the primary strategy for evaluating program effectiveness. In addition, the panel proposed to discuss nonexperimental strategies in more depth in our next report. This we do now in Evaluating AIDS Prevention Programs, Expanded Edition. This second volume retains the original Chapters 1-5 and Appendices A-B from the first report and offers new material in Chapter 6 and Appendices C-F on nonexperimental methods and other evaluation issues.

BACKGROUND

CDC is the principal public health agency responsible for AIDS education and prevention. Each of the programs CDC selected for the panel's review was designed to intervene at a different level of society: (1) the multiphase national AIDS media campaign is directed at the general population of the United States as well as at more specific audiences; (2) projects of community-based organizations, primarily health education and risk reduction interventions, focus on small groups or specific populations; and (3) a network of counseling and testing projects is targeted to individual clients.

The first designated intervention, the national AIDS media campaign, was established in April 1987 by CDC's National AIDS Information and Education Program. Shortly afterward, the advertising and public relations firm of Ogilvy & Mather was selected by competitive bid to develop the multiphase campaign of public service announcements, the most visible component of the program.

The media campaign is intended not only to provide information about how HIV is spread and how that spread can be prevented, but also to generate social support for infected individuals. The campaign uses a variety of media in addition to the public service announcements to deliver information to the general public as well as to specific populations.

The second program, the AIDS prevention projects of community-based organizations (CBOs), channels health education and risk reduction information to selected populations that are at high risk for HIV infection. CBOs are uniquely positioned to reach hard-to-reach groups such as gay men or intravenous drug users. CDC's Center for Prevention Services funds these organizations through three mechanisms: cooperative agreements with state and local health agencies who oversee the community organizations; a cooperative agreement with the U.S. Conference of Mayors to provide "seed money" to local minority organizations; and a direct funding initiative for community-based organizations.

The third program offers individual clients (and occasionally their partners) testing for HIV infection in conjunction with pretest and posttest counseling. A typical delivery setting is a local health department or other health services facility (such as a clinic for sexually transmitted diseases) that receives indirect CDC funding channeled through the state health department. CDC also provides guidelines for the content and conduct of the counseling component of these programs. CDC began funding counseling and testing sites in fiscal 1985 through its Center for Prevention Services. Resources are distributed to over 5,000 sites in the form of cooperative agreements with 64 state and local agencies.

The panel's findings, conclusions, and general recommendations are presented in the next several sections of this summary, following the structure of the expanded report. We first consider general evaluation and outcome measurement issues (Chapters 1 and 2). We then present issues specific to the three types of AIDS prevention programs: the national media campaign (Chapter 3); CBO health education/risk reduction projects (Chapter 4); and the HIV testing and counseling program (Chapter 5). In this edition, the panel also reviews a number of nonexperimental strategies that researchers have used to evaluate other social programs (Chapter 6). The last section of the summary presents the panel's specific recommendations.

EVALUATION: NEEDS AND IMPLEMENTATION

Program evaluation in the context of AIDS prevention is as difficult as program implementation itself, and as necessary. To educate the public about what is essentially a health emergency, states and cities have rapidly implemented prevention programs, often without the necessary staff or plans for evaluating their effectiveness.

The panel believes that future AIDS prevention efforts should plan and allocate sufficient resources to obtain sound evidence about what works best to alter those behaviors that are known to transmit HIV.

The evaluation policy proposed in Chapter 1 hinges on a framework of three questions:

· What interventions are actually delivered?
· Do the interventions make a difference?
· What interventions or variations work better?

Each of these evaluative questions engenders further questions that bear on how credible evidence can be produced, the resources required to produce that evidence, and the methodological problems that affect evidence quality.

The evaluation of AIDS intervention programs is not an easy task: it will take time, and it will also require a long-term commitment of effort and resources. Because the environment in which these programs are implemented is constantly changing, and because prevention may require life-long behavioral changes, it is inappropriate to view program design, implementation, and evaluation as short-term or one-time events. Given the seriousness of the disease and the benefits associated with prevention, commitment of adequate resources for careful evaluations of the effectiveness of AIDS prevention programs should be viewed as a wise investment in the future.

The panel's aim is to be realistic, not discouraging, when it also notes that the difficulties and costs of program evaluation should not be underestimated. Many of the research strategies proposed in this report will require major investments of talent, time, and financial support, as well as a substantial amount of planning. Once these investments have been made, however, and a body of findings and practical experience has accumulated, subsequent evaluations should be easier and less costly to conduct. The panel notes, however, that because some of the major CDC intervention programs are likely to continue indefinitely, periodic reassessments are warranted to ensure that intervention components continue to be delivered as specified in the program protocol or standards.

The nature of the HIV/AIDS epidemic demands an unwavering commitment to prevention programs, and ongoing prevention programs require a similar commitment to their evaluation. A full complement of sustained evaluation research is needed. The panel endorses the appropriate use of formative, process, and outcome evaluation and urges that evaluation be a part of intervention program design.

Formative evaluation involves a small-scale effort to identify and resolve intervention and evaluation issues systematically before a program is widely implemented. This type of evaluation might be used to provide tentative answers to any of the three evaluation questions, but in this report it is formally recommended only for the media campaign, for which "What works better?" is the salient question for formative evaluation. For example, before implementing a full-scale media campaign of public service announcements, researchers can randomly assign participants to view different preliminary advertising layouts, and self-reported knowledge, attitudes, and behavioral intentions can then be compared for the different versions. Even when done on a small scale, however, formative evaluation requires financial resources and trained staff.

Process evaluation involves finding answers to the question "What services are actually delivered?" and it is appropriate for all three of the major AIDS intervention programs. Process evaluation generally requires few additional program resources. For example, evaluations of who receives HIV testing and counseling could be conducted by gathering and systematically reporting information on the clientele served by the testing sites. The costs associated with such a strategy are not large relative to the costs of the program.

A major shortcoming of process evaluation, however, is that it cannot demonstrate whether programs are effective. Process evaluation can yield important data on the number of individuals reached by a message or program, and it can document client flow or illuminate program accessibility. What it cannot do is provide information on whether the program changed the knowledge, beliefs, intentions, or behaviors of the individuals it served. Process evaluation, therefore, is necessary but not sufficient to determine project results.

Outcome evaluation assesses the effectiveness of an intervention program and can be used to answer the questions "Do the interventions make a difference?" and "What works better?" The panel endorses randomized field experiments, when feasible and appropriate, to test the effects of the media campaign and the health education/risk reduction projects of CBOs and thus answer the question, "Do the interventions make a difference?" When properly executed, randomization ensures that the observed differences in outcomes between treatment and control groups do not arise from biased assignment of persons to these groups. Because randomization leads to unbiased estimates, it thereby permits a fair assessment of the absolute size as well as the comparative level of treatment effects and provides a statistically legitimate statement about the level of confidence that may be placed in that assessment.
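To make the preceding point concrete, the short simulation below mimics the comparison the panel describes: participants are randomly assigned to a treatment or a control condition, and the difference in the proportion adopting a protective behavior is estimated along with a confidence interval. This is a minimal sketch, not any actual CDC trial: the sample size and adoption rates are hypothetical, and Python with numpy and scipy is assumed.

```python
# Minimal sketch of why randomization permits a fair, unbiased
# comparison. All numbers are hypothetical illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000  # participants per arm (hypothetical)

# Random assignment balances the two arms, on average, on every
# characteristic (measured or not) that could affect the outcome.
# Suppose 30% of controls and 40% of treated participants adopt a
# protective behavior after the intervention.
control = rng.binomial(1, 0.30, n)
treated = rng.binomial(1, 0.40, n)

p_c, p_t = control.mean(), treated.mean()
diff = p_t - p_c

# 95% confidence interval for the difference in proportions.
se = np.sqrt(p_t * (1 - p_t) / n + p_c * (1 - p_c) / n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Significance test of the observed difference.
_, p_value = stats.ttest_ind(treated, control)
print(f"estimated effect: {diff:.3f}  95% CI: ({lo:.3f}, {hi:.3f})  p = {p_value:.4f}")
```

Because assignment is random, the estimate recovers the built-in ten-point difference on average; when participants choose their own condition, no comparable guarantee exists.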

The value and benefits of randomized experiments must be weighed against their drawbacks. They can be difficult to deploy, both practically and politically, and they require significant investments of time, expertise, and financial support, as well as cooperation from and coordination among projects, federal agencies, and research teams. The panel estimates, for example, that an adequate evaluation of the impact of some set of CBO projects would take at least 3 years to complete and that a formative evaluation using a randomized experiment to test effects of different media campaigns would cost at least $200,000.

Although we are persuaded that the randomized controlled experiment should form the backbone of CDC's strategy for outcome evaluation, we also recognize the value of nonexperimental approaches when it is not possible to conduct a randomized trial. Alternative approaches should be investigated, for example, in the case of testing and counseling projects, for which we cannot recommend randomized experiments with no-treatment control groups because it is neither ethical nor possible to direct individuals to a group that is denied testing.

To assess the relative effectiveness of different program options in order to determine "what works better," the panel recommends using randomized experiments with alternative-treatment controls for each of the three major AIDS intervention programs. Because such experiments test alternative or enhanced treatments rather than using a no-treatment control group, they are often more acceptable to participants. It should be noted, however, that such randomized experiments will require investments of expertise, financial support, and time that will be at least as large as those required by experiments that have no-treatment control groups.

Oversight and monitoring of all phases of evaluation are necessary to provide quality control and to safeguard the integrity of the research. Evaluation is often a sensitive issue for both project and evaluation staff because of the possibility that it will show that projects are not as effective as they believed. The panel believes that independent oversight of an evaluation can help curb the pressures to show artificially positive effects, and it also can provide additional expertise for solving difficult technical issues.

OUTCOMES

The panel's final conclusion concerns program objectives and outcome measures. Carefully defined program goals and objectives are needed to select appropriate outcome measures to test the effectiveness of a program.

Evaluations of past intervention efforts have been hampered by poorly defined program objectives, which do not reflect desired proximate outcomes. Therefore the panel recommends that all intervention programs, in addition to the general goal of reducing HIV transmission, should have explicit objectives framed in terms of measurable biological, behavioral, or psychological outcomes.

In Chapter 2, the panel concludes that behavioral measures should be the primary outcome variables for most AIDS intervention programs. The panel considered the usefulness of biological indicators, including HIV incidence data, rates of sexually transmitted diseases, and fertility rates (for HIV-positive women). For most interventions, the panel concludes that these biological measures would be of secondary importance. The proximate outcome most relevant to reducing HIV transmission in most situations will be the adoption and maintenance of behaviors that protect uninfected individuals from contracting HIV and protect infected individuals from transmitting HIV. Appropriate changes in "risky" behavior, other things being equal, will reduce HIV transmission in populations in which the virus is heavily seeded, and they will protect populations in which HIV is not yet well established. Thus, accurate measurements of these behaviors will often be the most relevant indicators of the success of a program in retarding HIV transmission.

The panel also considered using direct measurements of the incidence of new HIV infections as a primary outcome variable. The panel concludes that, although they may be appropriate in some circumstances [1], HIV incidence data would not be a sensitive indicator for populations in which HIV infection is rare (e.g., heterosexual adolescents in most parts of the country), and they would provide no information about relevant outcomes in seropositive persons. Furthermore, HIV incidence data are not only difficult to obtain, but they are also difficult to interpret. HIV incidence rates require extremely large samples and protracted testing to determine a program's effectiveness. Moreover, the rates can reflect other conditions unrelated to the effects of the program, such as the absence or saturation of the infection in a given locale.

[1] Newborn HIV incidence rates, for example, are discussed in Chapter 6 as potential indicators of project effects.
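The panel's sample-size point can be illustrated with the standard two-proportion formula for comparing incidence between a program population and a comparison population. The figures below are hypothetical, not estimates for any real population; they simply show how quickly the required enrollment grows as infection becomes rarer.

```python
# Rough sample-size sketch: participants needed per arm to detect a
# halving of annual HIV incidence (two-sided alpha = 0.05, 80% power).
# Incidence figures are hypothetical.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Standard two-proportion sample-size formula."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

print(round(n_per_arm(0.010, 0.005)))   # ~4,700 per arm at 1% incidence
print(round(n_per_arm(0.001, 0.0005)))  # ~47,000 per arm at 0.1% incidence
```

Even before allowing for attrition over a multi-year follow-up, detecting an effect on seroconversion in a low-incidence population requires enrollment on a scale few prevention projects could support.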

THE MEDIA CAMPAIGN

In Chapter 3, the panel's discussion of evaluating the media campaign contains two elements that are unique to this program. Specifically, the panel adds a question, "Can the campaign make a difference?", to evaluate campaign efficacy (i.e., the campaign's likely effects if it were implemented optimally), and it also discusses the use of formative evaluation during program development to determine the effectiveness of a campaign message prior to large-scale dissemination.

The panel strongly believes that the efficacy of a proposed media campaign or campaign phase should be carefully evaluated in a number of randomly selected test market areas and that the results from these areas should be compared with the results from a number of other randomly selected test areas in which the campaign is not presented. It is often impossible to predict the effects of a given strategy. In order for future efforts to build on past knowledge, negative outcomes are potentially as important as positive ones.

Process evaluation of the multiphase national AIDS media campaign and its program of public service announcements (PSAs) requires knowledge of whether the campaign was aired, how often it was shown, and who saw or heard it. In its review of the process evaluation of media programs, the panel finds the data to be less than complete and urges that program presentation, including PSAs, be measured more conscientiously. In particular, the committee recommends that more attention be given to the following data collection efforts: population surveys to gauge the public's general awareness of the campaign; telephone surveys directed toward particular market areas after PSA broadcasts; and coincidental surveys following PSA presentation to help chart the extent of viewing and message recall. The panel also recommends augmentation of the data collection capabilities of the national AIDS hotline.

To assess the relative effects of different media campaigns or different program components, the panel recommends randomized experiments of alternative approaches. The panel recommends that experimental trials of alternative messages in the media campaign be undertaken during the development of the campaign rather than after its full implementation. To determine whether a campaign is effective, the panel presents a number of approaches that use randomized and quasi-experimental effectiveness trials in various designs (e.g., lagged implementation, switching replications).

COMMUNITY-BASED ORGANIZATIONS

CBO projects vary considerably, reflecting the diversity of the organizations through which they are delivered and the communities they serve. The panel believes it is important to recognize and describe this diversity, and in Chapter 4 we suggest case studies of selected projects and an administrative reporting system for the CBOs.

Administrative reports can document the scope of CBOs' efforts, and case studies can provide detailed, in-depth descriptions as well as a context in which to explore intervention and evaluation questions.

The panel urges the use of randomized experiments to test the effects of the different CBO projects and to compare the relative effectiveness of different approaches. For new CBO projects, the panel believes that individuals seeking services might be assigned either to participate in the CBO project or to be part of a control condition in which they avail themselves of other options in the community. After experience is gained with such experiments, the panel proposes that a subset of capably staffed CBOs participate in comparative evaluations of alternative treatments in which the effects of their projects will be estimated.

The selection of CBO projects for outcome evaluation should be based on the organizations' capacity and willingness to engage in randomized tests. Both willingness and capacity can be enhanced by CDC's investment in a contractual process that encourages CBOs to collaborate with experts in randomized experimentation (and vice versa), and by commitment to an oversight mechanism for research administration that goes well beyond the usual advisory committee duties. Furthermore, the panel suggests that CDC invest its resources in carefully evaluating a selected sample of projects rather than superficially evaluating all of them. This fewer-but-better approach is justified not only by financial limitations but also on practical grounds: despite their willingness, many projects will not have the case flow, resources, or capacity to participate in a long-term program of experiments.

The panel believes that nonexperimental before-and-after evaluation designs can be useful for looking at a project's proficiency in delivering services to its participants but that they are weak designs for measuring program effectiveness. The inference of cause and effect from such designs is highly problematic because competing explanations for across-time changes in attitudes and behaviors cannot be ruled out.

HIV TESTING AND COUNSELING

The panel concludes in Chapter 5 that it is neither practical nor ethical to conduct experiments with no-treatment control groups to determine the effects of HIV testing and counseling projects. The assignment of individuals to a control group that does not allow them to learn if they are infected or does not offer counseling has serious consequences for personal planning and medical management of HIV disease and AIDS.

While no-treatment control groups are inappropriate to determine if counseling and testing makes a difference, there remains a need to understand what is being delivered in the counseling and testing projects and to determine if there are better strategies for facilitating behavioral change. Determining what services are provided includes how well they are provided, whether they are accessible and attractive, and whether they satisfy a priori standards of quality. The panel recommends that data be gathered from multiple sources (including testing sites, clients, independent observers, and groups at increased risk for HIV infection) to evaluate five aspects of service delivery: the adequacy of the testing and counseling protocol, the adequacy of the counseling that is actually provided, the proportion of clients who complete the full protocol, the accessibility of services, and the barriers faced by potential clients in seeking and completing a testing and counseling protocol.

The panel recommends using randomized experiments with alternative-treatment controls to determine if some forms of counseling and testing work better than the standard approach. Alternative treatments can test the effects of different settings and different counseling protocols and services on such outcomes as clients' willingness to return to a site for test results, knowledge of risks, reduction of risk-associated behavior, and the minimizing of negative side effects (e.g., psychological distress) of HIV testing.

RANDOMIZED AND OBSERVATIONAL APPROACHES TO EVALUATION

In Chapter 6, as in earlier chapters, the panel recommends that randomized controlled experiments be used in outcome evaluations of a small number of important and carefully selected AIDS prevention programs. It is the panel's judgment that well-executed randomized experiments provide the most certainty concerning the effects of these programs. This is so because random assignment of individuals or groups to intervention or control conditions creates subgroups that are not expected (on average) to differ on factors that affect the intervention's outcomes.

The panel devotes one section of this chapter to an in-depth examination of randomized experiments and their pitfalls. Because we believe that randomized trials have the best chance of avoiding selection bias, and thus provide the most trustworthy estimates of effects, we recommend that they constitute the backbone of an evaluation strategy for new AIDS prevention services. The panel notes, however, that such experiments can be compromised, and we recommend studies of ways to reduce such threats to executing experiments as attrition and noncompliance. The panel also specifies a number of conditions under which randomization may not be feasible or appropriate.
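As a hypothetical illustration of the attrition threat just noted, the sketch below builds a randomized trial with a known effect and then lets the highest-risk treated participants drop out before follow-up; the surviving comparison overstates the program's benefit. All quantities are invented for the example.

```python
# Hypothetical sketch: differential attrition biasing a randomized trial.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Latent "riskiness" is balanced across arms by randomization.
risk_c = rng.normal(0, 1, n)
risk_t = rng.normal(0, 1, n)

def p_risky(risk, treated):
    """Probability of risky behavior at follow-up; the intervention
    lowers it by 10 percentage points."""
    base = 0.40 + 0.15 * np.tanh(risk)
    return np.clip(base - (0.10 if treated else 0.0), 0.0, 1.0)

y_c = rng.binomial(1, p_risky(risk_c, False))
y_t = rng.binomial(1, p_risky(risk_t, True))
print("full follow-up estimate:", round(y_t.mean() - y_c.mean(), 3))  # ~ -0.10

# Suppose the riskiest treated participants drop out disproportionately,
# while control-group attrition is unrelated to risk.
keep_t = rng.random(n) > np.where(risk_t > 1.0, 0.60, 0.10)
keep_c = rng.random(n) > 0.10
biased = y_t[keep_t].mean() - y_c[keep_c].mean()
print("after differential attrition:", round(biased, 3))  # exaggerated benefit
```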

In situations where randomization is not appropriate or feasible, the panel invites serious consideration of nonrandomized studies to evaluate AIDS interventions. The panel categorizes nonexperimental studies according to the two general approaches they take to control for selection bias and yield fair comparisons. The first approach is the design of comparability through the creation of nonrandomized comparison groups, i.e., through quasi-experiments, natural experiments, and matching. The second approach involves post hoc statistical adjustments for selection bias through model-based data analyses, i.e., through analysis of covariance, structural equation modeling, and selection modeling.

An example of a priori design is the quasi-experiment. In this type of evaluation, some individuals are assigned to receive an intervention, and a comparison group is devised of individuals who will not be assigned to receive it, for reasons theoretically unrelated to the outcome variable. Two quasi-experimental designs that the panel believes are promising are the interrupted time series and the regression displacement/regression discontinuity designs. The panel discusses some data sources that can be tapped for evaluating future activities with quasi-experiments (the CDC/NIH neonatal screening survey and the National Health Interview Survey).

A design variation is the natural experiment, a situation in which investigators attempt to assess the effects of an exogenously introduced intervention by comparing treated and untreated groups. The assumption of natural experiments is that the comparison group is identical to the treatment group except for the lack of the intervention. Weakening that assumption, however, is the fact that the investigator typically has no control over selecting and implementing these treatments. In cases in which the investigator can foresee a natural experiment in the making (e.g., AIDS-related legislation that will affect some but not all locales), pre-test measures can be collected, which will enhance the credibility of an evaluation of project effects.

The third kind of a priori method discussed is matching, which involves selecting a comparison that "looks like" a project participant with respect to variables that are believed to affect project participation (such as age). Although matching can control known sources of bias, the panel believes that the resulting comparisons may often be misleading because unknown and uncontrolled confounding variables can influence the outcome.
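A minimal sketch of the matching idea follows: each project participant is paired with the closest unused comparison record on a single covariate, age, which is the panel's own example. The data and variable names are hypothetical, and, as the paragraph above cautions, the match controls only the variables used to build it; unknown confounders remain.

```python
# Hypothetical nearest-neighbor matching on one covariate (age).
import numpy as np

rng = np.random.default_rng(2)

age_participants = rng.integers(18, 50, 200)   # received the intervention
age_pool = rng.integers(18, 65, 2000)          # candidate comparison records
outcome_pool = rng.binomial(1, 0.35, 2000)     # their (invented) outcomes

# For each participant, take the unused comparison closest in age.
used = np.zeros(len(age_pool), dtype=bool)
matched = []
for age in age_participants:
    dist = np.abs(age_pool - age).astype(float)
    dist[used] = np.inf          # match without replacement
    j = int(np.argmin(dist))
    used[j] = True
    matched.append(j)

print("matched comparison outcome rate:", outcome_pool[matched].mean())
```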

The credibility of a posteriori data analysis (e.g., analysis of covariance and modeling approaches) to evaluate AIDS prevention programs rests on the credibility of its underlying theory and the quality of data available to use with these procedures. The panel is concerned about efforts to establish comparability using statistical adjustment and modeling procedures in the AIDS arena because they may fail (without detection) because of either flaws in data quality or errors in the assumptions that are made about the relationship of variables that affect selection and outcome. First, although it may seem obvious, the panel wishes to emphasize that the knowledge base is woefully inadequate about the behavioral and other characteristics that are important in changing behaviors that transmit HIV. The absence of a wealth of tested theory or empirical evidence shrinks the basis for the assumptions required by these models.

Although the panel is not optimistic about our present ability to use structural equation or selection models and data from nonrandomized studies as the primary strategy for evaluating the effectiveness of AIDS prevention programs, we do believe that such models will have a role to play, and we suspect that this role may grow in the future. In particular, the panel believes that much might be gained by the judicious use of such models as an adjunct to randomized experiments. Structural equation modeling might be used, for example, to improve our understanding of the individual and contextual factors that mediate between a treatment and an outcome. Furthermore, the panel would observe that as experience accrues in situations where modeling is done in tandem with experiments, we anticipate the development of theory and data that may allow modeling approaches to substitute for some experiments in the future.
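The failure mode the panel worries about can be made concrete with a small simulation, offered only as an illustration: a regression that adjusts for an observed covariate still returns a badly biased effect estimate when an unmeasured variable (here called motivation) drives both program participation and the outcome, and nothing in the fitted model signals the problem. The data are simulated and statsmodels is assumed available; the true treatment effect is zero by construction.

```python
# Simulated illustration: covariate adjustment fails silently when the
# confounder is unmeasured. True treatment effect is zero by construction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000

age = rng.normal(30, 8, n)
motivation = rng.normal(0, 1, n)          # never observed by the analyst
# Self-selection: motivated people seek out the program.
treated = (motivation + rng.normal(0, 1, n) > 0).astype(int)
# Outcome is driven by motivation (and a little by age), not treatment.
outcome = 0.5 * motivation + 0.01 * age + rng.normal(0, 1, n)

df = pd.DataFrame({"outcome": outcome, "treated": treated, "age": age})
fit = smf.ols("outcome ~ treated + age", data=df).fit()
# Adjusting for age does not rescue the comparison: the coefficient on
# `treated` remains far from the true value of zero.
print(fit.params["treated"])
```

In a randomized experiment the same regression would do no harm, which is one reason the panel sees these models mainly as adjuncts to experiments.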

Finally, the panel notes that trustworthy evaluation requires a reservoir of evidence, whether it comes from randomized or nonrandomized studies. A difficulty in comparing data drawn from several studies, however, arises when studies use different definitions of target audiences, different specifications of causal variables, different outcome measures, different wordings in survey instruments, and so on. Not only do these differences make it hard to compare studies and projects, they make results difficult to generalize. Thus, we recommend that subsets of common data elements be repeated across studies as a way to ensure comparability and improve data reliability.

SPECIFIC RECOMMENDATIONS

The formal recommendations of the Panel on the Evaluation of AIDS Interventions are listed below. The first group of recommendations covers all of the separate programs, with one exception: recommendation 2 does not apply to the counseling and testing program. Additional recommendations that are specific to each program are listed separately.

All AIDS Intervention Programs

1. To improve interventions that are already broadly implemented, the panel recommends the use of randomized field experiments of alternative or enhanced interventions.

2. Before a new intervention is broadly implemented, the panel recommends that it be pilot tested in a randomized field experiment. (This recommendation applies to the media campaign and CBO projects but not to the HIV testing and counseling program.)

3. The panel recommends that any intensive evaluation of an intervention be conducted on a subset of projects selected according to explicit criteria. These criteria should include the replicability of the project, the feasibility of evaluation, and the project's potential effectiveness for prevention of HIV transmission.

4. The panel recommends that evaluation be conducted and replicated across major types of subgroups, programs, and settings. Attention should be paid to geographic areas with low and high AIDS prevalence, as well as to subpopulations at low and high risk for AIDS.

5. When evaluation is to be conducted by a number of different evaluation teams, the panel recommends establishing an independent scientific committee to oversee project selection and research efforts, corroborate the impartiality and validity of results, conduct cross-site analyses, and prepare reports on the progress of the evaluations.

6. The panel recommends that CDC recruit and retain behavioral, social, and statistical scientists trained in evaluation methodology to facilitate the implementation of the evaluation research recommended in this report.

7. In addition to the overarching goal of eliminating HIV transmission, the panel recommends that explicit objectives be written for each of the major intervention programs and that these objectives be framed as measurable biological, psychological, and behavioral outcomes.

8. The panel recommends that all evaluation protocols provide for the assessment of potential harmful effects, as well as the assessment of desired effects.

9. The panel recommends that once goals are met, projects be reevaluated periodically to monitor their continued effectiveness.

10. The panel recommends that the Office of the Assistant Secretary for Health allocate sufficient funds in the budget of each major AIDS prevention program, including future wide-scale programs, to implement the evaluation strategies recommended herein.

11. The panel recommends that studies be conducted on ways to systematically increase participant involvement in projects and to reduce attrition through outreach to project dropouts. All trials should assess levels of participation and variability in the strength of the treatment provided.

12. The panel recommends that new or improved AIDS prevention services be implemented as part of coordinated collaborative research and demonstration grants requiring controlled randomized trials of the effects of the services.

13. The panel recommends that the National Science Foundation sponsor research into the empirical accuracy of estimates of program effects derived from selection model methods.

14. The panel recommends that the Public Health Service and other agencies that sponsor the evaluation of AIDS prevention research require the collection of selected subsets of common data elements across evaluation studies to ensure comparability across sites and to establish and improve data validity and reliability.

National AIDS Media Campaign

15. The panel recommends the expanded use of formative evaluation or developmental research in designing media projects.

16. The panel recommends systematic comparative tests of paid advertising versus public service announcement campaigns.

17. The panel recommends that items be added to the National Health Interview Survey to evaluate exposure to, recall of, responses to, and changes resulting from new phases of the media campaign.

18. The panel recommends that CDC increase the usefulness of hotline data for media campaign assessments by collecting evaluation-related data such as the caller's geographic location, selected caller characteristics, issue(s) of concern, and counselor responses.

19. The panel recommends that CDC initiate changes in its time schedules for the dissemination of public service announcements to facilitate the evaluation of the media campaign. To enable the staggered implementation of television broadcasts, changes are needed in (1) the distribution schedule of public service announcements within the National AIDS Information and Education Program's consortium of media distributors and (2) the period of time between Public Health Service approval and release of new phases of the campaign.

20. The panel recommends that CDC initiate changes in its data collection and data sharing activities to facilitate the evaluation of the media campaign. To generate needed data, changes are needed in (1) the period of time for internal approval of data items for the National Health Interview Survey, and (2) expeditious data sharing between the National Center for Health Statistics and other divisions of CDC.

Community-Based Organizations

21. The panel recommends that a simple standardized reporting system for health education/risk reduction projects be developed and used to address the question of what activities are planned and under way.

22. The panel also recommends the expanded use of case studies.

Testing and Counseling

23. The panel recommends that data be gathered from multiple sources (including testing sites, clients, groups at increased risk of HIV infection, and independent observers) to evaluate five aspects of service delivery: the adequacy of the counseling and testing protocol, the adequacy of the counseling that is actually provided, the proportion of clients that complete the full protocol, the accessibility of services, and the nature of the barriers, if any, to clients seeking and completing counseling and testing.

24. The panel recommends that evaluations of "What works better?" focus on the comparative effectiveness of testing and counseling services that (1) are delivered in different settings, (2) have different content, duration, and intensity, and (3) are accompanied by different types of supportive services.

25. The panel recommends that the National Health Interview Survey (NHIS) be periodically augmented with several questions about accessibility and barriers to HIV testing and counseling services.
