
Evaluating AIDS Prevention Programs: Expanded Edition (1991)

Chapter: 4 Evaluating Health Education and Risk Reduction Projects

Suggested Citation:"4 Evaluating Health Education and Risk Reduction Projects." National Research Council. 1991. Evaluating AIDS Prevention Programs: Expanded Edition. Washington, DC: The National Academies Press. doi: 10.17226/1535.


4 Evaluating Health Education and Risk Reduction Projects

This chapter focuses on CDC's new direct grants program to fund health education/risk reduction projects of community-based organizations (CBOs). In the past, CDC's Center for Prevention Services (CPS) has provided funding to CBOs via two mechanisms: cooperative agreements with states and a cooperative agreement with the U.S. Conference of Mayors. Now CPS is poised to launch a program that will directly fund CBOs. The national objectives for the CBO projects are the same as those for CDC's counseling and testing program: behavior change, surveillance, and public education (CDC, 1988). An additional purpose of direct funding to CBOs is to reach hard-to-reach groups.

Evaluation of community-based projects presents both conceptual and practical problems that are quite different from those encountered in evaluating the media campaign or the HIV testing and counseling program. The major conceptual difficulty arises from the diversity of the activities that are offered by the projects; the major practical difficulty is the present scarcity of comprehensive information describing the projects themselves.

BACKGROUND AND OBJECTIVES

The traditional CPS instrument for implementing its funding priorities is a state cooperative agreement.1 Since 1985, when agreements for funding

1 For specific and well-defined products, CDC has occasionally executed cooperative agreements with professional organizations such as the American Public Health Association, the U.S. Conference of Local Health Officers, the National Association of County Health Officers, and the Association of Teachers of Preventive Medicine.

CBOs were instituted, 59 states and localities have had rather broad discretion in the allocation of federal dollars consistent with an approved state plan; the ultimate recipients of these dollars have numbered in the hundreds. In addition, CPS has an "arm's length" relationship with dozens more CBOs through a cooperative agreement with the U.S. Conference of Mayors. The states and the Conference of Mayors have been responsible for accepting proposals from CBOs, selecting projects for funding, and providing project oversight and administration.

This arrangement contrasts with the new direct grants relationship that CPS is now implementing to fund minority and other community-based organizations, which is focused on organizations in 27 major metropolitan areas. Approximately 400 proposals have been submitted by CBOs under this program, and CPS expects that between 55 and 65 projects will soon be funded at an annual support level of $50,000-$225,000 for 3 years. The new direct grants program is designed to afford greater access to high-risk individuals as well as to permit CDC a direct link through which it can provide hands-on technical assistance to organizations for refining and evaluating their intervention projects. Moreover, the new program will permit CBOs to address problems that they and CDC jointly identify as important and that can be dealt with most effectively through local organizations.

Although CPS has not yet had any experience dealing directly with CBOs, another branch of CDC has. Since August 1988, the National AIDS Information and Education Program (NAIEP) has extended grants or contracts to 33 minority CBOs. Some evaluation data have been collected from these projects during two or more site visits by project officers, whose main task has been to lend technical assistance to the grantees, and through the filing of an annual report (not standardized) by each CBO.
Process evaluations are in progress, but no outcome evaluations have been conducted of the NAIEP program.

Consequently, the new direct grants program presents a special challenge to CPS to work with the diversity represented by CBOs and the specific contextual frameworks that define who is proposed to get what services, when, by what means, and to what end. As one example of the extent of this challenge, the management information systems that have been developed for monitoring, tracking, and evaluating official health agency project activities are likely to be of limited usefulness when applied to the broad range of CBO activities.

CBOs tend to be narrowly focused with respect to their mission, primary constituency, and target population. The feature that distinguishes

them from other organizations is their evolution from and their involvement with the community they propose to serve. Seldom, however, does any single CBO evolve to a point where it can mobilize all of the relevant local resources because communities, especially urban inner-city communities, are complex, diverse entities that are segmented by race, ethnicity, sexual orientation, and other individual and social characteristics.

Thus, the CBOs that have been grantees through the Conference of Mayors agreement, for example, are quite diverse in both the primary constituencies they serve and the activities they propose to undertake. Constituent groups vary by race, age, sexual preference, religious orientation, language, addictive behavior, occupation (e.g., farm workers, migrant workers, prostitutes), and other factors (e.g., runaway and "throwaway" youth). In terms of project activities, panel members identified at least 15 different types of activity, including conducting educational seminars and workshops to train AIDS educators; compiling, publishing, and distributing printed material; developing counseling sessions; engaging in parades and street fairs; sponsoring AIDS-related performing arts activities; establishing focus groups; and producing videos and PSAs.

Given their emphasis on service delivery, the CBO projects that have been funded have not paid much attention to evaluation. Thus there is little evidence on the relative effects that such interventions have had on the target populations. At present, none of the CBOs can demonstrate the effectiveness of the interventions they have implemented.

To address this lack in the new program, CPS called for the formulation of an evaluation plan as one of the criteria (worth 15 points of a possible 100) to be used in selecting projects for funding under its new direct grants program.
The CDC guidelines on grant preparation appear to advise applicants to produce evaluation plans for both process and outcome evaluation. CDC staff informed the panel that although preapplication workshops were held to provide guidance in developing the plans, the evaluation designs submitted by the CBOs were generally inadequate.2 The panel does not find this result surprising. CBOs are not, in general, in the business of doing evaluation research. They are unlikely to employ the methodologists and statisticians required to develop a competent research design to demonstrate the effects of their programs. Furthermore, the panel's own struggle with this task led it to conclude

2 In addition to holding preapplication workshops, CPS's project officers continue to provide extensive technical assistance to CBOs after they are funded. Their work is especially critical in the first 3 to 6 months when, through telephone and site visits, project officers encourage a focused approach to CBO interventions, help practitioners establish realistic goals and work plans, cultivate input from their communities, and document the project's delivery processes.

that designing adequate evaluations for these programs is a challenging task even with expert personnel.

In the rest of this chapter the panel presents and discusses its strategies for the important and difficult task of evaluating the health education and risk reduction projects that will be undertaken by community-based organizations.

WHAT SERVICES ARE DELIVERED?

There are three evaluation strategies to be considered in determining what activities were planned by a project and what activities actually took place: case studies of individual projects, gathering data through a standardized administrative reporting system, and conducting a census or sample survey.

Case Studies of a Sample of Projects

Case studies can be immensely helpful in describing an activity and learning about the process involved in conducting it. The major limitation of a case study is that, without a supplemental statistical study of a sample of projects, the generalizability of the information cannot be known. Well-conducted case studies, however, can improve the design of a statistical study of such projects. Indeed, when the universe of projects being studied is diverse or not well understood by researchers, case studies will be a prerequisite for other research (by survey or other methods) since they can identify the questions that ought to be asked.

After reviewing abstracts of the CBO projects presently under way, the panel concluded that descriptive case studies are required and should be undertaken as a first step in evaluation. The case studies should focus on identifying the major approaches to health education and risk reduction that are being undertaken in these projects, developing a better understanding of the basic project components and processes, and better specifying the specific outcomes that are anticipated. The strategy that seems most sensible to the panel
is to use case studies with a small sample of sites as a first step, and subsequently to develop survey questionnaires or administrative reporting forms for systematic data collection from all projects (or a large sample of projects).3 For the sake of concreteness, the panel provides below an example of

3 The design suggested in the text is a stratified sample. Stratifying projects by type and then randomly choosing sample projects ensures that results will effectively be applicable to all of them. A certain amount of ambiguity will remain with regard to unique project types that fall outside the sampling scheme. When projects are quite diverse and few in number, however, case studies can be conducted of all of them. Each design choice has its virtues: studying a sample is less costly and less time-consuming; studying all the sites is more precise.

the way in which case studies might be used to provide descriptive information.

Our example focuses on identifying the approaches to health education/risk reduction currently being provided to IV drug users. In order to identify the approaches that are currently in use, preliminary case studies might be conducted of prototypical projects funded by CDC and the National Institute on Drug Abuse (NIDA). As noted above, the objectives would be to identify the major approaches for evaluation; to develop an understanding of the basic project components and processes; and, since CBO projects differ by type, to prepare a set of protocols appropriate for different types of interventions and target audiences as a basis for more rigorous evaluation. In addition, such case studies could enhance understanding of:

· the target groups and subgroups being served;
· how individuals learn about services and what other services are available to them;
· how a CBO learns about and reaches out to individuals, engages them in receiving services, and limits attrition from the project;
· what services or educational material are delivered, by whom, how often, to whom, and in what context;
· the accuracy and timeliness of the education or risk reduction information selected groups receive; and
· how funds are used.

Key elements in the design of such a study would include the selection of a sample of sites for case studies, the collection of data during site visits to each project, and the analysis and interpretation of the data.

Sample

The sample of projects selected for study might be stratified by seroprevalence rates among IV drug users in the community (e.g., high or low); the setting in which activity is conducted (e.g., street outreach or treatment program); and target group (e.g., IV drug users themselves or those at high risk of HIV infection by sexual contact with IV drug users).
These three stratification variables produce a 2 x 2 x 2 matrix of candidate projects, requiring case studies of a minimum of 8 programs if one project is chosen from each cell of the matrix.4 Whenever feasible,

4 Some cells of this matrix might, of course, be empty. For example, there might be no street outreach programs to the sexual partners of IV drug users established in low prevalence areas. If there are empty cells in the matrix, the number of case studies required would be smaller.

however, the panel believes that more than one project should be selected for each cell in order to provide a crude indication of the variation that exists between projects. In selecting projects within the cells, priority should be given to projects on the basis of the replicability (in theory) of the treatment offered in the project; the local feasibility of doing a study; and the representativeness of the program or site category.

Data Collection

To collect data from the projects, site visits of 3 to 4 days would be conducted by a team of 3-4 persons. The project team should include an AIDS education specialist, an expert on the risk behavior of IV drug users, and an evaluation design specialist. (If needed, a person to abstract project records should also be available.) The site visit team would conduct open-ended interviews with key project staff, other program personnel, key individuals in programs or communities, and at least 5-10 program participants or potential participants. The interviews would focus on program activities, opinions about those activities, and the identification of key content, process, or structural elements that may determine the success of a program.

Analysis

All project team members would prepare individual case study reports covering the process evaluation topics noted above. A summary report would then be prepared outlining the major findings of the team and identifying the elements of the project that appear appropriate for further data gathering using other procedures.

Standardized Administrative Reporting

The Center for Prevention Services plans to gather data from the direct grantees in two ways: site visits made by CDC project officers and quarterly narrative reports provided by the sites. Of the three topic areas for the quarterly narratives, one asks CBOs to describe their progress toward meeting project objectives.
This section is intended to elicit information about activities offered to date and about the target audience. Site visits are intended to monitor services and audiences. (Program staff may also code the data gathered from site visits and narratives and enter it into a data base.)

We encourage data collection on the progress toward goals and the proposed establishment of a data base. However, we believe it would be more desirable to develop a standardized administrative reporting form

for CBOs. Such a form could solicit more comprehensive periodic data on a number of program elements, including:

· the project goals and primary target group;
· characteristics of the services delivered (type, to whom, how often);
· nature of the quality controls for the delivery of services;
· number of voluntary and paid staff;
· staff qualifications; and
· staff turnover.

This is a very preliminary list of the information that should be obtained, but we believe that it is more extensive and will be more consistent than data collected by quarterly narratives. In addition, comprehensive and standardized data are more amenable to coding than elements offered in a narration. The panel anticipates that substantial refinement to the list will be possible after the proposed case studies are completed.5

The panel notes that obtaining information about project goals is especially important, and it is likely to require careful probing. A health education/risk reduction project might readily assert that its primary goal is "education." However, the information that would be most helpful is much more specific: for example, the project might be using street outreach workers to educate Hispanic persons who use IV drugs about (1) the opportunities for drug treatment that are available in their community, (2) the risks posed by sharing of injection equipment, and (3) methods for sterilizing injection equipment. Similarly, "risk reduction" might take the form of providing bleach or condoms. The intensity, duration, and character of these risk reduction activities might also vary; for example, bleach might be available at any time or only 2 hours a day. Knowing only that a project's goal is "education" or "risk reduction" will not be sufficient to allow a detailed classification of projects or ultimately to learn which intervention strategies work best.
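As a purely illustrative sketch, the program elements listed above could be captured in a structured record that makes each periodic report directly codable. The field names, types, and example values below are the present writer's assumptions, not a CDC specification:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structured record for one CBO's periodic report.
# All field names and example values are illustrative assumptions only.
@dataclass
class PeriodicReport:
    project_goal: str              # stated as specifically as possible
    primary_target_group: str
    services_delivered: List[str]  # type of each service delivered
    service_frequency: str         # how often, to whom
    quality_controls: str          # nature of controls on service delivery
    volunteer_staff: int
    paid_staff: int
    staff_qualifications: List[str]
    staff_departures: int          # turnover during the reporting period

# Example record for a hypothetical street outreach project.
report = PeriodicReport(
    project_goal="educate IV drug users about sterilizing injection equipment",
    primary_target_group="Hispanic IV drug users",
    services_delivered=["bleach distribution", "risk education sessions"],
    service_frequency="2 hours per day, street outreach",
    quality_controls="supervisor review of daily outreach logs",
    volunteer_staff=4,
    paid_staff=2,
    staff_qualifications=["certified AIDS educator"],
    staff_departures=1,
)
print(report.primary_target_group)
```

A record of this kind illustrates the contrast drawn in the text: a free-text narrative can say only "education," while a structured form forces the specific goal, audience, and delivery details into codable fields.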
The information gathered by an administrative form (together with the results of case studies and site visits by project officers) could provide a reasonably accurate, up-to-date, and comprehensive description of the services being provided by CBOs. In implementing such an administrative reporting system, the panel cautions against introducing too much complexity in its design and operation. The purpose is elementary

5 Case studies of street outreach projects should include examinations of the sometimes free-form daily logs in which outreach workers record their activities on the streets. Such analysis will further the development of forms amenable to the sometimes esoteric characteristics of service delivery made by outreach projects.

description rather than micromanagement, and the level of complexity of the system should reflect that purpose. The panel notes that CPS has developed a standardized reporting system to monitor the counseling and testing projects the agency supports. The experience CPS has acquired in developing that system in the counseling and testing arena might be exploited for CBO projects, although they are admittedly more varied and so will be more difficult to describe.

A Census or Sample Survey

If simple administrative reports of the kind just described existed and were completed regularly by CBOs (covering goals, strategies, other program elements, and the changes that occur in them), they would be useful to CDC to understand what is planned and what is under way. To the extent that the reports are well circulated among the CBOs, they might help other CBOs learn which organizations are engaged in similar activities and to build networks that facilitate mutual education. However, different, in-depth information, obtained independently of the project itself, would enhance CPS's and the public's understanding of what actually occurs in projects. In part, the need to obtain such independent information arises from the fact that no project is ever delivered quite as advertised. In addition, when projects continue over time, goals, plans, and delivery methods may change as what has been learned is used to improve the intervention.

Detailed independent information may be obtained through a census or sample survey. A census or sample survey of projects can provide a more informative statistical description of activities than is generated by administrative reports. In the case of CDC's direct grants program, a census of 60 projects may be feasible.
However, a sample survey of, for example, 30 projects would probably be sufficient for generating basic statistical information on a number of questions:

· What services are provided and to whom?
· When, how often, and for how long are services provided?
· In what context are these services delivered?

A census or sample survey might also garner information on client flow, and it could probe more deeply than an administrative reporting system into project objectives, characteristics of the services delivered, and other program elements. Responsibility for such a survey might be assigned to CDC's project officers, who already conduct semiannual or more frequent site visits to lend technical assistance and monitor progress.

The additional information generated by a census or sample survey might be helpful in describing projects to the public and to Congress and

in identifying projects that appear to have good potential for use in other locales. In addition to such uses, the statistical information produced by such a survey or census might be useful locally, especially if the CBOs assist in designing questions that are of special interest at the local level.

Recommended Combination of Strategies

Of the three strategies the panel considered to answer the question, "What project activities are planned and undertaken?", a census or sample survey seems the least attractive simply because it is premature. At least some routine administrative data and some description or "scouting" of the projects, of the sort embodied by case studies, is essential for the design and execution of a good survey. Thus, while we endorse the notion of quarterly narrative reports and site visits, we go a step further.

The panel recommends that a simple standardized reporting system for health education/risk reduction projects be developed and used to address the question of what activities are planned and under way. The panel also recommends the expanded use of case studies.

In making these recommendations, the panel believes that an administrative reporting system can provide broad statistical information, and that the case studies can provide enriched descriptions of program processes, character, and target groups and can aid in the design of further studies to collect statistical information. Regardless of which strategies are selected, the panel believes it will be sensible to sequence their use so that each strategy can take advantage of the information generated earlier in the sequence. In most circumstances, case studies should be attempted prior to other techniques since they will provide helpful insights about the topics that should be covered by surveys or administrative reporting systems.
Methodological Issues

The methodological problems involved in the two recommended strategies (case studies and administrative reporting) are not unusual. For instance, the design of a simple administrative reporting system would have to take into account the diversity of the health education/risk reduction projects being undertaken by the CBOs and the limited interest of the CBOs in providing such reports. As noted above, CDC's experience in developing a standardized reporting scheme for testing and counseling sites and its experience with the AIDS demonstration projects are both

likely to be helpful. Even a simple reporting system should have a quality control component, however. For the type of reporting system the panel recommends, this component might include at least rudimentary checks, by telephone or in other ways, of a small sample of the reports that are submitted.

Conducting good case studies of a small sample of health education/risk reduction projects to understand project activities engenders somewhat different problems. The diversity among the projects will make the organization of results difficult. Moreover, it may be anticipated that some sites may object to being "scrutinized." The panel believes, however, that obtaining the cooperation of the projects should not be a major problem if it is made clear that the studies are not being conducted to determine the success or failure of the projects.

Obtaining an adequate number of suitable persons to do the site visits might be a problem. The number of individuals expert in such studies, especially in the context of minority and other community-based organization efforts, is relatively small, and implementing this approach may require the assistance of experts from outside CDC. Some people might be recruited from the pool of researchers engaged in work on CBO efforts in manpower (see Boruch, Dennis, and Carter-Greer, 1988), drug use prevention and treatment (see Hubbard et al., 1988), and other areas.

Resources and Aspirations

Some of the resources needed to exploit the two recommended options may already be available through specialized work that CDC has been supporting in other program areas. In the next chapter we note CDC's experience in developing an administrative reporting system for its counseling and testing program. Similarly, CDC's efforts in documenting and evaluating the activities of the AIDS community demonstration projects (CDC, 1989a), using survey methods and ethnographic approaches, may prove helpful.
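The rudimentary quality-control checks mentioned earlier, in which a small sample of submitted reports is verified by telephone or in other ways, could rest on a simple random draw from the roster of funded projects. The sketch below is illustrative only; the project identifiers, sample size, and seed are invented:

```python
import random

def select_reports_for_check(project_ids, k=5, seed=1991):
    """Return k project IDs whose latest reports will be spot-checked.

    A fixed seed makes the draw reproducible, so the selection can be
    documented and audited. Both k and the seed are arbitrary here.
    """
    rng = random.Random(seed)
    k = min(k, len(project_ids))
    return sorted(rng.sample(project_ids, k))

# Hypothetical roster of 60 directly funded projects (invented IDs).
roster = [f"CBO-{n:03d}" for n in range(1, 61)]
to_check = select_reports_for_check(roster)
print(to_check)
```

Because the sample is drawn at random rather than chosen by project officers, the check gives an unbiased, if crude, reading on reporting quality across the whole program.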
The development and installation of an administrative reporting system, including technical support for quality control, can of course be done through a contract. Some related systems have been developed for more specialized CBO efforts, notably in experimental tests of manpower programs (see Betsey, Hollister, and Papageorgiou, 1985). Moreover, the case studies suggested above may help in the development of a system.

With regard to time, the panel believes that the case studies could probably be accomplished within 1 year. Major expenditures would be for project staff, case study personnel, and travel costs. Each case study, which would include staff preparation and write-up time, would require

about 40 person-days per site and travel costs of $3,000-$4,000. The case studies would be relatively inexpensive, at a cost of $15,000 per site, and one to three full-time staff persons would be required for coordination. For the money, this type of study provides a substantial amount of descriptive information. For example, it would help give some structure to a fairly diverse set of approaches and programs. The information provided through the case studies is essential before controlled studies or even useful surveys or management information systems can be developed.

DO THE PROJECTS MAKE A DIFFERENCE?

CDC has been encouraged to estimate the relative effectiveness of its various projects, that is, to determine whether the projects make a difference. Such an estimation requires some basis for comparison. One commonly proposed basis uses the before-and-after design.

Before-and-After Evaluation Designs

CDC's guidelines for evaluation of the new CBO projects propose the use of a "baseline" preprogram measure as a basis for estimating change that occurs as a consequence of these projects (CDC, 1989b). The panel believes that the nonexperimental, before-and-after evaluation design is useful in the context of process evaluation (i.e., for looking at the scope and quality of a project's services); however, when the design includes pretest and posttest measures of attitudes and behaviors in the context of outcome evaluation, its usefulness is more limited.

The evidence produced by before-and-after comparisons of a project is often ambiguous at best and misleading at worst because no control groups are involved.
The inference of cause and effect from such designs alone is generally not sustainable, and competing explanations for changes in attitudes and behaviors cannot ordinarily be ruled out.6 The panel notes that there are typically a large number of AIDS-related activities and events occurring in any given locale, which makes any attribution of behavior change in a population (i.e., any "difference") to a specific project unconvincing. While the design is thus limited, it might still be usefully applied in an evaluation that aims to test the null hypothesis that no change occurred (regardless of cause). When it can be confidently assumed that the uncontrolled factors (e.g., other programs in the community, media,

6The design invokes post hoc, ergo propter hoc arguments.

etc.) should increase the amount of change reflected in a posttest, not reduce it, a before-and-after design that shows no change has occurred between pretest and posttest is informative. Such negative findings might, in fact, be useful information at the first stage of program design and implementation, and so the design could serve a purpose in a formative evaluation. If change does occur, however, the evaluation design must be supplemented with additional research because the extent of the change that can be attributed to the program, rather than to other events or activities, cannot be reliably inferred.

The panel recognizes that some local projects may consciously apply quasi-experimental methods to design nonrandomized evaluations that produce evidence that may be defensible under certain assumptions (see Campbell and Stanley, 1966). Indeed, at least one manual has been drafted to assist AIDS prevention programs in the design of quasi-experiments (Mantell and DiVittis, in press). When evaluation results are available from well-executed quasi-experiments, the panel believes they should be reviewed and, if the results are of sufficient importance to warrant the effort, the evaluation could be repeated using a randomized design.

Because of the inferential problems that affect nonexperimental designs such as before-and-after studies, the parent committee's earlier report urged that serious consideration be given to randomized experiments to estimate the relative effects of AIDS prevention projects. This advice is extended in the discussion that follows about practical strategies to determine whether CBO projects make a difference.

Randomized Field Studies

There is ample evidence that randomized experiments designed to understand whether new projects make a difference are sometimes feasible.
Past examples of these studies include:

· schools being randomly assigned to alternative programs in studies of which programs best prevent high-risk behavior;

· small groups of clients being randomly assigned in comparative tests of different kinds of counseling programs; and

· individuals being randomly assigned to program or control conditions in studies of teacher training, hospice care for the terminally ill, and home or hospital care.

The randomized study option is more feasible for new projects than for ongoing ones. New projects can justify a control condition in which

the new services are not offered when the following conditions prevail: (1) resources are scarce and cannot provide the new services to everyone; (2) showing that the project works or does not work relative to scientific standards will lead to better approaches or to more money for approaches that have been shown to work, or both; and (3) related services are available elsewhere, so that having a no-treatment control group is not problematic.7 The panel recognizes that opportunities to conduct such studies may be few, but they should be pursued wherever possible.

One such study is now under way to test the effectiveness of a counseling intervention and to explore more efficient approaches to providing counseling. This study involves a community-based AIDS risk reduction program sponsored by the National Institute of Mental Health, in which researchers are using randomized experiments with delayed-treatment controls. In the first part of the study, Kelly and colleagues (1989) randomly assigned individuals to an experimental or control group to measure the effects of an intensive 12-week counseling program. In the second half of the study, researchers are trying to replicate the positive effects found in the first half with a shorter, 6-week counseling component.

It is much more difficult for ongoing projects to create a new control condition, except when the demand for program services exceeds the supply. Then, providing limited service on a random basis to equally deserving individuals or organizations can be justified and implemented, and it does not conflict with professional and social ethics (Riecken et al., 1974; Federal Judicial Center, 1981). At the present time, the panel does not believe it would be appropriate to interfere with the ongoing operations of the CBO projects funded through the states and the Conference of Mayors in order to impose "control" groups.
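A delayed-treatment randomized design of the kind described above might be sketched as follows. The participant identifiers and group sizes are invented for illustration and are not drawn from the Kelly study.

```python
import random

def delayed_treatment_assignment(participants, seed=0):
    """Randomly split participants into an immediate-treatment group and a
    delayed-treatment control group.  The control group receives the same
    intervention after the comparison period, so no one is denied services."""
    rng = random.Random(seed)       # a fixed seed makes the assignment auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"P{i:03d}" for i in range(40)]   # hypothetical enrollees
immediate, delayed = delayed_treatment_assignment(participants)

# Outcomes are compared while the delayed group is still waiting; the delayed
# group is then offered the identical program.
print(len(immediate), len(delayed))               # 20 20
```

Recording the seed and the assignment list supports the kind of quality control on the integrity of assignments that an independent evaluation team would need.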
In reaching this conclusion, the panel was impressed by the practical difficulty of such a strategy, particularly given the relatively modest levels of funding that have been provided to these projects. However, for projects that are just now being funded (or that may be funded in the future), the panel recommends that randomized field studies be attempted. The panel considered two options for conducting these studies: conduct evaluations of all CBO projects as a funding requirement or select a sample of CBOs for evaluation and provide incentives to elicit the cooperation of those CBOs.

The panel rejects the first option of requiring that all CBOs undergo evaluation using randomized field assignments. The recent history of

7Availability of services also means accessibility. Thus, a given CBO intervention that is uniquely accessible to members of a community would not meet the condition for withholding the service.

field studies in the social sciences suggests that requiring randomized studies of all projects in a program, even in cases in which the projects are designed to be uniform, is not reasonable. Sample sizes in some projects may be too small to use in randomized studies except through a cooperative, multisite effort. Moreover, many sites will not have the capacity, regardless of technical support, or the willingness, regardless of incentives, to cooperate in controlled tests. In addition, not all projects have activities and purposes that lend themselves to comparative tests. Finally, not all projects will be "important" enough to evaluate (in the sense of estimating their effects): that is, the investment of evaluation resources may not be justified, given the small size, limited replicability, or idiosyncratic nature of the project.

The panel therefore argues for the second option of evaluating a sample of CBO projects. To proceed with this approach, the panel suggests four steps:

1. Identify those CBO projects that are important enough to justify the costs of studies and that are technically appropriate for comparative randomized assignment. The selection criteria for such projects should include the replicability (in theory) of what is offered in the project; the local feasibility of doing a study; and the representativeness of the project or site category.

2. Develop incentives, notably funding, for the selected CBOs to encourage their cooperation in comparative randomized tests.

3. Use an independent evaluation team to design the experiment in each site, execute the randomized assignment of participants, monitor the integrity and maintenance of the assignments, and analyze results. The evaluators must be both technically able and capable of working with CBOs. The evaluators will also be responsible for analyzing data, estimating the relative effects of the project on the groups, and reporting to CDC.

4.
Establish an advisory board to oversee the evaluations, perform periodic, simultaneous secondary analysis of the data produced by the projects, and conduct cross-site analyses. The cross-site analyses are likely to be essential given the expected diversity of even 10-20 projects.8

8Where increased confidence is wanted, such as in decisions to deploy an intervention nationwide, error can be reduced by repeating the experiment in other settings. The resulting cross-site distribution of effect sizes will show the range of effects likely to be encountered when the program is implemented nationwide. Moreover, cross-site analyses of obtained effect sizes can be tested for homogeneity (the consistency of effect of an intervention), a measure of robustness. No criteria exist for the number of repetitions needed to be secure about results; rather, judgment on the part of the analysts ought to suffice.

EVALUATING CBO PROJECTS | 97

The feasibility of such strategies has been demonstrated in a variety of areas. The cross-site oversight approach to independent projects, for instance, has been used in the multisite experiments on police handling of domestic violence being conducted by the National Institute of Justice (1989). The single-team approach to evaluation design and execution has worked well in recent assessments of training programs for a variety of vulnerable populations in projects supported by the Ford Foundation, U.S. Department of Labor, Rockefeller Foundation, U.S. Department of Education, and others (Turnbull, 1989).

Methodological Issues

The methodological problems that will be encountered in carrying out randomized field studies for health education/risk reduction projects are likely to be similar to those encountered in other social science experiments. They include:

· determining eligibility criteria;

· identifying and recruiting individuals and ensuring samples of sufficient size;

· random assignment and quality controls on the integrity of assignments;

· tracking individuals;

· using sufficiently sensitive measures to assess response to the project; and

· coping with missing data problems and fallible measurements.

However, the diversity typical of health education/risk reduction projects that are developed by CBOs will multiply these problems. For instance, working with a sample of 20 or more projects might be appropriate in order to span the range of activities being conducted by the CBOs, but the practical difficulties of coping with 20 different kinds of response variables may make a sample of this size impossible to handle. Dealing with these problems requires considerable skill, but such skills are available, as are standard approaches to resolving some of the issues. For example, past experience with similar research indicates

that pre- and posttest behavior should be assessed by self-reports of participants to interviewers or ethnographers who are independent of the project being evaluated. Experience in AIDS-related research on sexual behavior and IV drug use9 will be helpful in designing measurement procedures for obtaining complete information on risk behaviors, and cognitive pretesting should be beneficial in the refinement of these procedures (see Lessler, Tourangeau, and Salter, 1989). To maintain credible samples, it would appear that tracking by name may be necessary, although O'Reilly reports some success using an approach that preserves respondent anonymity (CDC, 1989a). If names are used, experience indicates that periodic recontacting of the sample facilitates tracing of persons who change residence and reduces sample attrition.10 Similarly, appropriate confidentiality guarantees and compensation (in some form) appear to be critical to achieving high levels of cooperation and low levels of sample attrition over time.

Resources and Aspirations

Controlled experimentation of the kind the panel suggests is a labor-intensive effort. It should only be undertaken when there is a clear understanding of the major, active components in the health education/risk reduction project or projects at issue and where there is a commitment of support for the 3 or more years necessary to design, execute, and analyze such studies. To carry out evaluation using controlled field studies will require a joint commitment by the projects, federal agencies, and the research team. Success will hinge on the compatibility of the collaborators and the leadership and scientific support provided by CDC and by the oversight board.

The skills needed to conduct high-quality randomized studies of new projects are not commonly found among CBO administrators and staff.
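One example of the analytic work involved is the cross-site homogeneity testing of effect sizes noted earlier (footnote 8). A minimal sketch of Cochran's Q statistic follows; the site-level effect sizes and variances are invented for illustration.

```python
def cochran_q(effects, variances):
    """Cochran's Q: precision-weighted squared deviation of site effect sizes
    around their weighted mean.  Under homogeneity, Q is distributed as
    chi-square with k - 1 degrees of freedom (k = number of sites)."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))

# Invented standardized effect sizes and sampling variances from four sites.
effects = [0.30, 0.25, 0.35, 0.28]
variances = [0.01, 0.02, 0.015, 0.012]

q = cochran_q(effects, variances)
CHI2_95_DF3 = 7.815               # upper 5% point of chi-square with 3 df
homogeneous = q < CHI2_95_DF3
print(f"Q = {q:.2f}; effects consistent across sites: {homogeneous}")
```

A small Q, as here, is consistent with one intervention effect common to all sites; a large Q would signal site-to-site variation that the advisory board's cross-site analyses would need to explain.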
The requisite technical skills are more common in academic environments, although experience doing AIDS prevention through CBOs is not. These facts suggest that joint efforts involving CBOs and academic researchers will be required to carry out randomized studies that produce high-quality evidence. CDC's (1989a) experience with the AIDS community demonstration projects may be helpful in developing these joint efforts insofar as they involve demonstrations with strong ties to academic institutions. The types of collaborative linkages that would be useful to

9See Chapters 2 and 3 of Turner, Miller, and Moses (1989) for a review of this research.

10Such periodic recontact is done for the purpose of staying in touch with sample persons and not necessarily to gather data.

develop between the evaluator and the service provider in private or public community organizations are well described elsewhere: see, for instance, Bangser (1985) on tests of programs for the adult mentally retarded; Boruch, Dennis, and Carter-Greer (1988) on tests of programs for minority female single parents; and Garner and Fisher (1988) on randomized field experiments to learn how to reduce domestic violence most effectively.

Finally, with regard to the aspirations for such evaluations, the panel concluded that some projects fielded by CBOs cannot be evaluated in the sense of discovering whether they make a difference.11 Some organizations will not be able to commit the resources of their staffs to lengthy study; some will not be willing to participate; some will be too small (have too few potential clients), and so on. This does not mean, however, that such projects are ineffective. To be credible, judgments about whether a project makes a difference must be based on evidence, and in some projects such evidence may simply be impossible to generate.

WHAT WORKS BETTER?

The preceding section proposed that a small sample of the new projects to be funded under CDC's direct grants program to CBOs be tested against control conditions. The object of such a test is to understand whether the project "works," that is, has beneficial effects. A similar strategy might be used in the context of either new or ongoing projects to determine which of two or more approaches to a problem works better.

Such comparative tests might take a number of forms. One example might involve augmenting existing or standard approaches in ways that arguably are at least as effective as the standard, for example, increasing the time dedicated to certain education activities or increasing the number of outreach staff, to determine whether, indeed, the additional resources lead to remarkably better results.
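If such a comparative test were eventually run, the analysis of a binary outcome under the standard regimen (program A) versus the augmented one (program B) might look like the following sketch; the outcome counts are invented for illustration.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent proportions,
    using the pooled estimate under the null hypothesis of no difference."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical trial: 150 participants per arm; 45 successes under the
# standard regimen (A), 66 under the augmented regimen (B).
z = two_proportion_z(success_a=45, n_a=150, success_b=66, n_b=150)
significant = abs(z) > 1.96        # two-sided test at the 5% level
print(f"z = {z:.2f}, significant at 5%: {significant}")
```

Note that this simple comparison assumes the two arms stay separate; the contamination problem discussed below would bias the observed difference toward zero.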
Another possibility might be comparative tests that involve adding new elements, for example, distribution of bleach, condoms, or other material assistance purported to reduce risks, to understand whether these materials produce an effect. Alternatively, the new regimen might involve changes in the type of staff made available to serve the community's needs, the frequency of service or outreach, or other aspects of service delivery.

Although such testing will ultimately be valuable, the panel concludes that it would be premature to begin developing strategies to conduct such tests at the present time. Attempts to test the relative efficacy

11On the other hand, in the sense of discovering what is being delivered, process evaluations of all projects can be conducted (whether they all should be evaluated is another question).

of different intervention approaches mounted by CBOs should await the completion of the first phase of field studies outlined above. The results of those experiments and the experience gained in conducting them should guide subsequent efforts to test the relative effects of alternative interventions. In reaching this conclusion the panel also notes that when individuals are randomly assigned to alternative regimens within a site, a distinctive problem arises in addressing the question "What works better?" This problem concerns the capacity of CBOs to run side-by-side variations and the ability of the evaluator to ensure that individuals assigned to each variation continue within that variation. The panel believes that, in many cases, CBOs will find it difficult to provide two different educational regimens. Furthermore, maintaining a separation between the two groups or regimens can present difficulties. The separation is essential to a fair comparison between the groups, but it may be subverted if individuals assigned to one regimen interact often with individuals assigned to the other. The type of interaction that can affect comparative tests (e.g., exchanging information or sharing risk reduction material) is arguably low for some selected audiences, such as IV drug users in areas in which there is no well-established user community. Interaction may be great, however, within some groups (the "sharing" of doses of drugs in some medical clinical trials has been reported), which may diminish the ability to detect differences between the effects of program A and program B.

REFERENCES

Bangser, M. R. (1985) Lessons from Transitional Employment: The STETS Demonstration for Mentally Retarded Workers. New York: Manpower Demonstration Research Corporation.

Betsey, C. L., Hollister, R. G., and Papageorgiou, M. R., eds. (1985) Youth Employment and Training Programs: The YEDPA Years.
Report of the NRC Committee on Youth Employment Programs. Washington, D.C.: National Academy Press.

Boruch, R. F., Dennis, M., and Carter-Greer, K. (1988) Lessons from the Rockefeller Foundation's experiments on the minority female single-parent program. Evaluation Review 12(4):396-426.

Campbell, D. T., and Stanley, J. S. (1966) Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.

Centers for Disease Control (CDC) (1988) Announcement No. 901. September 20. Federal Register 53(182):36492-36493.

Centers for Disease Control (CDC) (1989a) AIDS Community Demonstration Projects Progress Report: 1989. Atlanta, Ga.: Centers for Disease Control.

Centers for Disease Control (CDC) (1989b) Cooperative Agreements for Minority and Other Community Based Human Immunodeficiency Virus (HIV) Prevention Projects. Announcement Number 908. Washington, D.C.: U.S. Department of Health and Human Services.

EVALUATING CBO PROJECTS ~ 1Ol Federal Judicial Center (1981) Experimentation in the low. Washington, D.C.: Federal Judicial Center Garner, J. and Fisher, C. (1988) Policy experiments come of age. September-October. NU Reports 211:2-6. Hubbard, R. L., Marsden, M. E., Cavanaugh, E., Rachal, J. V., and Ginzberg, H. M. (1988) Role of drug-abuse treatment in lionizing the spread of AIDS. Reviews of Infectious Diseases 10:377-384. Kelly, J. A., St. Lawrence, J. S., Hood, H. V., and Brasfield, T. L. (1989) Behavioral intervention to reduce AIDS risk activities. Journal of Consulting and Clinical Psychology 57:60-67. Lessler, J., Tourangeau, R., and Salter, W. (1989) Questionnaire design in the cognitive research laboratory. Vital and Health Statistics 6~1~: May, 1989. Mantell, J. E., and Di~ttis, A. (In press) Evaluating AIDS Prevention Programs: A Guidebook for the Health Educator. New York: Gay Men's Health Crisis, Inc. National Institute of Justice (1989) Spouse Assault Replication Project Review Team. Washington, D.C.: U.S. Department of Justice. Riecken, H. W., and others (1974) Social Experimentation: A Method for Planning and Evaluating Social Programs. New York: Academic Press. Trumbull, B., ed. (1989) Meeting the President's Mandate for Accountability of Education Projects: Report to the U.S. Department of Education. Washington, D.C.: Policy Studies Associates.
