
1 Design and Implementation of Evaluation Research
Pages 15-33

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 15...
... Although the characteristics of AIDS intervention programs place some unique demands on evaluation, the techniques for conducting good program evaluation do not need to be invented. Two decades of evaluation research have provided a basic conceptual framework for undertaking such efforts (see, e.g., Campbell and Stanley [1966] and Cook and Campbell [1979]).
From page 16...
... Questions about a project's implementation usually fall under the rubric of process evaluation. If the investigation involves rapid feedback to the project staff or sponsors, particularly at the earliest stages of program implementation, the work is called formative evaluation.
From page 17...
... Process evaluation can also play a role in improving interventions by providing the information necessary to change delivery strategies or program objectives in a changing epidemic. Research designs for process evaluation include direct observation of projects, surveys of service providers and clients, and the monitoring of administrative records.
From page 18...
... EVALUATION RESEARCH DESIGN
Process and outcome evaluations require different types of research designs, as discussed below. Formative evaluations, which are intended to both assess implementation and forecast effects, use a mix of these designs.
From page 19...
... Record keeping or service inventories are probably the easiest research designs to implement, although preparing standardized internal forms requires attention to detail about salient aspects of service delivery.
Outcome Evaluation Designs
Research designs for outcome evaluations are meant to assess principal and relative effects.
From page 20...
... believes that it would be unwise to rely on matching and adjustment strategies as the primary design for evaluating AIDS intervention programs. With differently constituted groups, inferences about results are hostage to uncertainty about the extent to which the observed outcome actually ...

4. This weakness has been noted by CDC in a sourcebook provided to its HIV intervention project grantees (CDC, 1988:F-14).
From page 21...
... Dividing a singly constituted group into two random and therefore comparable subgroups cuts through the tangle of causation and establishes a basis for the valid comparison of respondents who do and do not receive the intervention. Randomized experiments provide for clear causal inference by solving the problem of group comparability, and may be used to answer the evaluation questions "Does the intervention work?"
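To make the mechanics of this design concrete, the following is a minimal sketch, not drawn from the report, of dividing a singly constituted pool into two random subgroups. The client identifiers, group sizes, and the random_assignment helper are hypothetical; the only point it illustrates is that chance, rather than self-selection or referral, determines who receives the intervention first.

import random

def random_assignment(participants, seed=12345):
    """Split a single pool of participants into two comparable subgroups.

    Returns (intervention_group, comparison_group). The fixed seed is used
    only so the assignment can be reproduced and audited.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                  # chance alone decides group membership
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical example: 200 anonymized client identifiers from one site
clients = ["client-%03d" % i for i in range(200)]
intervention_group, comparison_group = random_assignment(clients)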
From page 22...
... For example, a study cited in Chapter 4 used a randomized delayed-treatment experiment to measure the effects of a community-based risk reduction program. However, such a strategy may be impractical for several reasons, including:
· sites waiting for funding for an intervention might seek resources from another source;
· it might be difficult to enlist the nonfunded site and its clients to participate in the study;

5. The significance tests applied to experimental outcomes calculate the probability that any observed differences between the sample estimates might result from random variations between the groups.
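As a rough illustration of the idea in footnote 5, and not taken from the report, the sketch below uses a permutation test: it asks how often a difference in group means at least as large as the observed one would arise if group labels were reassigned purely at random. The outcome scores and the permutation_test helper are hypothetical.

import random

def permutation_test(treatment, control, n_permutations=10000, seed=0):
    """Estimate the probability that the observed difference in group means
    could be produced by random variation between the groups alone."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)            # reassign group labels at random
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical risk-reduction scores for two randomly formed groups
treatment_scores = [0.62, 0.71, 0.55, 0.68, 0.74, 0.59, 0.66, 0.70]
control_scores = [0.48, 0.53, 0.61, 0.50, 0.57, 0.45, 0.52, 0.56]
p_value = permutation_test(treatment_scores, control_scores)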
From page 23...
... The Federal Judicial Center (1981) offers five threshold conditions for the use of random assignment.
From page 24...
... This opposition and reluctance could seriously jeopardize the production of reliable results if it is translated into noncompliance with a research design. The feasibility of randomized experiments for AIDS prevention programs has already been demonstrated, however (see the review of selected experiments in Turner, Miller, and Moses, 1989:327-329).
From page 25...
... This is a salient principle in the design and execution of intervention programs as well as in the assessment of their results. The adequacy of the proposed sample size (number of treatment units)
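The excerpt breaks off here, but the passage concerns whether the proposed number of treatment units is large enough to detect a meaningful effect. As a hedged sketch, not from the report, the usual normal-approximation formula for a two-group comparison of means is shown below; the effect size, significance level, and power are hypothetical planning values.

import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a standardized
    difference in means of `effect_size`, using
        n = 2 * ((z_(1 - alpha/2) + z_power) / effect_size) ** 2
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# Hypothetical planning values: detect a moderate standardized effect
# (d = 0.4) with 80% power at a two-sided 5% significance level
print(n_per_group(0.4))    # about 99 participants in each group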
From page 26...
... randomized study is feasible, it is superior to alternative kinds of studies in the strength and clarity of whatever conclusions emerge, primarily because the experimental approach avoids selection biases.7 Other evaluation approaches are sometimes unavoidable, but ordinarily the accumulation of valid information will go more slowly and less securely than in randomized approaches. Experiments in medical research shed light on the advantages of carefully conducted randomized experiments.
From page 27...
... In proposing that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention programs, the panel also recognizes that there are situations in which randomization will be impossible or, for other reasons, cannot be used.
From page 28...
... Attention should be paid to geographic areas with low and high AIDS prevalence, as well as to subpopulations at low and high risk for AIDS.
Research Administration
The sponsoring agency interested in evaluating an AIDS intervention should consider the mechanisms through which the research will be carried out as well as the desirability of both independent oversight and agency in-house conduct and monitoring of the research.
From page 29...
... (Appendix A discusses contracting options in greater depth.) Both of these approaches accord with the parent committee's recommendation that collaboration between practitioners and evaluation researchers be ensured.
From page 30...
... Agency In-House Team
As the parent committee noted in its report, evaluations of AIDS interventions require skills that may be in short supply for agencies invested in delivering services (Turner, Miller, and Moses, 1989:349). Although this situation can be partly alleviated by recruiting professional outside evaluators and retaining an independent oversight group, the panel
From page 31...
... This is particularly the case for outcome evaluations, which are ordinarily more difficult and expensive to conduct than formative or process evaluations. And those costs will be additive with each type of evaluation that is conducted.
From page 32...
... Smaller investments in evaluation risk studying an inadequate sample of program types and may also invite compromises in research quality. The nature of the HIV/AIDS epidemic mandates an unwavering commitment to prevention programs, and those prevention activities require a similar commitment to the evaluation of the programs themselves.
From page 33...
... (1989) Cost-effectiveness analysis of AIDS prevention programs: Concepts, complications, and illustrations.

