
Evaluating AIDS Prevention Programs: Expanded Edition (1991)

Chapter: A Collaborative Contracting Strategy


A Collaborative Contracting Strategy

In this report, the panel has recommended rigorous evaluation strategies for assessing the projects of community-based organizations (CBOs) and counseling and testing sites. In making these recommendations, we have also noted that some projects may be unable or unwilling to participate in evaluation research because they do not have the funding, time, or appropriately trained personnel to undertake the necessary tasks. In this appendix, the panel lays out a possible strategy that would ensure that projects have the necessary resources to conduct evaluation research and that would also lead to separate but mutually informing communities of evaluation experts.

The proposed tactic for evaluating projects located in CBOs and in counseling and testing sites is to use a contract bidding procedure rather than the request-for-proposal process. The contract to be bid on would describe the following:

· demographic characteristics of the target population;
· the scope of the program;
· endpoint objectives (behavioral, psychological, or biological);
· program content objectives;
· policy objectives;
· evaluation objectives for formative and process evaluations and for evaluating whether the project makes a difference and what works better; and
· evaluation methods.

A prospective contractor, in collaboration with a competent evaluation group, would explain in detail its approach to designing a program to meet the contract requirements. The prospective contractor would bid on the contract, relating the bid to program design and to the contract requirements. The evaluation procedure, responsibilities, and budget would be predetermined and included as a contract requirement. The evaluation processes, including random assignment, monitoring, and data collection and analyses, would be dictated primarily by the scope of the program and by whether outcomes are to be assessed internally (as the responsibility of the contractor) or externally (as the responsibility of an outside evaluator to analyze multiple CBO contractors).

The contracting option also engenders further choices. Among these is deciding whether to develop separate contractual arrangements with each project that agrees to evaluation or to develop a single large contract to cover the evaluation of a sample of sites. Developing separate contractual arrangements may involve an independent evaluation team submitting evaluation proposals that show how the CBO would collaborate with such a team and how the evaluation would be carried out. The independence of the evaluation team is justified on grounds of credibility and scientific integrity; however, the collaboration with the CBO is essential.

For CDC to contract separately with six to eight CBO-evaluator groups would be feasible but managerially burdensome. Nevertheless, the strategy arguably is sensible on scientific grounds. In effect, over the long run the approach builds separate but mutually informing communities of experts. The current dependence of AIDS research on only a few universities and research institutes is often sound strategy for massive evaluation but does little to develop local capacity for routinely and locally generated high-quality evaluation. Local evaluative capacity avoids dependence on a single principal investigator who makes decisions about the evaluation of a range of complex projects. Campbell (1987:402) argues that splitting large studies into two or more parallel studies is desirable on grounds that it increases the "size and autonomy of a mutually monitoring scientific community." The latter is essential in building scientific understanding in prevention program research and evaluation.

Contracting with one evaluation group that collaborates with, perhaps, six to eight sites is also feasible, to judge from work by Hubbard and colleagues (1988), among others. This approach is managerially less burdensome than contracting independently with evaluator-project combinations, but has the disadvantages of being vulnerable to the will of a single decision maker (i.e., the principal investigator) and of not building local expertise in the design, implementation, and analysis of each project's evaluation.

REFERENCES

Campbell, D. (1987) Guidelines for monitoring the scientific competence of preventive intervention research centers: An exercise in the sociology of scientific validity. Knowledge: Creation, Diffusion, Utilization 8:389-430.

Hubbard, R. L., Marsden, M. E., Cavanaugh, E., Rachal, J. V., and Ginzburg, H. M. (1988) Role of drug-abuse treatment in limiting the spread of AIDS. Reviews of Infectious Diseases 10:377-384.


