. "8 Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning." Improving Democracy Assistance: Building Knowledge Through Evaluations and Research. Washington, DC: The National Academies Press, 2008.
highly decentralized agency, and country missions have substantial discretion in how they implement and manage their programs.
The committee also recognizes that the USAID contracting process is already dauntingly complex and time-consuming, absorbing much of the time DG officers have to develop and manage their projects. The committee is therefore cautious about recommending specific solutions for contracting evaluations, especially since contract and procurement processes are not an area in which the committee has any special expertise. What follows is instead a set of principles, drawn from research and field studies, that the committee believes will help USAID obtain sound impact evaluations of DG projects. Examples of possible approaches are offered, but the actual design and implementation of any changes would rest with USAID. Given how difficult it is to change contract management practices within the current realities of USAID programming, the DG evaluation initiative recommended in the next chapter could be an opportunity to try out different approaches.
A key problem, not unique to DG or USAID, is how to give DG staff and implementers incentives to undertake and complete sound and credible impact evaluations. The DG officers and implementers whom the committee and its field teams met shared a strong desire to be successful in promoting democracy. They are drawn to their work because they believe that democracy is a better form of government and that foreign assistance can help bring about democratic development. The problem, however, is how to promote democracy. From the outset, DG officers and implementers alike recognized that “doing democracy” would be much more difficult than work in other development areas, such as health and agriculture, where causal relationships are better understood and impacts are easier to measure. There may be formidable barriers to good policy and implementation in those areas as well, but at least there is greater consensus about the basic questions of theory and measurement.
The uncertainty about fundamental aspects of DG reinforces the normal human and bureaucratic incentives to avoid documented failure, a problem that has been cited as affecting evaluations across USAID, not simply DG (Clapp-Wincek and Blue 2001; Savedoff et al. 2006). In the absence of a strong learning culture that encourages open reflection and recognizes the uncertainties surrounding DG programming, carrying out projects that produce no effect (or a negative effect) could understandably be considered a threat to a USAID officer’s career. Similarly, program implementers worry about their organizations’ futures and the results of