Improving Democracy Assistance: Building Knowledge Through Evaluations and Research (2008)

Chapter: 9 An Evaluation Initiative to Support Learning the Impact of USAID's Democracy and Governance Programs


9 An Evaluation Initiative to Support Learning the Impact of USAID’s Democracy and Governance Programs

Introduction

Nearly two decades after the U.S. government and other donors began making major investments in promoting democracy and governance (DG) abroad, a number of international studies found that surprisingly little hard empirical evidence exists about the impact of these investments (see Chapter 2 for a discussion of these studies). New cross-national quantitative research suggests that DG funding on average has spurred democracy, but this analysis reveals nothing about the efficacy of specific projects or activities—such as local government capacity building, investments in civil society organizations, or judicial training—that have come to dominate the U.S. Agency for International Development (USAID) DG menu (Al-Momani 2003; Finkel et al. 2007, 2008; Kalyvitis and Vlachaki 2007; Azpuru et al. 2008). Decades of monitoring and process evaluation reports have yielded significant amounts of data on outputs (e.g., local governments supported, nongovernmental organizations (NGOs) funded, judges trained) and valuable reflections on the process of delivering DG assistance. But as discussed in earlier chapters, they have so far provided little evidence that meets accepted standards of impact evaluation about whether these projects have strengthened local governments, contributed to more robust civil societies, or helped create more legitimate judicial sectors in the countries in which they have been implemented.

Five years from now, the committee hopes that USAID will be in a position not only to clearly and persuasively identify the effects of its DG programs but also to claim leadership in the procedures for conducting sound impact evaluations of them where feasible and appropriate. To do this, USAID must invest in creating an ethos of evaluation, so that at least some of its DG projects are seen as presenting valuable opportunities to learn about what works and what does not in encouraging the growth of democratic institutions and values around the world.

Earlier chapters analyzed current USAID approaches to assessment and evaluation and proposed ways to provide the evidence of project impact that USAID needs, both for its own programming and for presenting and defending its programs to the broader policy community in Washington and internationally. They also focused on the specific policy and process changes that the committee believes are needed to help USAID overcome concerns that hinder undertaking sound impact evaluations and to augment USAID’s overall learning to support DG programming. This chapter outlines a suggested strategy for USAID and its Strategic and Operational Research Agenda (SORA) to implement such changes.

The committee recommends a special initiative—a synthesis of many of its earlier proposals for what USAID should do in the future—to examine the feasibility of applying the most rigorous impact evaluation methods to DG projects. Recognizing both the current skepticism in the DG assistance community about impact evaluations and the significant organizational barriers that their implementation faces given current U.S. contracting and management practices, the committee’s recommendation is relatively modest: more in the way of undertaking a pilot or set of demonstration projects within the current USAID structure.

Providing Leadership and Strategic Vision

Obtaining more impact evaluations to determine the effects of DG programs is chiefly a matter of setting priorities, and that is the domain of leadership. Strong leadership is essential if USAID is to become an organization that prizes learning about the successes and failures of its DG projects, whether launched in the missions, regional bureaus, or the central DG office. Because DG programs are such an important—and often controversial—part of U.S. foreign policy, the committee recommends that leadership come from the top, in the form of a DG evaluation initiative led by a senior USAID official. This initiative should be guided by a policy statement outlining the strategic role of investments in impact evaluations of DG programming. It is particularly important that the “vision” behind impact evaluations make clear that gaining knowledge of what works and what does not work is the primary goal. Impact evaluations should thus be targeted as far as possible to study projects as designed and carried out; the discussion in Chapters 6 and 7 shows that actual projects—not just artificial or deceptively simple versions of them—could likely be given sound impact evaluations, including the most effective randomized designs.

In addition, missions and implementers with generally good records will be positively recognized, and not sanctioned, if they uncover sound evidence that programs do not work or work poorly. This statement would provide a valuable opportunity to adjust the balance of motivations that currently drive monitoring and evaluation (M&E) in DG. The administrator should see the need for this initiative, both to ensure the sound and effective use of the considerable increases in budgetary resources going into DG programs in the past five years and to create a leading edge for revitalizing evaluation across the agency. [Footnote: A 2006 study from the National Research Council addressed the broader issues of the decline in evaluation capacity across USAID (NRC 2006).]

The initiative would begin a conscious and deliberate effort to undertake the highest-quality impact evaluations (including randomized designs where possible) in order to restore a better balance among different types of M&E activities, which are now largely focused on tracking project outputs or very proximate outcomes. Impact evaluations would help USAID accumulate knowledge that would (1) distinguish project models that work from those that do not, (2) identify the conditions under which particular approaches are more or less effective, and (3) help USAID avoid costly investments that may cause harm or may simply be ineffective.

The committee’s charge is limited to recommendations for improving USAID’s ability to evaluate its DG projects, but there could be advantages to making this an agency-wide initiative. USAID implements social programs in many parts of the agency, so the changes the committee recommends could yield much wider benefits. As discussed in Chapter 2, the World Bank has taken this approach through its Development Impact Evaluation (DIME) Initiative, and NGOs such as the Poverty Action Lab at the Massachusetts Institute of Technology and the Evaluation Gap Working Group of the Center for Global Development are working to promote impact evaluations for a range of social programs. [Footnote: Information about the evaluation gap initiative can be found at http://www.cgdev.org/section/initiatives/_active/evalgap. Accessed August 27, 2007. Information about the Abdul Latif Jameel Poverty Action Lab can be found at http://www.povertyactionlab.com/. Accessed on August 3, 2007. Information about the DIME initiative can be found at http://econ.worldbank.org/WBSITE/EXTERNAL/EXTDEC/0,,contentMDK:20381417~menuPK:773951~pagePK:64165401~piPK:64165026~theSitePK:469372,00.html. Accessed on August 3, 2007.] This is a time when many policymakers, both within and outside the United States, are calling for reinvigoration and rethinking of foreign assistance programs (among myriad sources, see, e.g., Lancaster 2000, 2006; National Endowment for Democracy 2006; Epstein et al. 2007; HELP Commission 2007; Hyman 2008).

In addition to its program benefits, a DG evaluation initiative could place USAID among those in the forefront of improving development policy. Although there are sound reasons to think that impact evaluations may often not prove feasible, and committee member Larry Garber has often noted such concerns, the potential gains in accurate and defensible knowledge where such evaluations do prove feasible would be considerable. USAID is unique among donors in the range of assistance projects and the number of countries in which it operates at any given time. The committee is thus unanimous in recommending that USAID undertake a pilot program to learn whether impact evaluations will yield new insights into the effectiveness of DG projects.

Implementing the Vision: The Evaluation Initiative

Improving the evaluation of DG programs should embrace a multitiered approach. Not all projects need be, or should be, chosen for the most intensive evaluation using the techniques of randomized assignment to treatment and control groups outlined in Chapter 5. Neither USAID staff nor their implementing partners currently have the capacity to implement impact evaluations widely, and these skills require time and experience to develop. Moreover, as already discussed, the skepticism the committee encountered about whether impact evaluations were feasible persuaded members that a well-organized piloting of impact evaluations on a few select programs would be the best way to start. Moving too quickly or too sweepingly could impose an unacceptably high cost on USAID’s efforts to assist the development of democracy and good governance throughout the world.
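To make the logic of such a randomized design concrete, the following is a minimal sketch in Python. It is an illustration only: the localities, the sample size, the outcome measure, and the 0.3 "program effect" are all invented for the example and are not drawn from this report or from any USAID project.

    # Minimal sketch of randomized assignment to treatment and control
    # groups, the kind of design discussed in Chapter 5. All names and
    # numbers here are hypothetical illustrations, not USAID data.
    import random

    random.seed(42)  # make the example reproducible

    # Hypothetical evaluation universe: 60 localities eligible for a
    # local-government capacity-building project.
    localities = [f"locality_{i:02d}" for i in range(60)]

    # Randomize assignment: shuffle the list, then split it in half.
    random.shuffle(localities)
    treatment, control = localities[:30], localities[30:]

    def mean(xs):
        return sum(xs) / len(xs)

    # outcome() stands in for a real endline measurement of one unit
    # (e.g., an index of local service delivery). Here it is simulated
    # noise plus a made-up effect of 0.3 for treated localities.
    def outcome(unit, treated):
        return random.gauss(0.0, 1.0) + (0.3 if treated else 0.0)

    treated_scores = [outcome(u, True) for u in treatment]
    control_scores = [outcome(u, False) for u in control]
    print(f"estimated impact: {mean(treated_scores) - mean(control_scores):+.2f}")

The point of the design lies in the shuffle: because assignment is random, the two groups are comparable in expectation, so the simple difference in mean endline outcomes can be attributed to the project rather than to preexisting differences between the groups.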

Tasks for the DG Evaluation Initiative

The committee strongly recommends that, to accelerate the building of a solid core of knowledge regarding project effectiveness, the DG evaluation initiative should immediately develop and undertake a number of well-designed impact evaluations that test the efficacy of key project models or core development hypotheses that guide USAID DG assistance. A portion of these evaluations should use randomized designs, as these are the most accurate and credible means of ascertaining program impact. By key models, the committee refers to projects that (1) are implemented in a similar form across multiple countries and (2) receive substantial funding (examples include projects to support local government, civil society, and judicial training). By core hypotheses, the committee refers to the assumptions guiding USAID project design that, whether drawn from experience or prevailing ideas about how democracy is developed and sustained, have not been tested as empirical propositions. Examples include the assumption that public service delivery improves if citizens have oversight over the spending of public monies or the idea that exposure to democratic practices increases people’s faith in democratic institutions.

The DG evaluation initiative should identify three or four program models that are widely used in DG promotion and two or three core hypotheses that guide DG thinking on democracy assistance and then plan and conduct impact evaluations of these models/hypotheses across a range of countries or contexts over the next five years. As many of these as possible should be chosen to offer feasible designs for random assignment evaluations. However, for important programs for which USAID desires impact evaluations but for which randomization is not feasible, carefully developed alternative designs, of the types discussed in Chapter 5, should be developed and implemented. At the end of this five-year period, USAID would have:

• practical experience in implementing the evaluation designs that can indicate where such approaches are feasible, what the major obstacles are to wider implementation, and whether and how these obstacles can be overcome;

• where the evaluations prove feasible, a solid empirical foundation to begin (1) assessing the validity of some of the key assumptions that underlie DG projects and (2) learning which commonly used projects work and which do not in achieving program goals; and

• the basis for judging how widely to apply such impact evaluations to DG program evaluations in the future.

For the majority of USAID DG projects, however, the goal should be more modest. Where USAID mission directors request them, the initiative should provide support and advice to help the missions request and oversee good-quality impact evaluations that pay attention to all three elements of good evaluation practices: a focus on outcomes, good baseline measurements, and comparison with untreated groups. Evaluations should include pre- and postintervention outcome measures, along with, where possible, an analysis of outcomes in a relevant control group. As Chapters 5, 6, and 7 demonstrated, a wide variety of evaluation designs aside from randomized assignments are available to help USAID accumulate systematic evidence of the efficacy of particular approaches in order to guide its decision making as new investments are planned.
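Where randomization is not feasible, the pre/post-with-comparison-group logic just described is often operationalized as a difference-in-differences calculation. The sketch below shows that arithmetic with invented numbers; a real evaluation would estimate the four quantities from baseline and endline measurements of treated and untreated groups, and the estimate is credible only under the assumption that the two groups would have followed similar trends in the absence of the project.

    # Sketch of a difference-in-differences estimate combining baseline
    # (preintervention) and endline (postintervention) outcome means for
    # a treated group and an untreated comparison group. The four values
    # are invented for illustration only.
    baseline = {"treated": 0.42, "comparison": 0.40}  # preintervention means
    endline  = {"treated": 0.55, "comparison": 0.46}  # postintervention means

    # Each group's change over time...
    change_treated = endline["treated"] - baseline["treated"]            # 0.13
    change_comparison = endline["comparison"] - baseline["comparison"]   # 0.06

    # ...and the difference between those changes. Subtracting the
    # comparison group's change nets out trends that would have occurred
    # without the project (the "parallel trends" assumption).
    impact_estimate = change_treated - change_comparison
    print(f"difference-in-differences estimate: {impact_estimate:+.2f}")  # +0.07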

To assist in the effort, the committee recommends that the USAID administrator consider establishing a social sciences advisory group for the agency. This group could play a useful role in advising on the design of the evaluation initiative, helping work through issues that arise during implementation, and developing a peer review process for assessing the evaluations undertaken during the initiative.

Resources

The five-year DG evaluation initiative should be supported with special, dedicated resources outside the usual project structure. Supporting the initiative with special funds would be another signal of the strong commitment to change. The committee is not able to provide an estimate of the likely cost of the initiative, in part because the difficulties in obtaining estimates of what USAID currently spends on M&E provide no basis for comparison. Some of the essential components are discussed below to provide a rough basis for making an estimate. But the important point is that the funds not come out of current mission program budgets that are already stretched thin.

It is also important that the resources be used to support both the special impact evaluations chosen as the chief task of the DG evaluation initiative and efforts by country missions to improve their evaluations or conduct their own impact evaluations on chosen projects. The initiative should thus make its resources and expertise available to mission directors who want its support in conducting impact evaluations or otherwise changing their mix of M&E activities, in order to make the initiative an asset to the DG officers in the field rather than an additional unfunded burden.

Capacity

One of the biggest challenges facing the initiative relates to capacity. Over the past four decades, the structure of USAID has been transformed, moving away from an in-house professional staff of development experts with a significant and substantive role in projects toward an arrangement in which development projects are prioritized, solicited, approved, and overseen by USAID officers, but projects are largely designed, carried out, and evaluated by outside implementers (NRC 2006). This shift has led to an increasing focus on time-consuming issues of grant and contract management rather than project design and evaluation. This long-term shift has taken place in parallel with the more recent changes in agency policy described in Chapter 2 toward an increased emphasis on project monitoring and the use of evaluations to respond periodically to management needs rather than systematically assess project impacts.

One consequence of these changes in policy and in the responsibilities of USAID staff has been the erosion of the skill base and expertise required to design and oversee impact evaluations for a variety of programs and contexts. The DG officers the committee encountered were experienced and knowledgeable in substantive matters, but even if they had training in general social science research methods, they rarely had training or experience with impact evaluation design. The evaluation capacity of USAID’s DG programs, like other capabilities, has thus increasingly shifted to the implementers who design and carry out the projects. Although the committee found in its own field visits that DG officers were, in general, quite willing to work with the committee’s consultants who were evaluation experts and that the DG officers were open to considering new approaches to testing the efficacy of their programs, few of the officers thought they were capable of judging and overseeing varied impact evaluation designs without additional assistance and resources.

The expertise needed among USAID professionals and, in particular, DG officers to support the initiative deserves particular attention. USAID’s past hiring in the DG field has stressed bringing in individuals with practical or theoretical training in democratic processes and institutions. This will continue to be the main area for DG expertise, but it is clearly distinct from, and not sufficient for, providing expertise in the full range of project evaluation strategies. The World Bank, health care agencies, and other foreign assistance organizations regularly hire Ph.D.-level researchers whose advanced training focused on carrying out experimental and statistical evaluation analyses to support their subject matter experts. To increase its in-house capacity to support improved evaluations, USAID will need to hire more individuals with Ph.D.s in the social sciences whose training was strong in techniques of experimental and statistical analysis that can be applied to DG projects. The committee recommends that USAID acquire sufficient internal expertise in this area to both guide an initiative at USAID headquarters and provide advice and support to field missions as a key element of the initiative.

The DG office, like other parts of USAID, has made use of short-term appointments to augment its expertise. In the committee’s judgment, however, if the recommended evaluation initiative is accepted, the practice of having an occasional Ph.D.-trained experimental analyst as a fellow in the DG office can be helpful but will probably not be sufficient. As discussed further below, valuable assistance could be provided by outside experts through USAID’s various contracting mechanisms, but the leadership and confidence that come with in-house knowledge will be important to the success of the proposed initiative.

For many years the lack of staff capacity was offset by an active agency-wide centralized evaluation office (as in most bilateral and multilateral development agencies)—the Center for Development Information and Evaluation (CDIE). The DG office in particular was the subject of many detailed CDIE evaluations, including substantial comparative studies of DG projects (see, e.g., Blair and Hanson 1994; de Zeeuw and Kumar 2006). As already discussed, these were generally process evaluations and not formal impact evaluations, but they did provide systematic research intended to gather lessons and compare experiences. With the increased emphasis on project monitoring, however, CDIE had grown gradually weaker in recent years and was recently absorbed into the office of the new director of foreign assistance in the State Department.

Whether or not an independent central evaluation office should be restored is beyond this committee’s charge, but the committee believes the capacity of USAID headquarters to provide significant resources and expertise to assist DG officers in the field (and perhaps other USAID programs as well) who wish to develop impact evaluations of their programs would be a valuable augmentation of USAID’s in-house resources.

Partnerships to Add Capacity from Outside USAID

While the committee believes that a substantial augmentation of USAID’s internal capacity for evaluation design is necessary for the proposed evaluation initiative to be effective, there is no reason that USAID’s efforts to improve evaluation must be purely an in-house affair. The need for supplemental outside capacity is particularly acute with regard to impact evaluations and broad-based learning. There is no need to keep on staff sufficient experts on evaluation design to provide all the assistance requested by country missions in that regard if USAID can find other means to deliver the required technical support to field staff at critical moments of project design, implementation, and evaluation. And many of USAID’s organizational learning activities can and should be enriched by partnerships with academic institutions and think tanks exploring similar issues.

USAID has a number of options through its current contracting mechanisms to acquire this support. The committee’s discussions in Washington and during its field visits suggest that a significant number of implementers already have or could readily add the necessary expertise in impact evaluations; the problem has been a lack of demand for impact evaluations as part of calls for proposals, rather than a lack of capacity among implementers. [Footnote: Local grantees, such as NGOs, pose a different problem. Although it was found in the field visits that many local partners understood the concepts of improved outcome measures and impact evaluations, few had the training and capacity to implement new practices without assistance.]

As discussed earlier, the committee believes that it is important to maintain independence between those implementing a program and those responsible for its evaluation, but this could be achieved in a number of ways.

Universities also offer a major source of expertise related to high-quality impact evaluations. Many university-based scholars already serve as consultants to USAID implementers on a range of DG issues. Increasingly, scholars are also partnering directly with international development agencies and NGOs to design and undertake systematic program evaluations. Mechanisms such as the Democracy Fellows program allow USAID to bring scholars onto its staff for short-term appointments.

Moreover, there is ample precedent in USAID for drawing on the expertise and resources of universities rather than individual scholars. Over several decades USAID established itself as a pioneer in research leading to development in the field of agriculture. The agency accomplished this through a wide array of partnerships (usually constructed in the form of “cooperative agreements”) with U.S. land grant colleges and universities. These were institutions that had long been carrying out the research needed to achieve better agricultural outcomes. Land grant officials were accustomed to working with state agricultural extension services, for example, providing them with technical support to detect, diagnose, and cure outbreaks of diseases and infestations threatening crops and livestock. The research was not limited to agricultural production itself but dealt with a wide range of issues, including rural credit, in which Ohio State University played a key role, or land tenure, in which the Land Tenure Center at the University of Wisconsin became the world leader. Those partnerships expanded beyond the borders of the United States into international networks of research centers dedicated to agricultural research and extension. A prime illustration is Zamorano, in Honduras, but there are many others.

When USAID embarked on democracy programs as a major effort distinct from its other programs, it did not make a comparable investment in basic research partnerships with universities to provide additional knowledge and intellectual capacity. In most cases the focus was and remains on doing democracy rather than studying how to do democracy. There were and are important exceptions, and in addition some universities are major implementers of USAID DG programs, such as SUNY Albany’s long-term efforts at legislative strengthening or the work of the IRIS Center at the University of Maryland on issues related to economic development and governance. [Footnote: Further information about the IRIS program may be found at http://www.iris.umd.edu/ and about SUNY Albany’s Center for Legislative Development at http://www.albany.edu/cld/.]

Although not necessary for the initial DG evaluation initiative, for the longer term USAID might consider investing resources to develop a set of agency-university partnerships designed to facilitate high-quality evaluations and research in particular sectors or issues.

These partners should also be involved in designing and implementing a range of discussion/learning activities for DG officers in regard to evaluations and other research on democracy. Possible models include the “centers of excellence” funded by the U.S. Department of Homeland Security or the National Institutes of Health. In addition to providing expertise to advise programming and research to advance knowledge, such agency-university centers could assist DG—and USAID more broadly—in developing a standardized training module on evaluation techniques for DG program staff.

Agenda for USAID and SORA

As part of its charge from USAID, the committee was asked to recommend a “refined and clear overall research and analytic design that integrates the various research projects under SORA into a coherent whole in order to produce valid and useful findings and recommendations for democracy program improvements.” [Footnote: As discussed in Chapter 1, in 2000 the Office of Democracy and Governance in the Bureau for Democracy, Conflict, and Humanitarian Assistance created SORA, which consists of a number of research activities. SORA’s goal is to improve the quality of U.S. government DG programs and strategic approaches by (1) analyzing the effectiveness and impact of USAID DG programs since their inception and (2) developing specific findings and recommendations useful to democracy practitioners and policymakers.] Various parts of this design have been dealt with in depth in earlier chapters and will not be repeated here. But the committee does want to summarize the essential elements it believes could enable SORA to continue to serve as a major resource for USAID in studying the effectiveness of its programs and providing knowledge to guide policy planning.

Retrospective Studies

SORA began its work by exploring how USAID might mine the wealth of its experience with DG programs around the world to inform its future work. Based on the study by Bollen et al. (2005) and its own investigations, the committee found that the records and evaluations of past USAID DG projects could not provide the requisite baseline, outcome, and comparison group data needed to do retrospective impact evaluations of those projects. Therefore the committee recommends that the most useful retrospective studies that USAID could support, if it chooses to, would be long-term comparative case studies that examine the role of democracy assistance in a variety of trajectories and contexts of democratic development. A diverse and theoretically structured set of case studies could provide insights into overall patterns of democratization that could improve strategic assessment and planning (see Chapter 4).

If USAID chooses first to take advantage of current research in the academic and policy communities, it could undertake an effort to engage systematically with those producing research and serve as a vital bridge to accumulate and disseminate evidence and findings in the most policy-friendly format possible. If USAID chooses to support case study research of its own, the committee has suggested some key characteristics for a successful research design.

Strategic Assessment

Chapter 3 made the case for a significant effort by USAID, if possible in cooperation with other donors, to support the development of a set of “meso-level” indicators that would be the best focus for USAID’s efforts to track and assess countries’ progress or problems with democratization. This would be a long-term and expensive effort, but there are already substantial numbers of candidate indicators that could potentially contribute to such an index (see, e.g., the review by Landman 2003). If the United States and other donors are going to continue to support the development of democracy worldwide, the committee strongly believes that it is time to invest the resources needed to provide high-quality indicators comparable to those that have been developed over time in other economic and social fields. Whether or not SORA or the Office of Democracy and Governance became the home for such an effort, its recent experience with a major quantitative assessment of the impact of U.S. democracy assistance (Finkel et al. 2007, 2008) and its understanding of the needs of DG officers in Washington and in the field would make it a logical place from which such an initiative could be developed.
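As a purely hypothetical illustration of one step such an indicator effort would involve, the sketch below standardizes a few invented candidate indicators and averages them into a composite score for each country. The indicator names, the values, and the unweighted z-score averaging are all assumptions made for this example; the report does not prescribe any particular aggregation method.

    # Toy sketch of combining candidate indicators into a composite
    # "meso-level" index. All names and values are hypothetical, and
    # simple z-score averaging is only one of many aggregation choices.
    import statistics

    # Hypothetical scores for three countries on three candidate indicators,
    # each measured on its own (incompatible) scale.
    indicators = {
        "judicial_independence":     {"A": 3.1,  "B": 5.6,  "C": 4.2},
        "local_govt_accountability": {"A": 40.0, "B": 72.0, "C": 55.0},
        "civil_society_density":     {"A": 0.8,  "B": 2.4,  "C": 1.5},
    }

    def z_scores(values):
        # Standardize one indicator so it is comparable to the others.
        mu = statistics.mean(values.values())
        sd = statistics.pstdev(values.values())
        return {country: (v - mu) / sd for country, v in values.items()}

    standardized = {name: z_scores(vals) for name, vals in indicators.items()}

    # Composite index: unweighted mean of the standardized indicators.
    composite = {
        country: statistics.mean(standardized[name][country] for name in standardized)
        for country in ["A", "B", "C"]
    }
    print(composite)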

Improving Monitoring and Evaluation

This chapter has outlined the proposed evaluation initiative the committee believes should be the core of the effort to improve USAID’s ability to assess the effectiveness of its projects in the future. The committee’s recommendations for high-level leadership would support day-to-day implementation of the initiative and provide a central focus. One of the frequent comments that the field teams heard from DG officers was the desire for advice and assistance in understanding and developing impact evaluations, and this is a role SORA could readily play. It would also be a logical starting point if the recommendations for a wider effort to restore USAID’s evaluation capacity were implemented (NRC 2006:90-91). SORA could also be given responsibility for developing the social sciences advisory group and the broader partnerships with universities that the committee recommends. These could both contribute to the work of the evaluation initiative and support learning from retrospective case studies.

Active Learning

While it will take time for the results of the evaluation initiative to mount and provide evidence for the positive or negative impact of various USAID DG projects, USAID can and should take advantage of other avenues to learn about DG assistance. The case studies and other analyses recommended in this report would be an essential part of this effort, as would regular opportunities to discuss DG officers’ experiences and academic research on democratization. Active organizational learning means much more than simply having such research materials available for DG staff to peruse or view on the Web. As discussed in Chapter 8, it means having DG staff actively engaged with such materials through discussions and meetings with the authors of such research, probing to seek the lessons contained in the research. The continuing pilot effort for the “Voices from the Field” project discussed in Chapter 8 could over time become a key instrument in acquiring and disseminating insights from active practitioners as another element in this commitment to learning.

The committee thus recommends that part of the agenda for the Office of Democracy and Governance and the final part of the DG evaluation initiative should be a provision for active learning through regular meetings of DG staff with academics, NGOs, and think tank researchers who are exploring such issues as trajectories of democracy, the progress of democracy in various regions or nations, and the reception of DG programs in various settings. These need not all be in Washington but could include meetings in the field focused on regional issues or certain types of DG programs (e.g., having a conference in Africa on anticorruption programs that draws in regional DG staff). The planning for such meetings could involve partnerships with academics, think tanks, local partners, or other DG assistance donors.

Taken together and supported by the leadership of USAID, the SORA program and the wider efforts of the DG office and USAID discussed more broadly throughout this report would provide USAID with the capacity to effectively evaluate and continuously improve its work to support democratic development.

Role of Congress and the Executive Branch

USAID cannot undertake the evaluation initiative and other efforts recommended here alone. A significant barrier to change is the agency’s uneasy relationship with Congress and uncertainty regarding its evolving relationship with the other parts of the Executive Branch.

Across the world, and across the U.S. government, there are efforts to improve results, accountability, and organizational knowledge of foreign assistance. The committee hopes that the efforts of SORA and the recommendations in this report will form part of this broad movement to reform foreign aid. However, such improvement will only come with a commitment to learning what works and what does not, in a spirit that avoids blame and offers credit for learning that advances the effectiveness of aid. Military and medical institutions have learned that simply punishing failures leads to efforts to hide or cover up problems and thus to those problems being prolonged. Greater progress toward the overall goals is obtained when people are encouraged to report unintended problems or setbacks and are not penalized for them. Congress and the Executive Branch must take the position on foreign aid that learning of a program’s ineffectiveness, although it may lead to ending that particular program, will not be used to undermine foreign aid in general or those who worked on that program. Indeed, given the currently uncertain knowledge and difficult challenge of advancing democracy in diverse conditions, learning that half or two-thirds of USAID’s DG programs have real and significant effects in helping countries advance should be seen as fundamentally positive and evidence of success, while learning which half or one-third of programs are not effective should be seen as an important step in advancing the targeting and effectiveness of democracy assistance. Unrealistic expectations for universal success or rapid advances, given USAID’s modest budgets for DG assistance and the complexities and many countervailing forces that prevail in the real world of democracy assistance, will not help the necessary learning—which will involve some incremental advances and some cases of learning from setbacks—that would lead to meaningful advances in the field of foreign assistance.

Congress, of course, is ultimately responsible for seeing that the public’s money is used wisely, and it should be helped to understand that rigorous impact evaluations are an important tool in seeking that end. But more than that, the committee hopes for a renewed partnership between USAID and other branches of the federal government. Congress and Executive Branch policymakers should recognize that USAID DG programs cannot be held responsible for the successes or failures of democratic development in any given country. Even U.S. foreign policy as a whole, with all of its instruments, of which USAID DG assistance is only a small part, may be unable to have a substantial impact. In turn, USAID should be held accountable for determining the success or failure of the DG projects it undertakes and for making a systematic effort to document and learn from what works and what does not.

USAID should not fear this process; repeated studies have now shown that, overall, democracy assistance is effective (Finkel et al. 2007, 2008). What needs to be done next to improve such assistance is to learn more about which specific projects are being most effective and in what contexts. This simply cannot be done accurately without a strong commitment in both Congress and USAID to making sound impact evaluations a significant part of the agency’s overall M&E and learning activities.

Conclusions

The committee wants to restate clearly its position that impact evaluations, especially randomized evaluations, though the most potent method of evaluating the true effects of DG projects where feasible and appropriate, are not the only important form of evaluation or the only path to improved DG programming. Process evaluations, debriefings, and sharing of personal insights among DG staff (e.g., “Voices from the Field”), as well as historical studies of democratic trajectories, also are essential components of knowledge building and improving DG activities. Yet perhaps the single most significant deficiency that the committee observed in regard to USAID learning which of its DG projects are most effective, and when, was the lack of well-designed impact evaluations of such projects. The committee sees an enormous opportunity for USAID to accelerate its learning and the effectiveness of its programming by learning through the proposed evaluation initiative whether and how impact evaluations could be applied to DG projects. More broadly, leadership that creates a strong expectation that high-quality evaluations are critical to USAID’s future missions could improve USAID’s global leadership in gaining knowledge about democracy promotion, give heightened credibility to USAID’s relations with Congress, and—the committee believes—contribute greatly to achieving USAID’s goals of supporting the spread and strengthening of democratic polities throughout the world.

REFERENCES

Al-Momani, M.H. 2003. Financial Transfer and Its Impact on the Level of Democracy: A Pooled Cross-Sectional Time Series Model. Unpublished Ph.D. thesis, University of North Texas.

Azpuru, D., Finkel, S., Pérez-Liñán, A., and Seligson, M.A. 2008. Trends in Democracy Assistance: What Has the United States Been Doing? Journal of Democracy 19(2):150-159.

Blair, H., and Hanson, G. 1994. Weighing in on the Scales of Justice: Strategic Approaches for Donor-Supported Rule of Law Programs. USAID Program and Operations Assessment Report No. 7. Washington, DC: USAID Center for Development Information and Evaluation. Available at: http://www.usaid.gov/our_work/democracy_and_governance/publications/pdfs/pnaax280.pdf. Accessed on August 18, 2007.

Bollen, K., Paxton, P., and Morishima, R. 2005. Assessing International Evaluations: An Example from USAID’s Democracy and Governance Programs. American Journal of Evaluation 26:189-203.

Epstein, S., Serafino, N., and Miko, F. 2007. Democracy Promotion: Cornerstone of U.S. Foreign Policy? Washington, DC: Congressional Research Service.

Finkel, S.E., Pérez-Liñán, A., and Seligson, M.A. 2007. The Effects of U.S. Foreign Assistance on Democracy Building, 1990-2003. World Politics 59(3):404-439.

Finkel, S.E., Pérez-Liñán, A., Seligson, M.A., and Tate, C.N. 2008. Deepening Our Understanding of the Effects of U.S. Foreign Assistance on Democracy Building: Final Report. Available at: http://www.LapopSurveys.org.

HELP Commission. 2007. Beyond Assistance: The HELP Commission Report on Foreign Assistance Reform. Available at: http://helpcommission.gov/. Accessed on February 23, 2008.

Hyman, G. 2008. Assessing Secretary of State Rice’s Reform of U.S. Foreign Assistance. Carnegie Papers. Washington, DC: Carnegie Endowment for International Peace.

Kalyvitis, S.C., and Vlachaki, I. 2007. Democracy Assistance and the Democratization of Recipients. Available at: http://ssrn.com/abstract=888262.

Lancaster, C. 2000. Transforming Foreign Aid: United States Assistance in the 21st Century. Washington, DC: Peterson Institute for International Economics.

Lancaster, C. 2006. Foreign Aid: Diplomacy, Development, Domestic Policies. Chicago: University of Chicago Press.

Landman, T. 2003. Map-Making and Analysis of the Main International Initiatives on Developing Indicators on Democracy and Good Governance. Final Report. University of Essex. Available at: http://www.oecd.org/dataoecd/0/28/20755719.pdf. Accessed on April 27, 2008.

National Endowment for Democracy. 2006. The Backlash Against Democracy Assistance. Washington, DC: National Endowment for Democracy.

NRC (National Research Council). 2006. The Fundamental Role of Science and Technology in International Development: An Imperative for the U.S. Agency for International Development. Washington, DC: The National Academies Press.

de Zeeuw, J., and Kumar, K. 2006. Promoting Democracy in Postconflict Societies. Boulder: Lynne Rienner.
