Improving Democracy Assistance: Building Knowledge through Evaluations and Research

9 An Evaluation Initiative to Support Learning the Impact of USAID's Democracy and Governance Programs

INTRODUCTION

Nearly two decades after the U.S. government and other donors began making major investments in promoting democracy and governance (DG) abroad, a number of international studies found that surprisingly little hard empirical evidence exists about the impact of these investments (see Chapter 2 for a discussion of these studies). New cross-national quantitative research suggests that DG funding on average has spurred democracy, but this analysis reveals nothing about the efficacy of specific projects or activities—such as local government capacity building, investments in civil society organizations, or judicial training—that have come to dominate the U.S. Agency for International Development (USAID) DG menu (Al-Momani 2003; Finkel et al. 2007, 2008; Kalyvitis and Vlachaki 2007; Azpuru et al. 2008). Decades of monitoring and process evaluation reports have yielded significant amounts of data on outputs (e.g., local governments supported, nongovernmental organizations (NGOs) funded, judges trained) and valuable reflections on the process of delivering DG assistance. But as discussed in earlier chapters, they have so far provided little evidence that meets accepted standards of impact evaluation about whether these projects have strengthened local governments, contributed to more robust civil societies, or helped create more legitimate judicial sectors in the countries in which they have been implemented.

Five years from now, the committee hopes that USAID will be in a position not only to clearly and persuasively identify the effects of its DG programs but also to claim leadership in the procedures for conducting
sound impact evaluations of them where feasible and appropriate. To do this, USAID must invest in creating an ethos of evaluation, so that at least some of its DG projects are seen as presenting valuable opportunities to learn about what works and what does not in encouraging the growth of democratic institutions and values around the world.

Earlier chapters analyzed current USAID approaches to assessment and evaluation and proposed ways to provide the evidence of project impact that USAID needs both for its own programming and for presenting and defending its programs to the broader policy community in Washington and internationally. They focused on the specific policy and process changes that the committee believes are needed to help USAID overcome concerns that hinder undertaking sound impact evaluations and to augment USAID's overall learning to support DG programming. This chapter outlines a suggested strategy for USAID and its Strategic and Operational Research Agenda (SORA) to implement such changes.

The committee recommends a special initiative—a synthesis of many of its earlier proposals for what USAID should do in the future—to examine the feasibility of applying the most rigorous impact evaluation methods to DG projects. Recognizing both the current skepticism in the DG assistance community about impact evaluations and the significant organizational barriers that their implementation faces given current U.S. contracting and management practices, the committee's recommendation is relatively modest: more in the way of undertaking a pilot or set of demonstration projects within the current USAID structure.

PROVIDING LEADERSHIP AND STRATEGIC VISION

Obtaining more impact evaluations to determine the effects of DG programs is chiefly a matter of setting priorities, and that is the domain of leadership.
Strong leadership is essential if USAID is to become an organization that prizes learning about the successes and failures of its DG projects, whether launched in the missions, regional bureaus, or the central DG office. Because DG programs are such an important—and often controversial—part of U.S. foreign policy, the committee recommends that leadership should come from the top—in the form of a DG evaluation initiative led by a senior USAID official. This initiative should be guided by a policy statement outlining the strategic role of investments in impact evaluations of DG programming. It is particularly important that the “vision” behind impact evaluations make clear that gaining knowledge of what works and what does not work is the primary goal. Impact evaluations should thus be targeted as far as possible to study projects as designed and carried out; the discussion in Chapters 6 and 7
shows that actual projects—not just artificial or deceptively simple versions of them—could likely be given sound impact evaluations, including the most effective randomized designs. In addition, missions and implementers with generally good records will be positively recognized, not sanctioned, if they uncover sound evidence that programs do not work or work poorly. This statement would provide a valuable opportunity to adjust the balance of motivations that currently drive monitoring and evaluation (M&E) in DG.

The administrator should see the need for this initiative, both to ensure the sound and effective use of the considerable increases in budgetary resources going into DG programs in the past five years and to create a leading edge for revitalizing evaluation across the agency.1 The initiative would begin a conscious and deliberate effort to undertake the highest-quality impact evaluations (including randomized designs where possible) in order to restore a better balance among different types of M&E activities, which are now largely focused on tracking project outputs or very proximate outcomes. Impact evaluations would help USAID accumulate knowledge that would (1) distinguish project models that work from those that do not, (2) identify the conditions under which particular approaches are more or less effective, and (3) help USAID avoid costly investments that may cause harm or may simply be ineffective.

The committee's charge is limited to recommendations for improving USAID's ability to evaluate its DG projects, but there could be advantages to making this an agency-wide initiative. USAID implements social programs in many parts of the agency, so the changes the committee recommends could yield much wider benefits.
As discussed in Chapter 2, the World Bank has taken this approach through its Development Impact Evaluation (DIME) Initiative, and NGOs such as the Poverty Action Lab at the Massachusetts Institute of Technology and the Evaluation Gap Working Group of the Center for Global Development are working to promote impact evaluations for a range of social programs.2 This is a time when many policymakers, both within and outside the United States, are calling for reinvigoration and rethinking of foreign assistance programs (among myriad sources, see, e.g., Lancaster 2000, 2006; National Endowment for Democracy 2006; Epstein et al. 2007; HELP Commission 2007; Hyman 2008). In addition to its program benefits, a DG evaluation initiative could place USAID among those in the forefront of improving development policy.

1 A 2006 study from the National Research Council addressed the broader issues of the decline in evaluation capacity across USAID (NRC 2006).

2 Information about the evaluation gap initiative can be found at http://www.cgdev.org/section/initiatives/_active/evalgap. Accessed August 27, 2007. Information about the Abdul Latif Jameel Poverty Action Lab can be found at http://www.povertyactionlab.com/. Accessed August 3, 2007. Information about the DIME initiative can be found at http://econ.worldbank.org/WBSITE/EXTERNAL/EXTDEC/0,,contentMDK:20381417~menuPK:773951~pagePK:64165401~piPK:64165026~theSitePK:469372,00.html. Accessed August 3, 2007.

Although there are sound reasons to think that impact evaluations may often not prove feasible, and committee member Larry Garber has often noted such concerns, the potential gains in accurate and defensible knowledge where such evaluations do prove feasible would be considerable. USAID is unique among donors in the range of assistance projects and the number of countries in which it operates at any given time. The committee is thus unanimous in recommending that USAID undertake a pilot program to learn whether impact evaluations will yield new insights into the effectiveness of DG projects.

IMPLEMENTING THE VISION: THE EVALUATION INITIATIVE

Improving the evaluation of DG programs should embrace a multitiered approach. Not all projects need be, or should be, chosen for the most intensive evaluation using the techniques of randomized assignment to treatment and control groups outlined in Chapter 5. Neither USAID staff nor their implementing partners currently have the capacity to implement impact evaluations widely, and these skills require time and experience to develop. Moreover, as already discussed, the skepticism the committee encountered about whether impact evaluations were feasible persuaded members that a well-organized piloting of impact evaluations on a few select programs would be the best way to start. Moving too quickly or too sweepingly could impose an unacceptably high cost on USAID's efforts to assist the development of democracy and good governance throughout the world.
Tasks for the DG Evaluation Initiative

The committee strongly recommends that, to accelerate the building of a solid core of knowledge regarding project effectiveness, the DG evaluation initiative should immediately develop and undertake a number of well-designed impact evaluations that test the efficacy of key project models or core development hypotheses that guide USAID DG assistance. A portion of these evaluations should use randomized designs, as these are the most accurate and credible means of ascertaining program impact. By key models, the committee refers to projects that (1) are implemented in a similar form across multiple countries and (2) receive substantial funding (examples include projects to support local government, civil society, and judicial training). By core hypotheses the
committee refers to the assumptions guiding USAID project design that, whether drawn from experience or prevailing ideas about how democracy is developed and sustained, have not been tested as empirical propositions. Examples include the assumption that public service delivery improves if citizens have oversight over the spending of public monies or the idea that exposure to democratic practices increases people's faith in democratic institutions.

The DG evaluation initiative should identify three or four program models that are widely used in DG promotion and two or three core hypotheses that guide DG thinking on democracy assistance and then plan and conduct impact evaluations of these models/hypotheses across a range of countries or contexts over the next five years. As many of these as possible should be chosen to offer feasible designs for random assignment evaluations. However, for important programs for which USAID desires impact evaluations but for which randomization is not feasible, carefully developed alternative designs, of the types discussed in Chapter 5, should be developed and implemented.

At the end of this five-year period, USAID would have:

- practical experience in implementing the evaluation designs that can indicate where such approaches are feasible, what the major obstacles are to wider implementation, and whether and how these obstacles can be overcome;
- where the evaluations prove feasible, a solid empirical foundation to begin (1) assessing the validity of some of the key assumptions that underlie DG projects and (2) learning which commonly used projects work and which do not in achieving program goals; and
- the basis for judging how widely to apply such impact evaluations to DG program evaluations in the future.

For the majority of USAID DG projects, however, the goal should be more modest.
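The core logic of the random assignment designs recommended above can be sketched in a few lines of code. The example below is purely illustrative (the units, outcome function, and effect size are invented, not drawn from any actual USAID project): units such as municipalities are randomly assigned to treatment or control, and the difference in mean outcomes between the two groups estimates the program's average effect.

```python
import random
import statistics

def randomized_impact_estimate(units, outcome_fn, seed=42):
    """Randomly assign half of the units to treatment, then estimate
    impact as the difference in mean outcomes between the groups.

    `units` is a list of unit identifiers (e.g., municipalities);
    `outcome_fn(unit, treated)` returns the measured outcome for a unit.
    Both are placeholders standing in for real field data.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treated, control = shuffled[:half], shuffled[half:]

    treated_mean = statistics.mean(outcome_fn(u, True) for u in treated)
    control_mean = statistics.mean(outcome_fn(u, False) for u in control)
    return treated_mean - control_mean  # average treatment effect estimate

# Illustrative use with simulated outcomes: a true effect of +5 points on
# some hypothetical governance score, plus unit-level noise.
def simulated_outcome(unit, treated, _rng=random.Random(0)):
    return 50.0 + (5.0 if treated else 0.0) + _rng.gauss(0, 2)

effect = randomized_impact_estimate(list(range(200)), simulated_outcome)
```

Because assignment is random, the estimate converges on the true effect as the number of units grows; with 200 simulated units the estimate lands close to the built-in +5.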
Where USAID mission directors request them, the initiative should provide support and advice to help the missions request and oversee good-quality impact evaluations that pay attention to all three elements of good evaluation practices: a focus on outcomes, good baseline measurements, and comparison with untreated groups. Evaluations should include pre- and postintervention outcome measures, along with, where possible, an analysis of outcomes in a relevant control group. As Chapters 5, 6, and 7 demonstrated, a wide variety of evaluation designs aside from randomized assignments are available to help USAID accumulate systematic evidence of the efficacy of particular approaches in order to guide its decision making as new investments are planned.
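As a hypothetical sketch of how these three elements combine in analysis (all numbers below are invented for illustration), a simple difference-in-differences calculation compares the treated group's before-after change against the change in an untreated comparison group, netting out trends that affect both groups:

```python
def difference_in_differences(treated_pre, treated_post, control_pre, control_post):
    """Estimate program impact as the treated group's before-after change
    minus the comparison group's change over the same period.
    Each argument is a list of outcome measurements (e.g., survey scores)."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)  # needs a baseline
    control_change = mean(control_post) - mean(control_pre)  # needs a comparison group
    return treated_change - control_change

# Hypothetical scores: both groups improve over time (a shared trend),
# but the treated group improves by 4 points more.
impact = difference_in_differences(
    treated_pre=[50, 52, 48], treated_post=[57, 59, 55],
    control_pre=[51, 49, 50], control_post=[54, 52, 53],
)
# impact → 4.0
```

The sketch makes the committee's point concrete: without the baseline there is no "change" to measure, and without the comparison group the shared trend would be mistaken for program impact.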
To assist in the effort, the committee recommends that the USAID administrator consider establishing a social sciences advisory group for the agency. This group could play a useful role in advising on the design of the evaluation initiative, helping work through issues that arise during implementation, and developing a peer review process for assessing the evaluations undertaken during the initiative.

Resources

The five-year DG evaluation initiative should be supported with special, dedicated resources outside the usual project structure. Supporting the initiative with special funds would be another signal of the strong commitment to change. The committee is not able to provide an estimate of the likely cost of the initiative, in part because the difficulties in obtaining estimates of what USAID currently spends on M&E provide no basis for comparison. Some of the essential components are discussed below to provide a rough basis for making an estimate. But the important point is that the funds not come out of current mission program budgets that are already stretched thin.

It is also important that the resources be used to support both the special impact evaluations chosen as the chief task of the DG evaluation initiative and efforts by country missions to improve their evaluations or conduct their own impact evaluations on chosen projects. The initiative should thus make its resources and expertise available to mission directors who want its support in conducting impact evaluations or otherwise changing their mix of M&E activities, in order to make the initiative an asset to the DG officers in the field rather than an additional unfunded burden.

Capacity

One of the biggest challenges facing the initiative relates to capacity.
Over the past four decades, the structure of USAID has been transformed, moving away from an in-house professional staff of development experts with a significant and substantive role in projects toward an arrangement in which development projects are prioritized, solicited, approved, and overseen by USAID officers, but projects are largely designed, carried out, and evaluated by outside implementers (NRC 2006). This shift has led to an increasing focus on time-consuming issues of grant and contract management rather than project design and evaluation. This long-term shift has taken place in parallel with the more recent changes in agency policy described in Chapter 2 toward an increased emphasis on project monitoring and the use of evaluations to respond periodically to management needs, rather than systematically assess project impacts.
One consequence of these changes in policy and in the responsibilities of USAID staff has been the erosion of the skill base and expertise required to design and oversee impact evaluations for a variety of programs and contexts. The DG officers the committee encountered were experienced and knowledgeable in substantive matters, but even if they had training in general social science research methods, they rarely had training or experience with impact evaluation design. The evaluation capacity of USAID's DG programs, like other capabilities, has thus increasingly shifted to the implementers who design and carry out the projects. Although the committee found in its own field visits that DG officers were, in general, quite willing to work with the committee's consultants who were evaluation experts, and that the DG officers were open to considering new approaches to testing the efficacy of their programs, few of the officers thought they were capable of judging and overseeing varied impact evaluation designs without additional assistance and resources.

The expertise needed among USAID professionals and, in particular, DG officers to support the initiative deserves particular attention. USAID's past hiring in the DG field has stressed bringing in individuals with practical or theoretical training in democratic processes and institutions. This will continue to be the main area for DG expertise, but it is clearly distinct from, and not sufficient for, providing expertise in the full range of project evaluation strategies. The World Bank, health care agencies, and other foreign assistance organizations regularly hire Ph.D.-level researchers whose advanced training focused on carrying out experimental and statistical evaluation analyses to support their subject matter experts.
To increase its in-house capacity to support improved evaluations, USAID will need to hire more individuals with Ph.D.s in the social sciences whose training was strong in techniques of experimental and statistical analysis that can be applied to DG projects. The committee recommends that USAID acquire sufficient internal expertise in this area both to guide an initiative at USAID headquarters and to provide advice and support to field missions as a key element of the initiative. The DG office, like other parts of USAID, has made use of short-term appointments to augment its expertise. In the committee's judgment, however, if the recommended evaluation initiative is accepted, the practice of having an occasional Ph.D.-trained experimental analyst as a fellow in the DG office can be helpful but will probably not be sufficient. As discussed further below, valuable assistance could be provided by outside experts through USAID's various contracting mechanisms, but the leadership and confidence that come with in-house knowledge will be important to the success of the proposed initiative.

For many years the lack of staff capacity was offset by an active agency-wide centralized evaluation office (as in most bilateral and multilateral development agencies)—the Center for Development Information and Evaluation (CDIE). The DG office in particular was the subject of many detailed CDIE evaluations, including substantial comparative studies of DG projects (see, e.g., Blair and Hanson 1994; de Zeeuw and Kumar 2006). As already discussed, these were generally process evaluations and not formal impact evaluations, but they did provide systematic research intended to gather lessons and compare experiences. With the increased emphasis on project monitoring, however, CDIE grew gradually weaker in recent years and was recently absorbed into the office of the new director of foreign assistance in the State Department. Whether an independent central evaluation office should be restored is beyond this committee's charge, but the committee believes the capacity of USAID headquarters to provide significant resources and expertise to assist DG officers in the field (and perhaps other USAID programs as well) who wish to develop impact evaluations of their programs would be a valuable augmentation of USAID's in-house resources.

Partnerships to Add Capacity from Outside USAID

While the committee believes that a substantial augmentation of USAID's internal capacity for evaluation design is necessary for the proposed evaluation initiative to be effective, there is no reason that USAID's efforts to improve evaluation must be purely an in-house affair. The need for supplemental outside capacity is particularly acute with regard to impact evaluations and broad-based learning. There is no need to keep on staff sufficient experts on evaluation design to provide all the assistance requested by country missions, if USAID can find other means to deliver the required technical support to field staff at critical moments of project design, implementation, and evaluation.
And many of USAID's organizational learning activities can and should be enriched by partnerships with academic institutions and think tanks exploring similar issues. USAID has a number of options through its current contracting mechanisms to acquire this support. The committee's discussions in Washington and during its field visits suggest that a significant number of implementers already have or could readily add the necessary expertise in impact evaluations; the problem has been a lack of demand for impact evaluations in calls for proposals, rather than a lack of capacity among implementers.3

3 Local grantees, such as NGOs, pose a different problem. Although it was found in the field visits that many local partners understood the concepts of improved outcome measures and impact evaluations, few had the training and capacity to implement new practices without assistance.

As discussed earlier, the committee believes that
it is important to maintain independence between those implementing a program and those responsible for its evaluation, but this could be achieved in a number of ways.

Universities also offer a major source of expertise related to high-quality impact evaluations. Many university-based scholars already serve as consultants to USAID implementers on a range of DG issues. Increasingly, scholars are also partnering directly with international development agencies and NGOs to design and undertake systematic program evaluations. Mechanisms such as the Democracy Fellows program allow USAID to bring scholars onto its staff for short-term appointments.

Moreover, there is ample precedent in USAID for drawing on the expertise and resources of universities rather than individual scholars. Over several decades USAID established itself as a pioneer in research leading to development in the field of agriculture. The agency accomplished this through a wide array of partnerships (usually constructed in the form of "cooperative agreements") with U.S. land grant colleges and universities. These were institutions that had long been carrying out the research needed to achieve better agricultural outcomes. Land grant officials were accustomed to working with state agricultural extension services, for example, providing them with technical support to detect, diagnose, and cure outbreaks of diseases and infestations threatening crops and livestock. The research was not limited to agricultural production itself but dealt with a wide range of issues, including rural credit, in which Ohio State University played a key role, and land tenure, in which the Land Tenure Center at the University of Wisconsin became the world leader. Those partnerships expanded beyond the borders of the United States into international networks of research centers dedicated to agricultural research and extension.
A prime illustration is Zamorano, in Honduras, but there are many others. When USAID embarked on democracy programs as a major effort distinct from its other programs, it did not make a comparable investment in basic research partnerships with universities to provide additional knowledge and intellectual capacity. In most cases the focus was, and remains, on doing democracy rather than studying how to do democracy. There were and are important exceptions, and some universities are major implementers of USAID DG programs, such as SUNY Albany with its long-term efforts at legislative strengthening or the IRIS Center at the University of Maryland with its work on issues related to economic development and governance.4

4 Further information about the IRIS program may be found at http://www.iris.umd.edu/ and about SUNY Albany's Center for Legislative Development at http://www.albany.edu/cld/.

Although not necessary for the initial DG evaluation initiative, for the longer term USAID might consider investing resources to develop a
set of agency-university partnerships designed to facilitate high-quality evaluations and research in particular sectors or issues. These partners should also be involved in designing and implementing a range of discussion and learning activities for DG officers in regard to evaluations and other research on democracy. Possible models include the "centers of excellence" funded by the U.S. Department of Homeland Security or the National Institutes of Health. In addition to providing expertise to advise programming and research to advance knowledge, such agency-university centers could assist DG—and USAID more broadly—in developing a standardized training module on evaluation techniques for DG program staff.

AGENDA FOR USAID AND SORA

As part of its charge from USAID, the committee was asked to recommend a "refined and clear overall research and analytic design that integrates the various research projects under SORA into a coherent whole in order to produce valid and useful findings and recommendations for democracy program improvements."5 Various parts of this design have been dealt with in depth in earlier chapters and will not be repeated here. But the committee does want to summarize the essential elements it believes could enable SORA to continue to serve as a major resource for USAID in studying the effectiveness of its programs and providing knowledge to guide policy planning.

Retrospective Studies

SORA began its work by exploring how USAID might mine the wealth of its experience with DG programs around the world to inform its future work. Based on the study by Bollen et al. (2005) and its own investigations, the committee found that the records and evaluations of past USAID DG projects could not provide the requisite baseline, outcome, and comparison group data needed to do retrospective impact evaluations of those projects.
Therefore the committee recommends that the most useful retrospective studies USAID could support, if it chooses to, would be long-term comparative case studies that examine the role of democracy assistance in a variety of trajectories and contexts of democratic development.

5 As discussed in Chapter 1, in 2000 the Office of Democracy and Governance in the Bureau for Democracy, Conflict, and Humanitarian Assistance created SORA, which consists of a number of research activities. SORA's goal is to improve the quality of U.S. government DG programs and strategic approaches by (1) analyzing the effectiveness and impact of USAID DG programs since their inception and (2) developing specific findings and recommendations useful to democracy practitioners and policymakers.

A diverse and theoretically structured set
of case studies could provide insights into overall patterns of democratization that could improve strategic assessment and planning (see Chapter 4). If USAID chooses first to take advantage of current research in the academic and policy communities, it could undertake an effort to engage systematically with those producing research and serve as a vital bridge to accumulate and disseminate evidence and findings in the most policy-friendly format possible. If USAID chooses to support case study research of its own, the committee has suggested some key characteristics for a successful research design.

Strategic Assessment

Chapter 3 made the case for a significant effort by USAID, if possible in cooperation with other donors, to support the development of a set of "meso-level" indicators that would be the best focus for USAID's efforts to track and assess countries' progress or problems with democratization. This would be a long-term and expensive effort, but there are already substantial numbers of candidate indicators that could potentially contribute to such an index (see, e.g., the review by Landman 2003). If the United States and other donors are going to continue to support the development of democracy worldwide, the committee strongly believes that it is time to invest the resources needed to provide high-quality indicators comparable to those that have been developed over time in other economic and social fields. Whether or not SORA or the Office of Democracy and Governance becomes the home for such an effort, its recent experience with a major quantitative assessment of the impact of U.S. democracy assistance (Finkel et al. 2007, 2008) and its understanding of the needs of DG officers in Washington and in the field would make it a logical place from which such an initiative could be developed.
Improving Monitoring and Evaluation

This chapter has outlined the proposed evaluation initiative the committee believes should be the core of the effort to improve USAID's ability to assess the effectiveness of its projects in the future. The committee's recommendations for high-level leadership would support day-to-day implementation of the initiative and provide a central focus. One of the frequent comments that the field teams heard from DG officers was the desire for advice and assistance in understanding and developing impact evaluations, and this is a role SORA could readily play. It would also be a logical starting point if the recommendations for a wider effort to restore USAID's evaluation capacity were implemented (NRC 2006:90-91). SORA could also be given responsibility for developing the social sciences advisory group and the broader partnerships with universities
that the committee recommends. These could both contribute to the work of the evaluation initiative and support learning from retrospective case studies.

Active Learning

While it will take time for the results of the evaluation initiative to accumulate and provide evidence of the positive or negative impact of various USAID DG projects, USAID can and should take advantage of other avenues to learn about DG assistance. The case studies and other analyses recommended in this report would be an essential part of this effort, as would regular opportunities to discuss DG officers' experiences and academic research on democratization. Active organizational learning means much more than simply having such research materials available for DG staff to peruse or view on the Web. As discussed in Chapter 8, it means having DG staff actively engaged with such materials through discussions and meetings with the authors of such research, probing to seek the lessons contained in the research. The continuing pilot effort for the "Voices from the Field" project discussed in Chapter 8 could over time become a key instrument in acquiring and disseminating insights from active practitioners as another element in this commitment to learning.

The committee thus recommends that part of the agenda for the Office of Democracy and Governance, and the final part of the DG evaluation initiative, should be a provision for active learning through regular meetings of DG staff with academics, NGOs, and think tank researchers who are exploring such issues as trajectories of democracy, the progress of democracy in various regions or nations, and the reception of DG programs in various settings. These meetings need not all be in Washington but could include meetings in the field focused on regional issues or certain types of DG programs (e.g., a conference in Africa on anticorruption programs that draws in regional DG staff).
The planning for such meetings could involve partnerships with academics, think tanks, local partners, or other DG assistance donors. Taken together and supported by the leadership of USAID, the SORA program and the wider efforts of the DG office and USAID discussed throughout this report would provide USAID with the capacity to effectively evaluate and continuously improve its work to support democratic development.

ROLE OF CONGRESS AND THE EXECUTIVE BRANCH

USAID cannot undertake the evaluation initiative and other efforts recommended here alone. A significant barrier to change is the agency’s
uneasy relationship with Congress and uncertainty regarding its evolving relationship with other parts of the Executive Branch. Across the world, and across the U.S. government, there are efforts to improve the results, accountability, and organizational knowledge of foreign assistance. The committee hopes that the efforts of SORA and the recommendations in this report will form part of this broad movement to reform foreign aid.

However, such improvement will come only with a commitment to learning what works and what does not, in a spirit that avoids blame and offers credit for learning that advances the effectiveness of aid. Military and medical institutions have learned that simply punishing failures leads to efforts to hide or cover up problems and thus to those problems being prolonged. Greater progress toward the overall goals is obtained when people are encouraged to report unintended problems or setbacks and are not penalized for them. Congress and the Executive Branch must take the position that evidence of a program’s ineffectiveness, although it may lead to ending that particular program, will not be used to undermine foreign aid in general or those who worked on that program. Indeed, given the currently uncertain knowledge and the difficult challenge of advancing democracy in diverse conditions, learning that half or two-thirds of USAID’s DG programs have real and significant effects in helping countries advance should be seen as fundamentally positive and evidence of success, while learning which half or one-third of programs are not effective should be seen as an important step in improving the targeting and effectiveness of democracy assistance.
Unrealistic expectations of universal success or rapid advances, given USAID’s modest budgets for DG assistance and the many countervailing forces that prevail in the real world of democracy assistance, will only impede the necessary learning, which will involve some incremental advances and some cases of learning from setbacks, that would lead to meaningful progress in the field of foreign assistance.

Congress, of course, is ultimately responsible for seeing that the public’s money is used wisely, and it should be helped to understand that rigorous impact evaluations are an important tool toward that end. But more than that, the committee hopes for a renewed partnership between USAID and other branches of the federal government. Congress and Executive Branch policymakers should recognize that USAID DG programs cannot be held responsible for the successes or failures of democratic development in any given country. Even U.S. foreign policy as a whole, with all of its instruments, of which USAID DG assistance is only a small part, may be unable to have a substantial impact. In turn, USAID should be held accountable for determining the success or failure of the DG projects it undertakes and for making a systematic effort to document and learn from what works and what does not. USAID should not fear
this process; repeated studies have now shown that, overall, democracy assistance is effective (Finkel et al 2007, 2008). What needs to be done next to improve such assistance is to learn more about which specific projects are most effective and in what contexts. This simply cannot be done accurately without a strong commitment in both Congress and USAID to making sound impact evaluations a significant part of the agency’s overall M&E and learning activities.

CONCLUSIONS

The committee wants to restate clearly its position that impact evaluations, especially randomized evaluations, are the most potent method of evaluating the true effects of DG projects where feasible and appropriate, but they are not the only important form of evaluation or the only path to improved DG programming. Process evaluations, debriefings, and the sharing of personal insights among DG staff (e.g., “Voices from the Field”), as well as historical studies of democratic trajectories, are also essential components of building knowledge and improving DG activities. Yet perhaps the single most significant deficiency the committee observed in USAID’s ability to learn which of its DG projects are most effective, and under what conditions, was the lack of well-designed impact evaluations of such projects. The committee sees an enormous opportunity for USAID to accelerate its learning and the effectiveness of its programming by discovering, through the proposed evaluation initiative, whether and how impact evaluations can be applied to DG projects.
More broadly, leadership that creates a strong expectation that high-quality evaluations are critical to USAID’s future missions could strengthen USAID’s global leadership in gaining knowledge about democracy promotion, give heightened credibility to USAID’s relations with Congress, and, the committee believes, contribute greatly to achieving USAID’s goal of supporting the spread and strengthening of democratic polities throughout the world.

REFERENCES

Al-Momani, M.H. 2003. Financial Transfer and Its Impact on the Level of Democracy: A Pooled Cross-Sectional Time Series Model. Unpublished Ph.D. thesis, University of North Texas.

Azpuru, D., Finkel, S., Pérez-Liñán, A., and Seligson, M.A. 2008. Trends in Democracy Assistance: What Has the United States Been Doing? Journal of Democracy 19(2):150-159.

Blair, H., and Hanson, G. 1994. Weighing in on the Scales of Justice: Strategic Approaches for Donor-Supported Rule of Law Programs. USAID Program and Operations Assessment Report No. 7. Washington, DC: USAID Center for Development Information and Evaluation. Available at: http://www.usaid.gov/our_work/democracy_and_governance/publications/pdfs/pnaax280.pdf. Accessed on August 18, 2007.
Bollen, K., Paxton, P., and Morishima, R. 2005. Assessing International Evaluations: An Example from USAID’s Democracy and Governance Programs. American Journal of Evaluation 26:189-203.

Epstein, S., Serafino, N., and Miko, F. 2007. Democracy Promotion: Cornerstone of U.S. Foreign Policy? Washington, DC: Congressional Research Service.

Finkel, S.E., Pérez-Liñán, A., and Seligson, M.A. 2007. The Effects of U.S. Foreign Assistance on Democracy Building, 1990-2003. World Politics 59(3):404-439.

Finkel, S.E., Pérez-Liñán, A., Seligson, M.A., and Tate, C.N. 2008. Deepening Our Understanding of the Effects of U.S. Foreign Assistance on Democracy Building: Final Report. Available at: http://www.LapopSurveys.org.

HELP Commission. 2007. Beyond Assistance: The HELP Commission Report on Foreign Assistance Reform. Available at: http://helpcommission.gov/. Accessed on February 23, 2008.

Hyman, G. 2008. Assessing Secretary of State Rice’s Reform of U.S. Foreign Assistance. Carnegie Papers. Washington, DC: Carnegie Endowment for International Peace.

Kalyvitis, S.C., and Vlachaki, I. 2007. Democracy Assistance and the Democratization of Recipients. Available at: http://ssrn.com/abstract=888262.

Lancaster, C. 2000. Transforming Foreign Aid: United States Assistance in the 21st Century. Washington, DC: Peterson Institute for International Economics.

Lancaster, C. 2006. Foreign Aid: Diplomacy, Development, Domestic Politics. Chicago: University of Chicago Press.

Landman, T. 2003. Map-Making and Analysis of the Main International Initiatives on Developing Indicators on Democracy and Good Governance. Final Report. University of Essex. Available at: http://www.oecd.org/dataoecd/0/28/20755719.pdf. Accessed on April 27, 2008.

National Endowment for Democracy. 2006. The Backlash Against Democracy Assistance. Washington, DC: National Endowment for Democracy.

NRC (National Research Council). 2006. The Fundamental Role of Science and Technology in International Development: An Imperative for the U.S. Agency for International Development. Washington, DC: The National Academies Press.

de Zeeuw, J., and Kumar, K. 2006. Promoting Democracy in Postconflict Societies. Boulder: Lynne Rienner.