Summary

Background

Over the past 25 years, the United States has made support for the spread of democracy to other nations an increasingly important element of its national security policy. Many other multilateral agencies, countries, and nongovernmental organizations (NGOs) also are involved in providing democracy assistance. These efforts have created a growing demand to find the most effective means to assist in building and strengthening democratic governance under varied conditions.

Within the U.S. government, the U.S. Agency for International Development (USAID) has principal responsibility for providing democracy assistance. Since 1990, USAID has supported democracy and governance (DG) programs in approximately 120 countries and territories, spending an estimated total of $8.47 billion (in constant 2000 U.S. dollars) between 1990 and 2005. The request for DG programs for fiscal year 2008 was $1.45 billion, which includes some small programs in the U.S. Department of State.

Despite these substantial expenditures, our understanding of the actual impacts of USAID DG assistance on progress toward democracy remains limited—and is the subject of much current debate in the policy and scholarly communities. Admittedly, the realities of democracy programming are complicated, given the emphasis on timely responses in politically sensitive environments and flexibility in implementation to account for fluid political circumstances. These realities pose particular challenges for the evaluation of democracy assistance programs.

Nonetheless, USAID seeks to find ways to determine which programs, in which countries, are having the greatest impact in supporting democratic institutions and behaviors and how those effects unfold. To do otherwise would risk making poor use of scarce funds and remaining uncertain about the effectiveness of an important national policy.

Yet USAID’s current evaluation practices do not provide compelling evidence of the impacts of DG programs. While gathering valuable information for project tracking and management, these evaluations usually do not collect data that are critical to making the most accurate and credible determination of project impacts—such as obtaining baseline measures of targeted outcomes before a project is begun or tracking changes in appropriately selected (or assigned) comparison groups to serve as a control or reference group.

USAID has been seeking better evidence for the effects of its DG projects. In 2000 the Office of Democracy and Governance created the Strategic and Operational Research Agenda (SORA). Under SORA, USAID has commissioned studies of its DG evaluations and underwritten a recent cross-national study of the effects of its democracy assistance programs since 1990. A very encouraging finding from that study is that democracy assistance does matter for democratic progress. The study (Finkel et al. 2007; see also the second-phase study, Finkel et al. 2008) found that, controlling for a wide variety of other factors, higher levels of democracy assistance are, on average, associated with movement to higher levels of democracy. These results provide the clearest evidence to date that democracy assistance contributes toward achieving its desired goals.

Unfortunately, it is also true that in a number of highly important cases—such as Egypt and post-Soviet Russia—large volumes of democracy assistance have yielded disappointing results. In addition to knowledge about general effects, USAID needs to know the positive or negative effects of specific projects and why DG assistance has been more successful in some contexts than in others. SORA turned to the National Research Council (NRC) for assistance in gaining greater insight into which democracy assistance projects are having the greatest impacts. This report is intended to provide a road map to enable USAID and its partners to build, absorb, and act on improved knowledge about assisting the development of democracy in a variety of contexts.

Charge to the Committee

The USAID Office of Democracy and Governance asked the NRC for help in developing improved methods for learning about the effectiveness and impact of its work, both retrospectively and in the future. Specifically, the project is to provide:

1. A refined and clear overall research and analytic design that integrates the various research projects under SORA into a coherent whole in order to produce valid and useful findings and recommendations for democracy program improvements.

2. An operational definition of democracy and governance that disaggregates the concept into clearly defined and measurable components.

3. Recommended methodologies to carry out retrospective analysis. The recommendations will include a plan for cross-national case study research to determine program effectiveness and inform strategic planning. USAID will be able to use this plan as the basis of a scope of work to carry out comparative retrospective analysis, allowing the agency to learn from its 25 years of investment in DG programs.

4. Recommended methodologies to carry out program evaluations in the future. The recommendations for future analysis will focus on more rigorous approaches to evaluation than those currently used to assess the impact of democracy assistance programming. They should be applicable across the range of DG programs and allow for comparative analysis.

5. An assessment of the feasibility of the final recommended methodologies within the current structure of USAID operations, and a definition of the policy, organizational, and operational changes in those operations that might improve the chances of successful implementation.

Overall Research and Analytic Design

In response to the first charge, the committee unanimously recommends a four-part strategy for gaining increased knowledge to support USAID’s DG policy planning and programming. The four parts are:

Recommendation 1: Undertaking a pilot program of impact evaluations designed to demonstrate whether such evaluations can help USAID determine the effects of its DG projects on targeted policy-relevant outcomes. A portion of these impact evaluations should use randomized designs since, where applicable and feasible, they are the designs most likely to lead to reliable and valid results in determining project effects, and because their use in DG projects has been limited. USAID should begin the pilot program by focusing on a few widely used DG program categories. The pilot evaluations should not supplant current evaluations and assessments, but impact evaluations could gradually become a more important part of USAID’s portfolio of monitoring and evaluation (M&E) activities as the agency gains experience with such evaluations and determines their value. (See Chapters 5 through 7 for a discussion of impact evaluations and how they might be applied to DG projects and Chapter 9 for the committee’s recommendations.)

Recommendation 2: Developing more transparent, objective, and widely accepted indicators of changes in democratic behavior and institutions at the sectoral level—that is, at the level of such sectors as the rule of law, civil society, government accountability, effective local government, and quality of elections. Current aggregate national indicators of democracy, such as Freedom House or Polity scores, are neither at the right level for identifying the impacts of particular USAID DG projects nor accurate and consistent enough to track modest or short-term movements of countries toward or away from greater levels of democracy. (See Chapter 3.)

Recommendation 3: Using more diverse and theoretically structured clusters of case studies of democratization and democracy assistance to develop hypotheses to guide democracy assistance planning in a diverse range of settings. Whether USAID chooses to support such studies or gather them from ongoing academic research, it is important to look at how democracy assistance functions in a range of different initial conditions and trajectories of political change. Such case studies should seek to map out long-term trajectories of political change and to place democracy assistance in the context of national and international factors affecting those trajectories, rather than focus mainly on specific democracy assistance programs. (See Chapter 4.)

Recommendation 4: Rebuilding USAID’s institutional mechanisms for absorbing and disseminating the results of its work and evaluations, as well as its own research and the research of others, on processes of democratization and democracy assistance. In recent years, USAID has lost much of its capacity to assess the impact and effectiveness of its programs. Without an active program of organizational learning, so that senior personnel and DG officers have structured opportunities to discuss the results of pilot evaluations, compare their experiences with DG programs, and discuss the research carried out by USAID and especially other scholars, implementers, and donors, the fruits of the committee’s first three recommendations will not be usefully integrated with the experience of DG officers in a way that will improve DG program planning, design, and outcomes. (See Chapters 8 and 9.)

Discussion and Strategies for Implementation

The following sections provide more detail on the reasons behind these recommendations and discuss organizational issues at USAID that will affect the agency’s ability to implement them.

Recommendation 1: Undertaking a Pilot Program of Impact Evaluations

Charges 4 and 5 asked the committee to recommend methodologies for future program evaluations and to evaluate their feasibility. These issues are addressed first, however, because the committee believes that, among the charges it was given, improving USAID’s ability to more precisely ascertain the effects of future DG programs has more potential to build knowledge of what works best in DG programming than either retrospective analyses (given the limits found in the collection of data on past DG projects) or improving the definition of democracy. The committee thus investigated USAID’s current evaluation methods and explored a range of designs for improved evaluations that could be applied to DG projects. The committee also commissioned teams of consultants to visit three diverse missions—in Albania, Peru, and Uganda—to assess the feasibility of applying those designs (in particular impact evaluations) to actual ongoing or planned DG projects. Of course, these evaluations, like all of USAID’s evaluations and research, must be part of a broader learning strategy if the agency is to benefit; these organizational aspects are discussed separately below.

What Are Impact Evaluations?

Most current evaluations of USAID DG projects, while informative and serving varied purposes for project managers, lack the designs or data needed to provide compelling evidence of whether those projects had their intended effects. An impact evaluation aims to separate the effects of a specific DG project from the vast range of other factors affecting the progress of democracy in a given country and thus to make the most precise and credible determination of how much DG projects contribute to desired outcomes.

As the committee uses the term, what distinguishes an impact evaluation is the effort to determine what would have happened in the absence of the project by using comparison or control groups, or random assignment of assistance across groups or individuals, to provide a reference against which to assess the observed outcomes for groups or individuals who received assistance. Randomized designs offer the most accuracy and credibility in determining program impacts and therefore should be the first choice, where feasible, for impact evaluation designs. However, such designs are not always feasible or appropriate, and a number of other designs also provide useful information to determine the impact of many different kinds of assistance projects. For example, when there is only one group or institution receiving assistance, comparisons may be made across time by using a set of carefully timed measures before and after the project while controlling statistically for long-term trends or key events. Impact evaluations are designed according to standard protocols of evaluation research; yet the choice of a particular design, and decisions about how to adapt the design to a particular project, require skilled craftsmanship as much as science.
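To make the comparison-group logic concrete, the following sketch (in Python, using the pandas and statsmodels libraries) estimates a project effect from baseline and endline measures on a treated group and a comparison group. Everything in it is hypothetical: the outcome index, the group sizes, and the effect sizes are invented for illustration, and the report does not prescribe this particular estimator.

```python
# A minimal, hypothetical sketch of the comparison-group logic described
# above. All data, variable names, and effect sizes are invented; this is
# not USAID's method or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical survey respondents per group per wave

rows = []
for treated in (0, 1):
    for post in (0, 1):
        # Outcome: say, a 0-100 index of civic participation. A shared
        # time trend (+3) affects everyone; the project adds +5 only for
        # the treated group after the project (the true impact).
        level = 50 + 2 * treated  # groups may start at different levels
        outcome = level + 3 * post + 5 * treated * post + rng.normal(0, 10, n)
        rows.append(pd.DataFrame(
            {"outcome": outcome, "treated": treated, "post": post}))
df = pd.concat(rows, ignore_index=True)

# Difference-in-differences: the coefficient on treated:post estimates
# the project's effect net of baseline differences and the shared trend.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # close to the true +5
```

Under randomized assignment the interaction estimate can be read causally; with a nonrandomized comparison group it rests on the additional assumption that both groups would have followed the same trend in the absence of the project.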

Current Approaches to Evaluation in USAID

The committee’s review of current approaches to the evaluation of development assistance in general, and USAID DG programs in particular, found that:

• Very few of the evaluations undertaken by international or multilateral development and democracy donors are designed as impact evaluations. There are signs that this is changing as some donors and international agencies are beginning to implement new approaches to evaluation. The Millennium Challenge Corporation and the World Bank in particular have undertaken efforts to increase the use of randomized designs in evaluations of their economic assistance and anticorruption projects. A few NGOs also have undertaken randomized impact evaluations of their democracy assistance efforts.

• Within USAID the number of evaluations has declined for all types of assistance programs. The evaluations undertaken for DG programs generally focus on implementation and management concerns and have not collected the data needed for sound impact evaluations. For example, most past evaluations of DG projects have not made comparable baseline and postproject measurements on key outcomes, and almost all past evaluations lacked data on comparison groups that did not receive assistance. This makes it nearly impossible to develop a retrospective analysis from the data in those evaluations that accurately determines the effects of DG programs.

• There is a tendency, at one and the same time, to evaluate democracy projects mainly in terms of very proximate outcome measures that chiefly assess how well the project was implemented and yet to judge the ultimate success of DG projects by whether they coincide with changes in country-level measures of national democracy such as Freedom House scores. Neither course best serves USAID’s interests in determining the effects of its DG programs. Those effects are best judged by focusing on policy-relevant objectives at the local or sectoral level that are plausible outcomes of those projects.

• Once research and evaluation are completed, there are few organizational mechanisms for broad discussion among DG officers or for integration of research and evaluation findings with the large range of analysis being carried on outside the agency.

• DG officials are genuinely interested in procedures that will help them better learn and demonstrate the impact of their projects. Yet there is considerable concern among many at USAID regarding whether missions would gain from designing or implementing rigorous impact evaluations, especially those using randomized assignments. This is due mostly to deep skepticism about the applicability of this methodology to DG programs, but also to the overall decline in support for evaluations within USAID, to a lack of specific expertise in impact evaluation design, and to issues in contracting timetables and procedures that discourage adoption of what is perceived as a more complicated approach to evaluation.

• More generally, while there are many calls from policymakers, USAID officials, and other international and national agencies and donors to better determine the effects of DG programs, there is also widespread skepticism regarding whether impact evaluations will, in fact, provide that information. One member of the committee, Larry Garber, emphatically shares these concerns. Among both scholars and policy professionals, skeptics worry that the designs for impact evaluations will prove too cumbersome or inflexible to work in fluid and politically sensitive conditions in the field; that such evaluations will be too costly or time-consuming; or that such studies, in particular randomized designs, are either unethical for or ill suited to the actual projects being carried out in DG programs.

Feasibility of Impact Evaluations for DG Projects

Recognizing the need to take such concerns seriously, the committee examined a wide range of impact evaluation designs and worked with DG officers at several missions to assess the feasibility of such designs for their current or planned activities. The committee’s field studies found that a much larger portion of USAID’s DG programs than expected—roughly half of the projects examined in Uganda and several projects in Peru and Albania—appear, in the view of the committee’s consultants, to be amenable to randomized assignment designs. Nor did these designs necessarily require major departures from current program procedures. Often, simply paying more attention to how programs were rolled out or allocated among groups scheduled to receive assistance, combined with measurements on both the groups currently receiving assistance and those scheduled to receive it in the future, would create a reasonable randomized assignment design.
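The following sketch illustrates, with invented district names and phase counts, how such a phased rollout can double as a randomized design: when a program cannot reach every site at once, randomizing the order of rollout makes the later phases a natural comparison group for the earlier ones.

```python
# A hypothetical sketch of a phased-rollout (waitlist) design. District
# names and phase counts are invented for illustration.
import random

sites = [f"district_{i:02d}" for i in range(1, 19)]  # 18 hypothetical sites
random.seed(42)
random.shuffle(sites)  # random order removes selection bias in who goes first

n_phases = 3
phases = [sorted(sites[i::n_phases]) for i in range(n_phases)]

for k, phase in enumerate(phases, start=1):
    print(f"Phase {k}: {phase}")

# While phase 1 receives the program, phases 2 and 3 serve as randomized
# comparison groups; collecting baseline and follow-up measures in all
# phases at the same times lets outcomes be compared like for like.
```

A design of this kind also eases a common ethical objection to randomization, since every site eventually receives the program; only the timing is randomized.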

In cases where randomized assignment designs were not feasible, the field teams were able to develop other designs that could offer a significant improvement in the ability to assess project effectiveness.

In addition, the committee found that many of the surveys USAID is already carrying out provide excellent baseline and comparison data for DG projects; thus the data for impact evaluations that use matched or adjusted comparison groups (rather than randomization) are in some cases already being collected and could be utilized at little additional cost.

The field teams thus concluded that it was quite feasible, at least in theory, to conduct high-quality impact evaluations of varied designs that will help USAID better discern the impacts of its DG programs. However, the committee knows that there is much skepticism regarding these procedures and, in particular, concerns—noted by Mr. Garber and by others in the democracy assistance donor community—about whether the complexity and sensitivity of DG programs will permit sound impact evaluations, especially those using randomized assignments, to be carried out. Therefore the full committee agreed that the value of such impact evaluations will have to be demonstrated in USAID’s own experience.

Strategies for Implementation

• The committee unanimously recommends that USAID move cautiously but deliberately to implement pilot impact evaluations of several carefully selected projects, including a portion with randomized designs, and expand the use of such impact evaluations as warranted by the results of those pilot evaluations and the needs expressed by USAID mission directors.

• Moreover, the committee recommends that these pilot evaluations be undertaken as part of a DG evaluation initiative with senior leadership that will also focus on improving USAID’s capacity to undertake impact evaluations and make resources and expertise available to mission directors seeking to learn about and apply impact evaluations to their projects. This DG evaluation initiative is described in more detail below.

Recommendation 2: Developing Better Sectoral-Level Indicators Measuring Democracy

In response to Charge 2, the committee reviewed the most widely used indicators of a country’s overall democratic status and considered a number of alternative approaches to developing an operational definition of democracy. This led to four key findings:

• The concept of democracy cannot, in the present state of scientific knowledge of democracies and democratization, be defined in an authoritative (nonarbitrary) and operational fashion. It is an inherently multidimensional concept, and there is little consensus over its attributes. Definitions range from minimal—a country must choose its leaders through contested elections—to maximal—a country must have universal suffrage; accountable and limited government; sound and fair justice; extensive protection of human rights and political liberties; and economic and social policies that meet popular needs. Moreover, the definition of democracy is itself a moving target; definitions that would have seemed reasonable at one time (such as describing the United States as a democracy in 1900, despite no suffrage for women and major discrimination against, and little office-holding among, minorities) are no longer considered reasonable today.

• Existing empirical indicators of overall democracy in a country suffer from flaws that include problems of definition and aggregation, imprecision, measurement errors, poor data coverage, and a lack of agreement among scales intended to measure the same qualities. There is thus no way to utilize existing macro-level indicators in a way that provides sound policy guidance or reliably tracks modest or short-term changes in a country’s democratic status. Existing indicators work best simply to roughly categorize countries as “fully democratic,” “authoritarian,” or “mixed or in between” and to identify large-scale or long-term movements in levels of democracy. They are particularly weak in assessing differences among the nondemocratic and mixed regimes that are the most important settings for USAID’s DG work.

• By contrast, indicators focused on specific sectors of democracy in a country (the sectoral level) would help USAID (1) track trends across various dimensions of democracy through time, (2) make precise comparisons across countries and regions, (3) understand the components and possible sequences of democratic transition, (4) analyze causal relationships (e.g., between particular facets of democracy and economic growth), and (5) assess the democratic profile (i.e., strengths and weaknesses across various dimensions of democracy) of countries where USAID operates.

• While the United States, other donor governments, and international agencies that are making policy in the areas of health or economic assistance are able to draw on databases that are compiled and updated at substantial cost by government or multilateral agencies mandated to collect such data, no comparable source of data on democracy at either the macro or the sectoral level currently exists. Data on democracy are instead currently compiled by various individual academics on irregular and shoestring budgets, or by NGOs or commercial publishers, using different definitions and indicators of democracy.
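As a hypothetical illustration of the aggregation problem described in the second finding, the short sketch below shows how two countries with identical aggregate scores can have very different sectoral profiles. The sector names, the 0-10 scores, and the equal-weight average are all invented; they do not correspond to any existing index.

```python
# Two hypothetical countries with the same aggregate score but different
# sectoral profiles. Sector names, 0-10 scores, and the equal-weight
# average are invented; this is not an existing index.
profiles = {
    "Country A": {"elections": 8, "rule_of_law": 2, "civil_society": 5,
                  "local_government": 5, "accountability": 5},
    "Country B": {"elections": 5, "rule_of_law": 5, "civil_society": 5,
                  "local_government": 5, "accountability": 5},
}

for country, sectors in profiles.items():
    aggregate = sum(sectors.values()) / len(sectors)
    weakest = min(sectors, key=sectors.get)
    print(f"{country}: aggregate = {aggregate:.1f}, weakest sector = {weakest}")

# Both countries average 5.0, yet only the sectoral breakdown reveals
# that Country A's rule of law lags badly, exactly the kind of difference
# a DG program planner needs to see.
```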

Strategies for Implementation

These findings have led the committee to make a recommendation that committee members believe would significantly improve USAID’s (and others’) ability to track countries’ progress and make the type of strategic assessments that will be most helpful for DG programming.

• USAID and other policymakers should explore making a substantial investment in the systematic collection of democracy indicators at a disaggregated sectoral level—focused on the components of democracy rather than (or in addition to) the overall concept. If they wish to have access to data on democracy and democratization comparable to the data relied on by policymakers and foreign assistance agencies in the areas of public health or trade and finance, a substantial government or multilateral effort to improve, develop, and maintain international data on levels and detailed aspects of democracy would be needed. This effort should not only involve multiple agencies and actors in initially developing a widely accepted set of sectoral data on democracy and democratic development but should also seek to institutionalize the collection and updating of democracy data for a broad clientele, along the lines of the economic, demographic, and trade data collected by the World Bank, the United Nations, and the International Monetary Fund.

• Although creating better measures at the sectoral level to track democratic change is a long-term process, there is no need to wait on such measures to determine the impact of USAID’s DG projects. USAID has already compiled an extensive collection of policy-relevant indicators to track specific changes in government institutions or citizen behavior, such as levels of corruption, levels of participation in local and national decision making, quality of elections, the professional level of judges or legislators, or the accountability of the chief executive. Since these are, in fact, the policy-relevant outcomes that are most plausibly affected by DG projects, the committee recommends that measurement of these factors, rather than sectoral-level changes, be used to determine whether the projects are having a significant impact on the various elements that compose democratic governance.

Recommendation 3: Using Case Studies of Democratization and Democracy Assistance

The third charge to the committee was to recommend a plan for comparative historical case studies of DG assistance. A clustered set of case studies, tracing the processes through which advances toward democracy were made from various sets of initial conditions, is an appropriate mode of investigation for these issues.

Such case studies could be particularly valuable in mapping out varied trajectories of political development and identifying the role that democracy assistance could play in such trajectories in relation to various actors and events.

Nonetheless, committee members were unable to agree on a firm recommendation that USAID should invest its own funds in such case studies, since substantial case study research on democratization is already being undertaken by academics and NGOs. To learn more about the role of its DG assistance projects in varied conditions and varied trajectories of democratization, USAID could seek to gain from ongoing academic research. Since much potentially relevant academic research is not written for a policy audience, however, USAID would need to structure its interactions with researchers to ensure that it gains useful and relevant information.

Strategies for Implementation

• If USAID decides to invest in supporting case study research, the committee recommends using a competitive proposal solicitation process to elicit the best designs. USAID should not specify a precise case study design but should instead specify key criteria that proposals must meet: (1) the criteria for choosing cases should be explicit and theoretically driven; (2) the cases should include a variety of initial conditions or contexts in which USAID DG projects operate; (3) the cases should include at least one, if not several, countries in which USAID and other donors have made little or no investment in DG projects; and (4) the cases should include countries with varied outcomes regarding democratic progress or stabilization.

• In addition to case studies, a variety of other research methods, both formal and informal (including debriefings of USAID field officers, statistical analyses of international data, and surveys), can shed light on patterns of democratization as well as on how DG projects actually operate in the field and how they are received. USAID should include these varied sources of information as part of the regular organizational learning activities the committee recommends next.

Recommendation 4: Rebuilding USAID’s Institutional Mechanisms for Learning

Regardless of whether USAID conducts many or few impact evaluations, and whether it contracts for case studies or works with case studies funded by think tanks or other organizations, little of what is learned will effectively guide and improve DG programming without some mechanism within USAID for learning from its own and others’ research on democracy and democratization.

For USAID to benefit from this committee’s proposed pilot study of impact evaluations, it will need to have regular means of disseminating the results of those and other evaluations throughout the agency and of discussing the lessons learned from them. For USAID to benefit from ongoing academic research and the studies of DG assistance being undertaken by think tanks and NGOs, the agency will need to organize regular structured interactions between such researchers and its DG staff.

While it will take some time for USAID to learn from undertaking the pilot impact evaluations, it will gain immediately from augmenting its overall learning activities and increasing opportunities for DG staff to actively engage with current research and studies on democratization. Though some committee members believe that the impact evaluations will be more novel and instructive than most current case study and policy reports on democratization, several committee members wish to emphasize the considerable value to policymakers and DG officers of the many books, articles, and reports that have been prepared in recent years by academics, think tanks, and practitioners. Whatever the methodological flaws of these case studies and process evaluations from a rigorous social science perspective, the committee notes that this expanding literature has provided important lessons and insights for crafting effective DG programs. Thus the committee is unanimous in finding that a renewed emphasis on engaging USAID DG personnel in discussion and analysis of current research on democratization and democracy assistance—including both varied types of evaluations and a broad range of scholarship—would be worthwhile and should begin even before the pilot evaluations have been completed.

Unfortunately, in recent years USAID has substantially reduced its institutional mechanisms for creating, disseminating, and absorbing knowledge. The Center for Development Information and Evaluation (CDIE), which served as the hub of systematic evaluation for USAID projects, has been dissolved. Moreover, USAID’s support of conferences and learning activities for mission directors and DG staff to share experiences and discuss the latest research has declined. And although central collection of evaluations is already a requirement, in practice much useful information, including evaluations and other project documents, survey data and reports, and mission director and DG staff reports, remains dispersed and difficult to access.

Strategies for Implementation

Rebuilding organizational learning capacity within USAID will require a number of steps, some minor and some potentially involving major shifts in organizational procedures.

The committee thus recommends that the steps below be undertaken by a special DG evaluation initiative led by a senior policymaker or official within USAID who has the authority to recommend agency-wide changes, since many of the obstacles to improved learning about DG programs stem from agency-wide procedures and organizational characteristics. While in some ways this initiative will replace the capabilities lost with CDIE, the committee hopes it will also go beyond them.

The committee’s charge is limited to recommendations for improving USAID’s ability to evaluate its DG projects, but the committee notes that there could be advantages to making this an agency-wide initiative. USAID implements social programs in many parts of the agency, so the changes the committee recommends could yield much wider benefits.

A DG Evaluation Initiative

In support of Recommendations 1 and 4, the committee recommends that USAID develop a five-year DG evaluation initiative, led by a senior USAID official and with special funding, for the following:

1. Undertaking Pilot Impact Evaluations

To accelerate the building of a solid core of knowledge regarding project effectiveness, the committee strongly recommends that the DG evaluation initiative immediately develop and undertake a number of well-designed impact evaluations that test the efficacy of key project models or core development hypotheses that guide USAID DG assistance. A portion of these evaluations should use randomized designs, as these are the most accurate and credible means of ascertaining program impact. Because randomized designs have also been the most controversial, especially in the DG area, it would be most valuable for the evaluation initiative to help USAID gain experience with these designs and determine their value for learning the impacts of DG projects.

By key models the committee means programs that (1) are implemented in a similar form across multiple countries and (2) receive substantial funding (e.g., local government support, civil society, judicial training). By core hypotheses the committee means assumptions guiding USAID program design that, whether drawn from experience or from prevailing ideas about how democracy is developed and sustained, have not been tested as empirical propositions.

2. Increasing USAID’s Capabilities in Project Evaluation

Supporting the DG evaluation initiative with special, dedicated resources outside the usual project structure would be another signal of a strong commitment to change.

It is also important that these resources and the accompanying expertise in evaluation design be made available to missions implementing DG programs, so that more rigorous evaluations become an opportunity for missions to gain support rather than an additional unfunded burden. Any changes to the M&E of DG programs will be carried out in the field by over 80 missions and hundreds of implementing partners. Even with the centralization of program and budget decision making undertaken in the foreign assistance reforms of 2006, USAID is a highly decentralized agency, and mission staff have substantial discretion in how they implement and manage their programs. The initiative should thus make its resources and expertise available to mission directors who want its support in conducting impact evaluations or otherwise changing their mix of M&E activities, making the initiative an asset to DG directors in the field rather than an added burden.

3. Providing Technical Expertise

In recent years, as USAID has reduced the number of evaluations it conducts, the agency has also failed to hire experts in the latest evaluation practices to guide and oversee its contracting and research. The committee recommends that, as a key element of the initiative, USAID acquire sufficient internal expertise in this area both to guide the initiative at USAID headquarters and to provide advice and support to field missions.

4. Improving the Ease of Undertaking Impact Evaluations of DG Projects

While many evaluations are currently sought only well after a project has begun, or even only after its completion, impact evaluations generally require before-and-after measures and data from comparison or control groups; these must be designed into the program from its inception and often cannot be obtained at all once a program is well under way. Pressures to get projects under way, as well as many current contracting practices, thus work against implementing and sustaining impact evaluation designs. One task of the DG evaluation initiative should be to address these issues and explore how to ease the task of undertaking impact evaluations within USAID’s contracting and program procedures. The initiative should also examine incentives for both DG officers and project implementers to carry out sound impact evaluations of selected DG projects.

5. Considering a Social Sciences Advisory Group

To assist in the evaluation effort, the committee recommends that the administrator consider establishing a social sciences advisory group for USAID. This group could play a useful role in advising on the design of the DG evaluation initiative, helping to work through issues that arise during implementation, and developing a peer review process for assessing the evaluations undertaken during the initiative.

6. Rebuilding Institutional Learning Capacity

This initiative should be guided by a policy statement outlining the strategic role of developing USAID as a learning organization in the democracy sector. The committee believes that increasing USAID’s capacity to learn what works and what does not should include provisions for regular face-to-face interactions among DG officers, implementers, and outside experts to discuss recent findings, both from the agency’s own evaluations of all kinds and from studies by other donors, think tanks, and academics. Videoconferencing and other advanced technologies can be an important supplement, but personal contact and discussion would be extremely important for sharing experiences of success and failure as the evaluation initiative goes forward. This includes lessons about both the effectiveness of DG projects and the successes and failures encountered in implementing impact evaluations.

Such meetings are especially important for ensuring that the varied insights derived from impact and process evaluations, academic studies, and examinations of democracy assistance undertaken by independent researchers, NGOs, think tanks, and other donors are absorbed, discussed, and drawn into USAID DG planning and implementation. While only USAID has the ability to develop and carry out rigorous evaluations of its projects’ impacts, many organizations are carrying out studies of various aspects of democracy assistance, and USAID’s staff can benefit from the wide range of insights, hypotheses, and “lessons learned” being generated by the broader community involved with democracy promotion.

Results of the Initiative

At the end of this five-year period, USAID would have:

• Practical experience in implementing impact evaluation designs that will indicate where such approaches are feasible, what the major obstacles are to wider implementation, and whether and how these obstacles can be overcome.

• Where the evaluations prove feasible, a solid empirical foundation for assessing the validity of some of the key assumptions that underlie DG projects, and rigorous determinations of the impact of commonly used DG projects in achieving program goals.

• A core of expertise within USAID on the latest evaluation methods and practices.

• Institutionalized learning practices across the organization to keep officials engaged, informed, and up-to-date on the latest findings from within and outside USAID regarding democracy and democracy assistance.

Conclusion

The committee stresses that the goal of USAID should not be merely incremental improvement of its project evaluations, or funding additional case studies, but building the entire capacity of the agency to generate, absorb, and disseminate knowledge regarding democracy assistance and its effects. This will necessarily involve (1) gaining experience with varied impact evaluation designs, including randomized studies, to ascertain how useful they could be for determining the effects of DG projects; (2) focusing on disaggregated, sectoral-level measures to track democratic change; (3) expanding the diversity of case studies used to inform thinking on DG planning; and (4) adopting mechanisms and activities to support the active engagement of DG staff and mission personnel with new research on democratization and DG assistance.

REFERENCES

Finkel, S.E., Pérez-Liñán, A., and Seligson, M.A. 2007. The Effects of U.S. Foreign Assistance on Democracy Building, 1990-2003. World Politics 59(3):404-439.

Finkel, S.E., Pérez-Liñán, A., Seligson, M.A., and Tate, C.N. 2008. Deepening Our Understanding of the Effects of U.S. Foreign Assistance on Democracy Building: Final Report. Available at: http://www.LapopSurveys.org.
