

2 Evaluation in USAID DG Programs: Current Practices and Problems
Pages 43-70



From page 43...
... Which DG projects will work best in a given country under current conditions? Learning how well various projects work in specific conditions requires well-designed impact evaluations that can determine how much specific activities contribute to desired outcomes in those conditions.
From page 44...
... To provide a context, we begin with a brief description of the current state of evaluations of development assistance programs in general. Then existing USAID assessment, monitoring, and evaluation practices for DG programs are described.
From page 45...
... Thus, while some areas of development assistance, such as public health, have a long history of using impact evaluation designs to assess whether policy interventions have their intended impact, social programs are generally much less likely to employ such methods. In 2006 the Center for Global Development (CGD)
From page 46...
... However, such evaluations have a difficult time determining precisely how much any observed changes in key outcomes can be attributed to a foreign assistance project. This is because they often are unable to re-create appropriate baseline data if such data were not gathered before the program started and because they generally do not collect data on appropriate comparison groups, focusing instead on how a given DG project was carried out for its intended participants.
From page 47...
... These require the three parts noted above: collection of baseline data; collection of appropriate outcome data; and collection of the same data for comparable individuals, groups, or communities that, whether by assignment or for other reasons, did and did not receive the intervention. The most credible and accurate form of impact evaluation uses randomized assignments to create a comparison group; where feasible this is the best procedure to gain knowledge regarding the effects of assistance projects.
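The three-part design described above (baseline data, outcome data, and a comparison group created by randomized assignment) can be illustrated with a small simulation. This is a hypothetical sketch, not any agency's actual procedure: the community count, effect size, and noise levels are all invented for illustration, and the difference-in-differences estimator is one common way to use such data.

```python
import random
import statistics

random.seed(42)

# Hypothetical sketch: simulate 200 communities, randomly assign half to a
# project whose true effect we set to +5 points, and recover that effect
# with a difference-in-differences estimate.
N = 200
TRUE_EFFECT = 5.0                                   # assumed, for simulation only
treated = set(random.sample(range(N), N // 2))      # randomized assignment

records = []                                        # (is_treated, baseline, followup)
for i in range(N):
    baseline = random.gauss(50, 10)                 # pre-project outcome score
    trend = random.gauss(2, 3)                      # secular change affecting everyone
    followup = baseline + trend + (TRUE_EFFECT if i in treated else 0.0)
    records.append((i in treated, baseline, followup))

def mean_change(is_treated):
    """Average follow-up-minus-baseline change within one group."""
    return statistics.mean(f - b for t, b, f in records if t == is_treated)

# Subtracting the control group's change removes the shared trend, isolating
# the project's contribution.
did = mean_change(True) - mean_change(False)
print(f"difference-in-differences estimate: {did:.2f}")
```

Because both groups experience the same background trend, subtracting the control group's change strips it out; without the comparison group, the trend would be mistaken for project impact.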
From page 48...
... For these reasons, among others, impact evaluations of DG programs are at present the least common of the various kinds of evaluations described here. Indeed, many individuals throughout the community of democracy assistance donors and scholars have doubts about the feasibility and utility of conducting rigorous impact evaluations of DG projects.
From page 49...
... project are also motivating other international assistance agencies and organizations. The desire to understand "what works and what doesn't and why" in an effort to make more effective policy decisions and to be more accountable to taxpayers and stakeholders has led a host of agencies to consider new ways to determine the effects of foreign assistance projects.
From page 50...
... Perhaps the most visible leader in efforts to increase the use of impact evaluations is MCC, which has set a high standard for the integration of impact evaluation principles into the design of programs at the earliest stages and for the effective use of baseline data and control groups: There are several methods for conducting impact evaluations, with the use of random assignment to create treatment and control groups producing the most rigorous results. Using random assignment, the control group will have -- on average -- the same characteristics as the treatment group.
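MCC's claim that randomization equalizes group characteristics "on average" can be checked with a short simulation. This is a hypothetical sketch: the covariate (a simulated prior "civic participation" score) and sample size are invented for illustration.

```python
import random
import statistics

random.seed(7)

# Hypothetical sketch: under random assignment, a pre-existing characteristic
# ends up with nearly the same average in the treatment and control groups,
# even though no one matched the groups on it deliberately.
N = 1000
scores = [random.gauss(100.0, 15.0) for _ in range(N)]   # baseline covariate
treatment = set(random.sample(range(N), N // 2))          # randomized assignment

treat_mean = statistics.mean(scores[i] for i in treatment)
control_mean = statistics.mean(scores[i] for i in range(N) if i not in treatment)
print(f"treatment mean: {treat_mean:.1f}  control mean: {control_mean:.1f}")
```

The same balancing holds for characteristics nobody measured, which is why randomized comparison groups support stronger causal claims than after-the-fact matching.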
From page 51...
... . The Swedish International Development Cooperation Agency (Sida)
From page 52...
... Additionally, the evaluation work has a control function to assess the quality of the development cooperation and determine whether resources applied are commensurate with results achieved." Additional attention is being paid to communicating the results of such evaluations with other agencies and stakeholders; this emphasis on communicating results is widely shared in the donor community. The Danish Ministry of Foreign Affairs has embarked on an extensive study of both its own and multilateral agencies' evaluations of development and democracy assistance (Danish Ministry of Foreign Affairs 2005)
From page 53...
... aimed at achieving development results." In short, there is growing agreement -- across think tanks, blue-ribbon panels, donor agencies, and foreign ministries -- that current evaluation practices in the area of foreign assistance in general, and of democracy assistance in particular, are inadequate to guide policy and that substantial efforts are needed to improve the knowledge base for policy planning. Thus, USAID is not alone in struggling with these issues.
From page 54...
... . These evaluations generally remain the traditional process evaluations using teams of outside experts undertaken while a project is under way or after it has been completed. The second significant policy shaping USAID evaluation practices is the Government Performance and Results Act (GPRA)
From page 55...
... Since the formal assessment tool was adopted in late 2000, more than 70 assessments have been conducted in 59 countries. To achieve their strategic objectives, all USAID operating units develop a Results Framework and a Program Monitoring Plan that include subobjectives that tie more closely to specific projects (see Figure 2-1 for an illustrative results framework)
From page 56...
... Figure 2-1 Illustrative Results Framework.
From page 57...
... In January 2006, Secretary of State Condoleezza Rice initiated a series of reforms, centered on the budget and program planning process, intended to bring greater coherence to U.S. foreign assistance programs (USAID 2006)
From page 58...
... Three Key Problems with Current USAID Monitoring and Evaluation Practices Focusing on Appropriate Measures Regarding DG Activities As noted above, USAID has developed many good indicators to track the results of its DG projects. USAID is clearly aware of the important differences between various levels of indicators -- those dealing with attaining targeted outputs, those dealing with the institutional or behavioral changes sought by the program, those dealing with broad sectoral changes at the country level, and those dealing with national levels of democracy.
From page 59...
... or ombudsman's office; • Proportion of citizens who positively evaluate government responsiveness to their demands; • Existence of competitive local elections; • Percentage of total subnational budget under the control of participatory bodies. USAID has also funded various agencies to collect valuable data on outcome indicators.
From page 60...
... As mentioned above, the Foreign Assistance Performance Indicators are intended to measure "both what is being accomplished with U.S. foreign assistance funds and the collective impact of foreign and host-government efforts to advance country development" (U.S.
From page 61...
... Output measures track the specific activities of a project, such as the number of individuals trained or the organizations receiving assistance. Outcome measures track policy-relevant factors that are expected to flow from a particular project (e.g., a reduction in corruption in a specific agency, an increase in the autonomy and effectiveness of specific courts, an improvement in the fairness and accuracy of election vote counts)
From page 62...
... But if the national leaders have already excluded viable opposition candidates from running, or deprived them of media access, the resulting flawed elections should not mean that USAID's specific election project was not effective. As a senior USAID official with extensive experience in many areas of foreign assistance has written regarding this problem: To what degree should a specific democracy project, or even an entire USAID democracy and governance programme, be expected to have an
From page 63...
... But these broader processes are subject to so many varied forces -- from strategic interventions to ongoing conflicts to other donors' actions and the efforts of various groups in the country to obtain or hold on to power -- that macro-level indicators are a misleading guide to whether or not USAID projects are in fact having an impact. USAID efforts in such areas as strengthening election commissions, building independent media, or supporting opposition political parties may be successful at the project level but only become of vital importance to changing overall levels of democracy much later, when other factors internal to the country's political processes open opportunities for political change (McFaul 2006)
From page 64...
... If one concern regarding USAID's evaluation processes is that they may rely too much on meso- and macro-measures to judge program success, the committee also found a related concern regarding USAID's data collection for M&Es: USAID spends by far the bulk of its M&E efforts on data at the "output" level, the first category in Table 2-1. Current M&E Practices and the Balance Among Types of Evaluations In the current guidelines for USAID's M&E activities given earlier, only monitoring is presented as "an ongoing, routine effort requiring data gathering, analysis, and reporting on results at periodic intervals." Evaluation, by contrast, is presented as an "occasional" activity to be undertaken "only when needed." The study undertaken for SORA by Bollen et al. (2005)
From page 65...
... But from the committee's conversations, the primary "lessons" taken away by some personnel at USAID were that such rigorous impact evaluations did not work, or were not worth the time, effort, and money given what they expected to get from them. While certainly only a limited number of projects should be subject to full evaluations, proper impact evaluations cannot be carried out unless "ongoing and routine efforts" to gather appropriate data on policy-relevant outcomes before, during, and after the project are designed into an M&E plan from the inception of the project.
From page 66...
... The committee discusses the means to help USAID become a more effective learning organization in Chapters 8 and 9. Conclusions This review of current evaluation practices regarding development assistance in general and USAID's DG programs in particular leads the committee to a number of findings: • The use of impact evaluations to determine the effects of many parts of foreign assistance, including DG, has been historically weak across the development community.
From page 67...
... • Evaluation is a complex process, so that improving the mix of evaluations and their use, and in particular increasing the role of impact evaluations in that mix, will require a combination of changes in USAID practices. Gaining new knowledge from impact evaluations will depend on developing good evaluation designs (a task that requires special skills and expertise)
From page 68...
... 2007. Challenges of Evaluating Democracy Assistance: Perspectives from the Donor Side.
From page 69...
... U.S. Foreign Assistance Performance Indicators for Use in Developing FY2007 Operational Plans, Annex 3: Governing Justly and Democratically: Indicators and Definitions.

