
Improving Democracy Assistance: Building Knowledge Through Evaluations and Research (2008)

Chapter 8: Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning

Suggested Citation: "8 Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning." National Research Council. 2008. Improving Democracy Assistance: Building Knowledge Through Evaluations and Research. Washington, DC: The National Academies Press. doi: 10.17226/12164.

8 Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning

Introduction

Chapter 6 addressed some of the real and perceived obstacles to carrying out impact evaluations of democracy and governance (DG) projects and discussed ways that they could, in principle, be overcome. But a much more general problem exists: organizational conditions that discourage staff from the U.S. Agency for International Development (USAID) and implementers from undertaking high-quality impact evaluations. Reviewing agency policies and practices with the goal of reducing barriers to and strengthening incentives for conducting sound impact evaluations is essential. Just as important, USAID must create and nurture the capacity to learn what works and what does not by sharing information and experiences widely and openly. This chapter first addresses the specific issues of improving organizational capacity for impact evaluations and then turns to the more general problem of creating the conditions for organizational learning.

Issues in Obtaining High-Quality Impact Evaluations

Any changes made to the general guidance for monitoring and evaluation (M&E) of DG projects will be carried out in the field in over 80 country missions by hundreds of implementing partners. Even with the centralization of program and budget decision making undertaken in the Foreign Assistance Reforms of 2006 (USAID 2006), USAID remains a
highly decentralized agency, and country missions have substantial discretion in how they implement and manage their programs.

The committee also recognizes that the USAID contracting process is already dauntingly complex and time-consuming, demanding much of the time that DG officers spend to develop and manage their projects. The committee thus is cautious about recommending specific solutions for the contracting of evaluations, especially as contract and procurement processes are not an area in which the committee has any special expertise. What follows is instead intended as a set of principles, drawn from research and field studies, that the committee believes will help USAID in obtaining sound impact evaluations of DG projects. Examples are offered of possible approaches to the problem, but the actual design and implementation of any changes would rest with USAID. Given how difficult it is to change contract management practices within the current realities of USAID programming, the DG evaluation initiative recommended in the next chapter could be an opportunity to try out different approaches.

Incentive Issues

A key problem, not unique to DG or USAID, is the question of providing incentives to DG staff and implementers to undertake and complete sound and credible impact evaluations. The DG officers and implementers the committee and its field teams met shared a strong desire to be successful in promoting democracy. They are drawn to their work because they believe that democracy is a better form of government and that foreign assistance can help bring about democratic development. The problem, however, is how to promote democracy. From the outset, DG officers and implementers alike recognized that "doing democracy" was going to be much more difficult than other areas in development, such as health and agriculture, where causal relationships are better understood and impacts easier to measure. There may be formidable barriers to good policy and implementation in these other areas, but at least there is greater consensus about the basic questions of theory and measurement.

The uncertainty about fundamental aspects of DG reinforces the normal human and bureaucratic incentives to avoid documented failure, a problem that has been cited as affecting evaluations across USAID and not simply DG (Clapp-Wincek and Blue 2001, Savedoff et al 2006). In the absence of a strong learning culture that encourages open reflection and recognizes the uncertainties surrounding DG programming, carrying out projects that produce no effect (or a negative effect) could understandably be considered a threat to a USAID officer's career. Similarly, program implementers worry about their organizations' futures and the results of
being associated with a documented failure, knowing that it is generally not the way to win future contracts or grants. In the democracy promotion area, where there is little hard evidence about what works and why and where many crucial factors that might make for success or failure are beyond the control of DG officers and their implementers, there is a natural tendency to confine measurements of success to those things over which one has some hope of control, such as project outputs and very proximate outcomes.

In addition, a host of time and resource pressures generally leads implementers to skip gathering extensive baseline data before program rollout and to conserve precious resources for actual DG program support by keeping evaluation costs to a minimum (or, as the committee discovered, sometimes using funds from the M&E budget to support programming in the later stages of a project when resources grew tight). The clear priority for getting programs started as quickly as possible, and doing as much as possible with limited budgets, necessarily leads to a far lower priority for impact evaluation procedures, as these generally require some time and effort spent on collecting baseline data and data from comparison or control groups. Without strong incentives to complete sound impact evaluations on at least some DG programs and some rewards for doing so, these pressures make it highly unlikely that such evaluations will be designed into DG programs. One task of the DG evaluation initiative recommended in the next chapter should be to address these issues and explore how to ease the task of undertaking impact evaluations within USAID's contracting and program procedures. The initiative should also examine incentives for both DG officers and DG project implementers to carry out sound impact evaluations of selected DG projects.

Coordination Issues Regarding Strategic Assessments

USAID already undertakes a fairly time-consuming process of baseline assessment as part of its development of strategic objectives (see Chapter 2). At present, however, the strategic assessments guide policy planning (including the choice of DG projects), which then results in calls for proposals. Evaluations enter later, if at all, in a way quite separate from the initial assessment process.

It would be far more productive for good impact evaluation if the strategic assessments also sought to identify which projects (if any) should be targeted for impact evaluations to determine their effects. Then any baseline information collected as part of the assessments could be designed, and made available, to support the desired impact evaluation. For example, any national or regional surveys, or interviews with possible
or intended participants, could be usefully incorporated into subsequent evaluations. Perhaps even more important, the strategic assessment process must identify critical hypotheses guiding the planned democracy assistance program (e.g., that increasing local mobilization or nongovernmental organizations (NGOs) will reduce corruption), so that they can be clearly specified and designated for impact evaluations in the calls for proposals, if such evaluations are desired.

Contracting Issues

The committee's research and field visits also found that the current process of awarding contracts and grants actually works against conducting impact evaluations in a number of specific ways:

• DG officers are chosen for expertise in democracy assistance and aid delivery, not for expertise in evaluation designs. Thus DG officers often felt they lacked expertise among their mission staff to prescribe or judge what would be an effective, high-quality impact evaluation design.

• Implementers, who often believed they had the expertise to undertake a richer variety of M&E activities, including impact evaluations, thought that USAID gave priority to doing the proposed work rather than M&E and that, especially if budgets were tight, ambitious M&E plans would work against them in bidding for projects.

• Systematic communication among DG officers and between DG officers and implementers is limited, so there is little opportunity to share experiences and compare, and perhaps correct, perceptions of each other's expectations.

• Given the multiple steps in the contracting/grant-making process, there are many points at which decisions can be made that restrict or eliminate the opportunity to design impact evaluations into projects from the outset or that prevent them from being carried out fully once a project has begun.

• On the positive side, the basic system for program monitoring and use of indicators in place through the Automated Directives System is a good foundation, even if current practice could be improved (USAID ADS 2007). Thus the data collection required for impact evaluations seems practical if the incentives and contract procedures motivate implementers to schedule baseline, outcome, and comparison group measurements as part of the contracted DG activity.

Changes to the Contracting Process to Provide for Impact Evaluations

As already discussed, perhaps the key difference between the current approach of commissioning process evaluations when a mission sees
the need, as a separate contract issued after a project has begun or been completed or when a shift in strategy is contemplated, and commissioning an impact evaluation is that an impact evaluation needs to be treated as an integral part of a project's implementation design. Unless baseline measurements are part of the contract schedule and data collection on an appropriate comparison or control group is provided for at project inception, it is difficult—often impossible—to go back and obtain such information once a project has begun or been completed. This means that if a mission wants to obtain sound evidence of the impact of a particular project, staff will need to think about planning an impact evaluation before they have even drawn up the call for proposals for that project and make a suitable design for impact evaluation part of the original action and budget plan for that project.

Call for Proposals

When a USAID mission undertakes a new project or the next phase of a continuing one, in most cases there is a formal request for bids, called a Request for Proposals (RFP) for a contract and a Request for Applications (RFA) for a grant or cooperative agreement. [Footnote: A key distinction among the types of agreements is the amount of control that USAID has over how the award is implemented. USAID has the most control over contracts, less with cooperative agreements, and the least with grants, which give implementers wide discretion over how to carry out projects, including M&E.] One required component for those responding to an RFP or RFA is a description of how the project would be monitored and evaluated. Given the strict federal rules governing competitive procurement policies, the RFP/RFA is the primary source of information available to a would-be implementer about the mission's goals for the project and requirements for a successful bidder, including M&E. In current practice there is seldom any indication that an evaluation process is expected beyond the required Performance Monitoring Plans, which generally focus on tracking the project's activities and immediate outputs. In addition, as the committee learned, DG officers differ in how much detailed guidance they want to provide in an RFP or RFA, sometimes preferring to give the implementers, who have substantive expertise and experience, flexibility to provide most of the details of how they think the project and M&E should be carried out.

To undertake impact evaluations, RFPs and RFAs would need to contain explicit language indicating that on this occasion such an evaluation is expected. The solicitation would not need to specify the evaluation design in detail; the committee and the field teams were told that implementers would readily understand the implications of language that
called for sound impact evaluation as requiring the collection of baseline data, treatment and control groups where possible, and alternatives when the project involved an "N = 1" intervention. But the process would need to begin at this stage.

If a more detailed statement is considered preferable, a recent RFP in one of the missions that the field teams visited provides an example. As part of the performance monitoring plan called for in the RFP for the Democratic Linkages project in Uganda, bidders were told they should have "a clearly developed strategy for assessing the impact of the program at all three levels [national, district, and subcounty] by evaluating outcomes over time (comparing pre-intervention and post-intervention values on impact variables) or by comparing outcomes in districts selected to receive the program and those that do not (matched to ensure their comparability)" (USAID/Uganda 2007:27). [Footnote: Again, as far as the committee was able to determine, these requirements were exceptions to standard practice.]

Points for Impact Evaluations

Once USAID receives proposals, the bids must be evaluated. Another part of the competitive process is awarding points, which are specified in the RFP or RFA, to various parts of a proposal. One of the impediments to encouraging investment in evaluations is that relatively few points are assigned to the M&E plan, and often the M&E plan is included as a subset of some other category rather than being graded on its own. The committee did not undertake an extensive examination of this issue, but meetings with DG officers and implementers and the field visits suggest that it would be rare for an M&E plan to count for much more than 10 out of 100 possible points for the overall proposal. By contrast, the experience and quality of the implementer's chief of party might earn 30 to 40 points because management ability is considered so critical to project success. The committee is not recommending a specific number of points for evaluation, but it does seem likely that some change would be needed to give a more rigorous evaluation plan a competitive advantage. Instead of changing the number of points, another approach would be to treat the M&E plan as a separate category, so that a high score might be a tipping point or a genuine competitive advantage. The DG office could consult with other areas in USAID, such as health or agriculture, where impact evaluations may be more common practice, for guidance on how to structure the points or process used in evaluating proposals.
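
To make concrete what RFP language like the Uganda example quoted above implies for data collection, the short sketch below works through one simple way of combining the two comparisons such language describes: change over time in assisted districts versus change over time in matched comparison districts (a difference-in-differences estimate). It is purely illustrative; the impact variable, the district groupings, and all numbers are hypothetical and do not come from any USAID dataset or prescribed evaluation method.

    # Illustrative only: a minimal difference-in-differences calculation of the
    # kind implied by RFP language calling for pre/post measurements in program
    # and matched comparison districts. All values below are hypothetical.

    # Mean value of an impact variable (e.g., an index of citizen contact with
    # local councils) measured at baseline and endline, by group.
    program_districts = {"baseline": 0.42, "endline": 0.58}      # received assistance
    comparison_districts = {"baseline": 0.40, "endline": 0.45}   # matched, no assistance

    def difference_in_differences(treated: dict, comparison: dict) -> float:
        """Change in the treated group minus change in the comparison group."""
        treated_change = treated["endline"] - treated["baseline"]
        comparison_change = comparison["endline"] - comparison["baseline"]
        return treated_change - comparison_change

    estimated_impact = difference_in_differences(program_districts, comparison_districts)
    print(f"Estimated program impact: {estimated_impact:+.2f}")
    # Prints: Estimated program impact: +0.11
    # Without the baseline and comparison-group measurements, only the raw
    # endline difference (0.58 vs. 0.45) would be available, which conflates
    # program effects with preexisting differences and countrywide trends.

The point of the sketch is simply that neither comparison can be made unless the baseline and comparison-group data exist, which is why the scheduling and contracting issues discussed next matter so much for impact evaluation.
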

Time Pressures

One of the most precious commodities once an award is made is time. As noted above, there is often great pressure to "move the money" as soon as it becomes available, to "hit the ground running," and to "show early success." In principle, implementers generally have 30 to 60 days after an award is signed to develop an M&E plan for approval by the mission, which usually includes collection of some kind of baseline information or data prior to, or very soon after, the start of the project (assistance) activities. Yet in practice, two things often happen: (1) time pressures mean that project activities actually begin before all the work to set up and implement the monitoring plan and baseline measurements can be accomplished, or (2) the process of approving the monitoring plan can drag on, sometimes for months, so that projects fall behind schedule and plans to collect baseline measures are delayed or dropped. The effect is the same in both cases: Crucial baseline data are not collected and may be impossible to reconstruct later in the project. The opportunity for a rigorous assessment of project impact may be effectively lost.

For those select projects for which DG officers want sound impact evaluations, contracting schedules for implementers need to allow for the implementation of an appropriate evaluation design, including establishing an appropriate control or comparison group and setting up and completing baseline measurements on both the assistance and the control groups. Policymakers may need to be reminded that rushing to roll out projects without allowing for careful examination of initial conditions and creation of comparison groups undermines the only way to accumulate knowledge on whether those DG projects are working as intended and those expenditures are worthwhile. [Footnote: Where the comparison group is part of a population already being surveyed and the baseline data can be obtained from the survey, the need to establish relationships with the comparison group is obviated. But for activities involving smaller and identifiable control groups—such as sets of legislators or judges or NGOs or specific villages that will not receive assistance in the initial phase of the program—time to establish such relationships to allow proper data collection is essential to any sound impact evaluation.]

Keeping Project Evaluation Independent

Ideally, the individuals or contractors who implement a project should not be the only ones involved in evaluating its outcomes. After all, they have every incentive to show success. Independent evaluations by a separate contractor that show project success are therefore much more convincing.

USAID has already recognized this principle in its practices for process evaluations by requiring that they be carried out by agents other than the program implementers. Yet this is easier for process evaluations, which can be undertaken after a project has begun or been completed, than for impact evaluations, which generally require that plans for data gathering and analysis be "built in" to the project in the design stage.

Once an award is given, USAID could then give separate contracts, or independent tasks within the same contract, to implementers A and B, the former to carry out the program and the latter to carry out the evaluation portion. This would leave the evaluation partner, who is receiving separate payment and rating from USAID on the quality of its evaluation, with incentives to provide the highest-quality evaluations for USAID. To minimize the risk of collusion, USAID may have to require contractors who implement a large number of projects for USAID DG offices to work with several different evaluation partners; similarly, evaluation contractors should be required to partner with several different implementers over time in order to ensure continued independence of project and evaluation agents.

Resource Issues

One of the major objections to impact evaluations that the committee and its field teams encountered is that they "cost too much." The collection of high-quality baseline data and indicators, especially since it must be done for both those who receive the DG support and a control group that does not, can be costly, although Chapters 6 and 7 discuss ways in which at least some of those costs could be reduced. But unfortunately there is no way to analyze that objection relative to current M&E spending because USAID is not able to provide reliable estimates of those costs. This is true both for USAID Washington and for the three missions visited by the committee's field teams.

There are several reasons that USAID cannot provide an estimate of its M&E expenditures. One reason is that there is no consistent methodology for budgeting project evaluations, so that both missions and implementers may count the same things in different ways. Perhaps more important, as already discussed, there are many kinds of M&E, and the costs of some are much easier to estimate than others. The list below was developed with the assistance of USAID/Washington staff and the work of the three field teams.

• M&E plans for each grant/contract. As discussed above, these are required of USAID grantees and contractors and approved by USAID. Proposals/applications will typically include an illustrative M&E plan,
but these differ in the level of detail, and the cost of preparing them would be difficult to measure. Sometimes a proposal includes an estimate of costs directly related to M&E (e.g., if the implementer anticipates doing an opinion survey), but this does not always happen and is not a requirement. It is uncertain whether a project's M&E budget would include the time that staff members spend collecting data on indicators and preparing required reports. In some cases, local staff will collect the information, which is then sent to the implementer's headquarters for analysis and preparation of the required reports for USAID. In this case the costs would more likely be considered part of the project's overhead than part of the M&E costs. So project budgets might show a zero (even with a good M&E plan) or might show tens of thousands of dollars if, for example, annual opinion surveys are planned.

• Mission Performance Management Plan (PMP). Required of each mission as part of meeting Government Performance and Results Act (GPRA) requirements, these set out "strategic objectives" and "intermediate results" with corresponding results indicators. Many missions will spend money to have consultants train mission staff in developing PMPs and/or help develop them. Missions might also spend money to collect some data for them. But in many cases they rely on data collected by partners or from third-party sources (e.g., the host government, local NGOs) and rely on mission staff to develop the plans and compile data and thus would not have a budget line item dedicated to PMPs.

• USAID annual report and common indicators. Missions were required to answer certain common questions each year for the annual report (in addition to the PMPs). Starting in FY2007, this was replaced by the common indicators for USAID and the State Department developed as part of the foreign assistance "F Process" reforms. These costs are unlikely to be included in mission budgets.

• Self-evaluations by implementers. Some grants and contracts include plans for the implementer to conduct its own evaluation, at the midway point and/or the end of the project. Typically these will include budgets for $10,000 to $20,000 to bring in people (e.g., from the home office) to do the evaluation. These may or may not include a budget to collect baseline and subsequent data.

• Outside evaluation of grants/contracts. These are typically requested and paid for by a mission, often when it is thought a project is not performing well or a major project is close to completion and an evaluation is part of planning a follow-on project. Again, this type of evaluation almost always consists of a team of two to four consultants who spend two to three weeks in-country and base their findings largely on interviews with a range of people (mission staff, partner staff, direct and indirect beneficiaries, local experts, and so forth). This type of evaluation costs between
$40,000 and $100,000, depending on the number of consultants and the amount of time spent in the country. A mission might undertake zero to three evaluations of DG projects per year, depending on a number of factors (e.g., the number of activities in the DG portfolio, whether a new strategy is due, whether a major event occurs in the country, or whether new mission staff arrive).

• Strategic objectives final evaluations. Missions are required to conduct a final evaluation whenever they close out activities in one of their strategic objectives. These are conducted in much the same way that outside evaluations of grants/contracts are conducted, but with more emphasis on overall impact on a sector rather than exclusively focusing on the performance of the implementers. The cost would be about the same as the outside evaluations and would depend on similar factors.

With 100 overseas missions, each with dozens of projects under way at any given time, it seems reasonable to conclude that millions of dollars are spent each year on M&E, broadly defined. As discussed, impact evaluations of project effects are one component of the broader M&E task, and it would not be simply a matter of transferring funds spent on one part of the M&E function to a different task. But if some of the current approaches to assessing project impact do not, in fact, provide genuine evidence of success or failure, it would seem that there are resources that could be more productively applied, even if no firm dollar amount can be provided for them. More generally, a serious examination of the balance of effort and resources among various types of evaluation, in particular that devoted to monitoring (outcome evaluation) relative to other forms that can inform strategic decisions and assessments of program impact, could be another part of the evaluation initiative recommended in the next chapter.

Improving Organizational Learning

The results of sound impact evaluations have value for USAID only when they become readily accessible knowledge for USAID officers and when that knowledge feeds into learning processes that inform policy and planning. This section looks at what happens to the results of evaluations and other data after they are obtained.

Archiving Survey Data to Build "Collective Memory"

As discussed earlier in this report, USAID makes significant use of surveys in its DG programming. The committee believes that more could be done to fully exploit the utility of surveys in the measurement of DG program impact and to support greater learning across the organization.

One finding from interviews in Washington and the field is that, more often than not, raw survey data, the basis on which key comparisons within and across countries could be made, are lost. USAID currently has no central repository for the survey data its implementers collect. Given that, with only the rarest of exceptions, survey data are by definition computerized and almost always stored in common formats (typically SPSS, Excel, Stata, or SAS) for which conversion programs (e.g., StatTransfer) are readily available, the labor costs and storage space requirements would be trivial. The committee recommends, as an initial step, that the DG office develop a simple system to establish and maintain such an archive. To emphasize how basic the tasks are, the design could be created by a library and information sciences graduate student working as an intern and then maintained by a junior administrative staff person.

Archiving the data, however, is far less of a problem than being sure that all of the data end up in Washington. Other studies of general USAID evaluation practices (Clapp-Wincek and Blue 2001) and the committee's own DG-focused research found that, despite requirements to do so, reports written by consultants and research organizations are not routinely sent to USAID Washington. For many years the Center for Development Information and Evaluation (CDIE) played the role of archivist for USAID. But even when CDIE was functioning, reporting was not systematic. Now that CDIE has been absorbed into the office of the new director of foreign assistance in the State Department, it is not clear how well the "collective memory" of USAID will continue to grow.

Ensuring that survey data are retained would probably require an executive decision at the bureau level or higher to impose an absolute contractual requirement that the data generated would be deposited with USAID Washington. The committee recognizes that the barriers to doing so are real, as many of USAID's DG programs are carried out by consulting firms whose contractual clauses broadly prohibit the use of their data beyond the confines of the company. Finding ways to address these proprietary issues will be essential to supporting the learning culture this committee believes USAID needs to acquire.

Using Surveys More Systematically to Build a Global Knowledge Base

To develop comparable data that can be regularly updated across the range of countries in which USAID operates, more attention needs to be paid to the systematic use of its survey data. The committee notes at the outset that the field of scientific survey research has been undergoing incremental refinement since its first use in the 1940s. Genuinely representative samples can be designed and survey data obtained at relatively modest cost, and questionnaires can be crafted that provide reliable and
valid measurement of citizens' attitudes and behaviors. In practice, most USAID missions commission surveys in an ad hoc fashion that, coupled with the lack of agency-wide coordination of survey research methodology, data collection, and data analysis, means that USAID is not taking full advantage of the opportunity to develop comparability across surveys taken in many parts of the world.

As discussed in Chapter 7, surveys can be used in one form of impact evaluation design when randomization is not possible. Surveys also provide a powerful tool to test democratization hypotheses. Does corruption erode support for democracy? Do certain ethnic groups express more intolerance than others, participate less in civil society, or participate more in protest demonstrations? These are all important questions that can be asked of the Democracy Barometers surveys, and the answers can help target and adjust DG projects.

Surveys can be used to track project success over time. To refer again to civil society participation, if USAID establishes as a project goal increased participation in a given region or among females, then repeated surveys over time can help determine the extent to which those efforts have been successful. Comparisons within a country provide important information about project impact. But to obtain data that would allow for a more general comparative assessment of democratic values and practices, surveys from multiple countries are needed. USAID needs this comparative information to determine how advanced or hindered democratic behaviors and practices are in any given country. For example, if it finds that corruption victimization affects 10 percent of the adult population in a given country in a single year, it needs to place these data alongside survey data obtained for other countries in order to determine if the 10 percent level is high, medium, or low.

As already mentioned, consortia of researchers around the world have been developing regional surveys of democratic values and behaviors. The earliest systematic surveys of entire regions emerged in Europe with the development of the Eurobarometer and, since 2001, the European Social Survey, which now covers 25 nations in the broadened European community. Other regions of the world also are covered by such surveys, including Eastern Europe, now included in the Eurobarometer; the New Europe Democracies Barometer, which covers much of the former Soviet Union and is currently based at the University of Aberdeen; the Asian Barometer, currently based at the National Taiwan University; and, most recently, the Arab Barometer, currently based at Princeton University and the University of Michigan. [Footnote: Recent studies by several of these democracy barometers can be found in the July 2007 and January 2008 issues of the Journal of Democracy.]
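
As a hypothetical illustration of the kind of cross-national benchmarking described above, the sketch below places a single country's corruption-victimization rate alongside rates from other surveyed countries to judge whether it is relatively high, medium, or low. The country labels and rates are invented for illustration and do not come from any actual barometer survey.

    # Hypothetical illustration of cross-national benchmarking: is a 10 percent
    # corruption-victimization rate high, medium, or low? The rates below are
    # invented and stand in for harmonized results from regional barometer surveys.

    regional_rates = {
        "Country A": 0.04, "Country B": 0.07, "Country C": 0.09,
        "Country D": 0.13, "Country E": 0.18, "Country F": 0.26,
    }

    def percentile_rank(value: float, reference: dict) -> float:
        """Share of reference countries with a rate at or below the given value."""
        below_or_equal = sum(1 for rate in reference.values() if rate <= value)
        return below_or_equal / len(reference)

    country_rate = 0.10  # country of interest: 10 percent victimized in the past year
    rank = percentile_rank(country_rate, regional_rates)
    print(f"A rate of {country_rate:.0%} is at or above the rates in {rank:.0%} "
          f"of the surveyed countries.")
    # Prints: A rate of 10% is at or above the rates in 50% of the surveyed countries.

Such a comparison is only meaningful, of course, if a common core of items is asked across countries and regions, which is why the coordination discussed below matters.
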

To the committee's knowledge, USAID has invested in two regional surveys: (1) the AfroBarometer, organized by Michigan State University and the Institute for Democracy in South Africa; and (2) the Americas Barometer, organized by the Latin American Public Opinion Project of Vanderbilt University and its partner universities and think tanks in Latin America, led by the University of Costa Rica.

The committee believes that greater international coordination among existing surveys should be sought and supported. At present, even among the regional barometer surveys that USAID is partially funding, there is no central coordination across these two regions. Moreover, there are many countries in Africa in which the AfroBarometer does not operate, even though USAID does work there. At this time there is no assurance that the same core items will be asked in each region and country within Africa, nor is there any reason to believe that identical questions will be asked across regions. The committee recommends that USAID facilitate this sort of coordination among those regional surveys it is currently funding and also explore how it might promote such coordination with the Asian and Arab barometers. For example, a small conference could be held in Washington for the senior directors of these regional barometers to see if such coordination would be possible from administrative and financial points of view. It is obvious that within a region or country many items need to be unique to tap into the particularities of that region or country's structure. Yet there is almost certainly a common core of items that could be asked that would work universally or nearly so.

Increasing Active Learning

In addition to acquiring and storing information to shed light on DG program outcomes, another essential part of the committee's recommendations is for USAID to increase its activities for actively sharing and discussing that information. The internal and external USAID Web sites and those of individual missions provide substantial amounts of information about DG projects and often furnish links to evaluations and efforts to derive "lessons learned." Unfortunately, as with survey data, although all evaluations are supposed to be provided to the Development Experience Clearinghouse (DEC) and available on the Web, in practice a substantial fraction never make it out of implementer or mission files. [Footnote: The DEC Web site is http://dec.usaid.gov/ (accessed on August 4, 2007). An assessment of how many evaluations reach the DEC is available in Clapp-Wincek and Blue (2001).] In the absence of resources to pursue compliance with the requirement—and perhaps enforce some sanction for failure—the competing pressures of other tasks will mean that reporting remains a low priority. The committee believes
that the results of the evaluations undertaken during the evaluation initiative recommended in the next chapter would have to be much more readily available to have the desired effect on future USAID programming. The committee thus recommends that transmitting reports to DEC should be an important part of each project under the proposed evaluation initiative. More generally, as part of the initiative the resources of DEC should be augmented to help ensure that all project evaluation reports reach DEC so that they can be openly available.

The Internet offers remarkable access and opportunities, but to learn from experience, DG officers and implementers also need opportunities to meet and discuss their experiences on a regular basis. Starting in the mid-1990s, when a reorganization moved technical specialists from the regional bureaus to new "centers," including a democracy center, annual meetings of DG officers from around the world were held with implementers in the form of "partners conferences," which provided such opportunities. The meetings frequently included outside experts to supplement and support the learning process. CDIE also organized a series of programs that provided opportunities for USAID officers back in the United States on leave to be exposed to the latest evaluations emerging from the center. Topics generally reflected the annual USAID evaluation agenda.

A number of factors, including tight budgets for operating expenses and criticism of "extraneous" travel, have curtailed these events, and a significant opportunity is being lost. The committee believes that increasing USAID's capacity to learn what works and what does not should include provisions for regular face-to-face interactions among DG officers, implementers, and outside experts to discuss recent findings, both from the agency's own evaluations of all kinds and from studies by other donors, think tanks, and academics. Videoconferencing and other advanced technologies can be an important supplement, but personal contact and discussion would be extremely important for sharing experiences of success and failure as the evaluation initiative goes forward. This includes lessons about the effectiveness of DG projects and about successes and failures in implementing impact evaluations.

This type of meeting is especially important for ensuring that the varied insights derived from impact and process evaluations, academic studies, and examinations of democracy assistance undertaken by independent researchers, NGOs, think tanks, and other donors are absorbed, discussed, and drawn into USAID DG planning and implementation. While only USAID has the ability to develop and carry out rigorous evaluations of its projects' impacts, many organizations are carrying out studies of various aspects of democracy assistance, and USAID's staff can benefit from the wide range of insights, hypotheses, and lessons learned being generated by the broader community involved with democracy promotion.

While it will take some time for USAID to learn from undertaking the pilot impact evaluations, it will gain immediately from augmenting its overall learning activities and increasing opportunities for DG staff to actively engage with current research and studies on democratization. Several committee members wish to emphasize the considerable value to policymakers and DG officers of the many books, articles, and reports prepared in recent years by academic researchers, think tanks, and practitioners. Whatever the methodological flaws of these case studies and process evaluations from a rigorous social sciences perspective, this expanding literature has provided important lessons and insights for crafting effective DG programs.

Turning Individual Experience into Organizational Experience: Voices from the Field

Realizing that its DG officers often had valuable insights and experiences gained from years of implementing projects in various conditions around the world, USAID's Democracy Office began a pilot project under its Strategic and Operational Research Agenda (SORA) in 2005 to collect this information systematically. Called collectively "Voices from the Field," this pilot project attempted to use extensive anonymous interviews with DG officers who had served in two or more missions around the world to understand whether there were attributes that commonly led to project success and/or failure. In this pilot phase of the project, SORA developed a standard set of interview questions for each of its initial participants. Given SORA's mission, these questions were largely designed to elicit descriptions of the best and worst projects in which the DG officer had participated (see the interview protocol in Appendix F). It then conducted interviews with eight participants, each of which lasted about two hours. The results of these interviews revealed a wide range of responses, although common trends in project success and failure also seemed to emerge.

As part of its efforts to explore methodologies that could be used to learn from past experiences, USAID asked the committee to offer suggestions as to how the Voices from the Field project might be expanded and integrated into the overall SORA research design. Based on discussions with current and former DG officers, the committee decided to explore various options for expanding this project during at least one of its field visits (see Appendix E). Practical issues the committee wanted to understand about a potential Voices from the Field project included how frequently such interviews or debriefs should occur, who should conduct such interviews or collect such insights and experiences, and in which format(s) the information should be collected and disseminated. In addition, one issue that had not been explored in the initial pilot phase of the
project conducted by USAID was whether or not those people who work for USAID DG missions around the world as foreign service nationals (or non-American citizens) would be able to provide additional sources of insight.

While in Peru, the field team attempted to address these questions through a series of meetings with current DG officers and foreign service nationals, including a dedicated meeting with two foreign service nationals with considerable DG experience. As their tenure at the missions tends to be much longer than that of career DG officers, who move from one mission to another every one to four years, foreign service nationals tend to have a great deal of institutional knowledge and experience, often in particular subfields of DG programming such as decentralization or political party strengthening. It is their historical knowledge that often provides the continuity across projects over the long term. [Footnote: The field teams in Albania and Uganda met equally experienced foreign service nationals.]

With regard to the frequency with which interviews or debriefings should occur, it seemed that a systematic inquiry of this sort would optimally be conducted every 12 to 18 months. This time frame would be consistent with other annual reporting requirements and would largely be reflective of the natural life span of projects that DG officers and foreign service nationals oversee. Careful timing of interviews and debriefs is an important consideration given the workload of those in DG missions.

During the initial pilot phase of the Voices from the Field project, the interviews were conducted by USAID and the transcripts of the interviews were made available to USAID, although the interviewees' names were not attached to the transcripts. The committee was also interested to learn whether participants would feel more comfortable responding to an interviewer who did not work for USAID even if their responses were anonymous. There was a question as to whether or not participants would feel comfortable responding honestly when asked to identify the primary attributes of both successful and unsuccessful projects if USAID were asking the questions. During the field visit inquiries the team found that this was not a great concern to potential participants. In fact, they said they felt very comfortable providing honest responses, even when discussing less successful aspects of programs. Further, they remarked that such honest discussions were a routine part of their work at that mission. One aspect of their work, however, that those interviewed wanted to highlight to a greater extent was success in more routine matters. They expressed the desire to have a voice in sharing smaller everyday successes, which are often overlooked amid bigger projects, programs, and efforts. Finally, if these interviews were undertaken on a larger scale in the
future, the committee would be interested in learning which formats may be most beneficial in both collecting and disseminating information gathered from these interviews and debriefings. In Peru, foreign service nationals in particular expressed a willingness to participate in face-to-face interviews, to complete written surveys, or to complete surveys or interviews conducted through other means such as a Web-based interface. Their primary request, however, was that the results of the interviews or debriefings be widely shared with them and with other DG professionals around the world. They expressed concern that opportunities for learning may be lost if the interviews are conducted but no information on the insights or lessons learned reaches those working in the missions. There was great interest in learning from their experiences as well as those of colleagues around the world; therefore they hoped that information from such programs would flow both in from and out to the field missions.

Depending on the interview design, information collected through a Voices from the Field project focused on systematic debriefings of DG officers and foreign service nationals could offer very detailed information on project implementation or more general insights about potential sources of project success or failure. These would not be substitutes for the empirical evidence that impact evaluations could offer. They could, however, complement the annual face-to-face interactions of DG officers and partners recommended above by compiling a systematic record of experience; the results of these interviews might become part of the renewed conferences, further encouraging the sharing of experiences and collective learning.

As an opportunity for continued learning from its wealth of experiences, the concept of "Voices from the Field" is consistent with SORA's overall goal of better understanding what has worked, why, and under what conditions. Other organizations, such as the military, employ such systematic debriefing techniques, often with great benefit. On a more ambitious level, other, more academic uses of oral history could complement or be a resource for the retrospective studies discussed in Chapter 4. Even more ambitious efforts to use "truth telling" conferences to add information and explore the varying perceptions of key historical events that have influenced how USAID views its ability to affect democratization could potentially yield valuable insights. [Footnote: An example from the foreign policy field is the work of James Blight and his colleagues on the Cuban missile crisis (Blight and Welch 1989), which eventually included senior U.S., Soviet, and Cuban officials who had taken part in the decision-making process.]

Given the potential benefit of learning from the insights and expertise of DG officers and FSNs, the pilot project seems to offer USAID an opportunity to gain unique project-specific information it cannot acquire through other means. If incorporated into a larger framework designed to
increase learning across the organization, "Voices from the Field" would complement other systematic approaches to gathering and employing more rigorously obtained information. The committee therefore recommends that USAID consider a modest investment in continuing an improved "Voices from the Field" project, the results of which would be made available to USAID DG officers and FSNs. During the period of the evaluation initiative that we recommend in the next chapter, special attention might be given to interviews with those carrying out the new procedures for impact evaluations. If SORA decides to undertake additional retrospective efforts, either by commissioning its own case studies or systematically mining current academic research, then more ambitious oral history or "truth telling" conferences might be part of the mix.

While there is an opportunity to learn from this project, learning will occur only if the information it yields is systematically collected and disseminated to those who may gain from it, such as DG officers, FSNs, and other USAID employees involved in project direction and management. Further, as was clear from the discussions held in the field with DG professionals, their willingness to continue to participate in such efforts was largely linked to their ability to learn from the results. The insights and experiences collected must not only be studied, analyzed, and incorporated into a larger framework of learning, but they must also be shared in an easily accessible format with those who stand to directly gain from this information. This could be accomplished through the development of a Web-based interface where respondents could complete surveys and interviews via their work computers and also access the results of other respondents. Other dissemination options should also be considered, such as providing annual results at conferences and gatherings of DG officers and professionals. Whatever mechanism for collection and dissemination is selected, if USAID chooses to continue this project, it should follow standard best practices, and the results should be made widely available.

Conclusions

The potential changes to current USAID policy and practices discussed in this chapter range from specific suggestions for the contracting process to a broad shift in the organization toward a much more systematic effort to share and learn from its own work and that of others. In the next chapter we introduce a set of specific recommendations based around a DG evaluation initiative intended to increase the capacity of USAID to support and undertake a variety of well-designed impact evaluations, and to improve its organizational learning. We believe this initiative will demonstrate the value of increasing USAID's ability to assess exactly what its DG programs accomplish, and provide guidance to help USAID
better determine which projects to use, in which conditions, to best assist democratic progress.

REFERENCES

Blight, J.G., and Welch, D.A. 1989. On the Brink: Americans and Soviets Reexamine the Cuban Missile Crisis. New York: Hill and Wang.

Clapp-Wincek, C., and Blue, R. 2001. Evaluation of Recent USAID Evaluation Experience. Washington, DC: Center for Development Information and Evaluation, USAID.

Savedoff, W.D., Levine, R., and Birdsall, N. 2006. When Will We Ever Learn? Improving Lives Through Impact Evaluation. Washington, DC: Center for Global Development.

USAID (U.S. Agency for International Development). 2006. U.S. Foreign Assistance Reform. Available at: http://www.usaid.gov/about_usaid/dfa/. Accessed on August 2, 2007.

USAID ADS. 2007. Automated Directives System, Series 200. Available at: http://www.usaid.gov/policy/ads/200/. Accessed on August 2, 2007.

USAID/Uganda. 2007. Request for Proposals (RFP): Strengthening Democratic Linkages in Uganda. Kampala, Uganda: USAID/Uganda.


Over the past 25 years, the United States has made support for the spread of democracy to other nations an increasingly important element of its national security policy. These efforts have created a growing demand to find the most effective means to assist in building and strengthening democratic governance under varied conditions.

Since 1990, the U.S. Agency for International Development (USAID) has supported democracy and governance (DG) programs in approximately 120 countries and territories, spending an estimated total of $8.47 billion (in constant 2000 U.S. dollars) between 1990 and 2005. Despite these substantial expenditures, our understanding of the actual impacts of USAID DG assistance on progress toward democracy remains limited—and is the subject of much current debate in the policy and scholarly communities.

This book, by the National Research Council, provides a roadmap to enable USAID and its partners to assess what works and what does not, both retrospectively and in the future, through improved monitoring and evaluation methods and by rebuilding USAID's internal capacity to build, absorb, and act on improved knowledge.
