Improving Democracy Assistance: Building Knowledge through Evaluations and Research

Glossary

Terms in italics are defined elsewhere in the Glossary.

METHODOLOGICAL TERMS

Case: A spatially delimited phenomenon observed at a single point in time or over some period of time—for example, a political or social group, institution, or event. By construction, a case lies at the same level of analysis as the principal inference. Thus, if an inference pertains to the behavior of nation-states, the cases in that study will be nation-states. An individual case may also be broken down into one or more observations, sometimes referred to as within-case observations.

Case study: The intensive study of a single case for the purpose of understanding a larger class of similar units (a population of cases). Note that while “case study” is singular—focusing on a single unit—a “case study research design” may refer to a study that includes several cases (e.g., comparative-historical analysis or the comparative method). Synonym: within-case analysis.

Causal inference: Determining from data whether, minimally, a causal factor (X) raises the probability of an effect (Y) occurring.

Control: See Experiment.

Experiment: Generically, a research design in which the causal factor of interest (the treatment or intervention) is manipulated by the researcher so as to produce a more tractable analysis. Within social science circles, the term is often equated with a research design in which an additional attribute obtains: cases are randomized across treatment and control groups. Antonym: observational.

External validity: See Validity.

Internal validity: See Validity.

N: See Observation.

Observation: The most basic element of any empirical endeavor; any piece of evidence enlisted to support a proposition. Conventionally, the number of observations in an analysis is referred to by the letter N. Confusingly, N is also used to refer to the number of cases.

Randomization: A process by which cases in a sample are chosen randomly (with respect to some subject of interest). An essential element of experiments that use control groups, since the treatment and control groups, prior to treatment, must be similar in all respects relevant to the inference, and the easiest way to achieve this is through random selection. Sometimes randomization occurs across matched pairs or within substrata of the sample (stratified random sampling) rather than across the entire population.

Research design: The way in which empirical evidence is brought to bear on a hypothesis.

Treatment: See Experiment.

Validity: Internal validity refers to the correctness of a hypothesis with respect to the sample (the cases actually studied by the researcher). External validity refers to the correctness of a hypothesis with respect to the population of an inference (cases not studied but that the inference is thought to explain). External validity thus rests on the representativeness of the sample—that is, its degree of bias.

Variable: An attribute of an observation or a set of observations. In the analysis of causal relations, variables are understood either as independent (explanatory or exogenous), denoted X, or as dependent (endogenous), denoted Y.

Within-case analysis: See Case study.

X: See Variable.

Y: See Variable.
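The randomization entry above distinguishes simple random assignment from stratified random assignment within substrata of the sample. The following minimal sketch illustrates both; the function names, village labels, and strata are hypothetical and are not drawn from this report.

```python
import random

def randomize(cases, seed=None):
    """Simple randomization: shuffle cases and split them into
    treatment and control groups."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

def stratified_randomize(cases, stratum_of, seed=None):
    """Stratified random assignment: randomize within each substratum,
    so every stratum is split evenly between treatment and control."""
    rng = random.Random(seed)
    strata = {}
    for case in cases:
        strata.setdefault(stratum_of(case), []).append(case)
    treatment, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control

# Hypothetical example: assign 8 villages (4 urban, 4 rural) to groups,
# guaranteeing each stratum contributes equally to both groups.
villages = [("v1", "urban"), ("v2", "urban"), ("v3", "urban"), ("v4", "urban"),
            ("v5", "rural"), ("v6", "rural"), ("v7", "rural"), ("v8", "rural")]
t, c = stratified_randomize(villages, stratum_of=lambda v: v[1], seed=0)
```

Stratification matters when a stratum (here, urban vs. rural) is itself relevant to the inference: simple randomization could by chance place most urban villages in one group, while the stratified variant cannot.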
TYPES OF INTERVENTIONS

NOTE: USAID does not have a standard terminology for the various levels of activities it undertakes.

Activity: An intervention of a single type (e.g., training judges).

Intervention: Any activity or set of activities (e.g., project, program) undertaken by a funder. Usually employed in the context of an evaluation; here, the intervention is the independent variable whose effect on a policy outcome is being assessed.

Program: Includes all projects that address a particular USAID policy area, such as democracy and governance, health, or humanitarian assistance.

Project: Includes all activities within the scope of a particular contract or grant.

TYPES OF APPRAISALS

Country assessment: Appraisal of policy performance at the country level (e.g., levels of corruption or quality of democracy). Purposes of country assessments include tracking progress and regress across countries (including democratic and authoritarian transitions) and identifying common patterns of transition and, possibly, the causal drivers of transition. This information should help funders decide in which countries investments might be most productive and which sectors of a country are most in need of assistance. Measured by meso- and macro-level indicators.

Evaluation: See below.

Monitoring: Routine oversight of a project’s implementation (e.g., whether funds are spent properly and other terms of the contract are adhered to). Usually measured with outputs (e.g., number of judges trained).

Strategic: Appraisal of the opportunities and constraints in various countries for transition to democracy or for the stabilization or better functioning of democracy. Should be based on hypotheses about the factors that drive or inhibit democracy in specific contexts. Strategic appraisals guide USAID’s central decisions on how much democracy assistance to allot to specific countries in specific time periods. Country assessments, made by USAID DG missions, also involve a strategic appraisal.

Tactical: Appraisal of which programs should be employed, in which areas or sectors, to best assist a country’s transition to, or stabilization of, democracy. Tactical decisions are generally made at the level of the USAID mission DG office, following a country assessment. Good tactical decisions depend on accumulated knowledge about the impacts of specific DG programs in particular contexts, gained through good evaluations.

TYPES OF EVALUATIONS

NOTE: Evaluations should be considered one type of appraisal.

Impact evaluation: A study of a project or set of projects that seeks to determine how observed outcomes differ from what would most likely have happened in the absence of the project(s). Comparison or control groups, or random assignment of assistance across groups or individuals, provide a reference against which to assess the observed outcomes for those who received assistance. Randomized designs offer the most accuracy and credibility in determining program impacts and therefore should be the first choice, where feasible, for impact evaluation designs. However, such designs are not always feasible or appropriate; a number of other designs also provide useful information, though with diminishing degrees of confidence, for determining the impact of many different kinds of assistance projects.

Output evaluation (generally equivalent to “project monitoring” within USAID): Efforts to document the degree to which a program has achieved certain targets in its activities. Targets may include spending specific sums on various activities, giving financial support or training to a certain number of nongovernmental organizations (NGOs) or media outlets, training a certain number of judges or legislators, or carrying out activities involving a certain number of villagers or citizens. Output evaluations or monitoring are important for ensuring that activities are carried out as planned and that money is spent for the intended purposes.

Participatory evaluation: Individuals, groups, or communities that will receive assistance are involved in developing project goals, and investigators interview or survey participants during and/or after the project to determine what their goals and expectations were, how valuable the activity was to them, and whether they were satisfied with the project’s results.

Process evaluation: Focuses on how and why a program unfolded in a particular fashion and, if there were problems, on why things did not go as originally planned. Usually conducted after completion of a project, often using teams of experts who conduct interviews and examine project records. Currently the primary source of “lessons learned” and “best practices” intended to inform and assist project managers and implementers.
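The logic of an impact evaluation with a control group can be sketched in a few lines: the control group stands in for the counterfactual, and the estimated impact is the difference between mean outcomes in the two groups. This is a minimal illustration with hypothetical data; the indicator, scores, and function names are invented for the example.

```python
def mean(xs):
    """Arithmetic mean of a sequence of outcome scores."""
    return sum(xs) / len(xs)

def estimated_impact(treated_outcomes, control_outcomes):
    """Difference-in-means estimate of program impact.

    The control group provides a reference for what would most likely
    have happened to the treated units in the absence of the project."""
    return mean(treated_outcomes) - mean(control_outcomes)

# Hypothetical outcome indicator (e.g., a 0-100 civic-knowledge score)
treated = [62, 58, 71, 66]   # units that received the assistance project
control = [55, 49, 60, 52]   # comparable units that did not
effect = estimated_impact(treated, control)
```

With random assignment, this simple difference is a credible impact estimate because the groups were similar before treatment; with a non-randomized comparison group, the same arithmetic carries diminishing confidence, as the entry above notes.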
TYPES OF INDICATORS

Indicator: Generically, any operational measure of an underlying concept. May be measured at local, regional, or national levels. Usually quantitative in nature, although may be formed from data originally gathered in a qualitative format. Includes outputs, outcomes, and meso- and macro-level indicators, as discussed below. For USAID’s purposes, good indicators are valid (the measurement accords with the underlying concept), cross-nationally comparable, and reliable (different applications of the indicator yield similar if not identical measurements).

Macro-level indicators: Measure country-level features at a highly aggregated level (e.g., democracy). Used for country assessment.

Meso- or sectoral-level indicators: Measure country-level features in some rather specific policy area (e.g., elections). Used for country assessment and, very occasionally, for program/project evaluation.

Outcomes: Measure the impact of an intervention on some aspect of society. Used for program/project evaluation.

Outputs: Measure the specific targets of a program. Often used for program/project monitoring.

SUBSTANTIVE CONCEPTS

Authoritarian regimes: Governments in which leaders are not chosen by competitive elections and in which all political opposition is repressed. All media, local government, the judiciary, and the legislature are tightly controlled by the executive.

Democracy: Generally, rule by the people; also known as popular sovereignty; an aspect of governance. In reaching for a more specific definition, two general strategies may be identified. Minimalist definitions usually center on the idea of contestation (competition). Maximalist (ideal-type) definitions add further qualifiers such as liberty/freedom, accountability, responsiveness, deliberation, participation, political equality, and social equality.

Full democracy: A system of government in which leaders are chosen by open and fair electoral competition and in which all of the political liberties and rights needed to ensure such open and fair competition—personal security and nondiscrimination, rule of law, accountability of officials, civilian control, and freedom of speech, assembly, and media—are well institutionalized and protected.
Governance: The quality of government (e.g., rule of law, low corruption, high efficiency, high performance on dimensions deemed valuable for improving human welfare). May include some or all features of democracy.

Partial democracy: A system of government in which leaders are chosen by electoral competition, but such competition is not fully open or fair, and in which many of the political liberties and rights needed to ensure open and fair competition are absent or irregular. Elections are often marked by violence or disorder, elected officials are not fully accountable, and certain groups may be excluded from politics or disadvantaged by state control of media or electoral procedures.

Semiauthoritarian regimes: Governments in which leaders are not chosen by competitive elections but in which some political liberties are allowed. Leaders do stand for elections, but the eligibility and activities of the opposition are so tightly constrained that the outcome is never in doubt. There may be some independent media, some opposition political parties, and some diversity of representation in parliament or local governments. There may be some elements of the judiciary or electoral monitoring that function with autonomy.