Glossary

Terms in italics are defined elsewhere in the Glossary.

Methodological Terms

Case: A spatially delimited phenomenon observed at a single point in time or over some period of time (for example, a political or social group, institution, or event). By construction, a case lies at the same level of analysis as the principal inference. Thus, if an inference pertains to the behavior of nation-states, the cases in that study will be nation-states. An individual case may also be broken down into one or more observations, sometimes referred to as within-case observations.

Case study: The intensive study of a single case for the purpose of understanding a larger class of similar units (a population of cases). Note that while "case study" is singular, focusing on a single unit, a "case study research design" may refer to a study that includes several cases (e.g., comparative-historical analysis or the comparative method). Synonym: within-case analysis.

Causal inference: Determining from data whether, minimally, a causal factor (X) is thought to raise the probability of an effect (Y) occurring.

Control: See Experiment.

Experiment: Generically, a research design in which the causal factor of interest (the treatment or intervention) is manipulated by the researcher so as to produce a more tractable analysis. Within social science circles the term is often equated with a research design in which an additional attribute obtains: cases are randomized across treatment and control groups. Antonym: observational.

External validity: See Validity.

Internal validity: See Validity.

N: See Observation.

Observation: The most basic element of any empirical endeavor; any piece of evidence enlisted to support a proposition. Conventionally, the number of observations in an analysis is referred to by the letter N. Confusingly, N is also used to refer to the number of cases.

Randomization: A process by which cases in a sample are chosen randomly (with respect to some subject of interest). An essential element for experiments that use control groups, since the treatment and control groups, prior to treatment, must be similar in all respects that are relevant to the inference, and the easiest way to achieve this is through random selection. Sometimes randomization occurs across matched pairs or within substrata of the sample (stratified random sampling), rather than across the entire population.

Research design: The way in which empirical evidence is brought to bear on a hypothesis.

Treatment: See Experiment.

Validity: Internal validity refers to the correctness of a hypothesis with respect to the sample (the cases actually studied by the researcher). External validity refers to the correctness of a hypothesis with respect to the population of an inference (cases not studied but that the inference is thought to explain). The key element of external validity thus rests on the representativeness of a sample, that is, its relative bias.

Variable: An attribute of an observation or a set of observations. In the analysis of causal relations, variables are understood either as independent (explanatory or exogenous), denoted X, or as dependent (endogenous), denoted Y.

Within-case analysis: See Case study.
X: See Variable.

Y: See Variable.
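The Randomization entry above can be illustrated with a short sketch. This is not part of the report; the function name, group labels, and sample cases are hypothetical, and the split assumes equal-sized treatment and control groups.

```python
import random

def randomize(cases, seed=None):
    """Randomly assign each case to a treatment or control group.

    Shuffling before splitting ensures that, in expectation, the two
    groups are similar in all respects relevant to the inference.
    """
    rng = random.Random(seed)
    shuffled = cases[:]  # copy so the original list of cases is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical sample of N = 6 cases (villages, say).
groups = randomize(["A", "B", "C", "D", "E", "F"], seed=1)
```

Stratified random sampling, mentioned in the entry, would apply the same shuffle-and-split step separately within each substratum of the sample.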
Types of Interventions

NOTE: USAID does not have a standard terminology to describe the various levels of activities it undertakes.

Activity: An intervention of a single type (e.g., training judges).

Intervention: Any activity or set of activities (e.g., project, program) undertaken by a funder. Usually employed in the context of an evaluation; here, the intervention is the independent variable whose effect on a policy outcome is being assessed.

Program: Includes all projects that address a particular USAID policy area, such as democracy and governance, health, or humanitarian assistance.

Project: Includes all activities within the scope of a particular contract or grant.

Types of Appraisals

Country assessment: Appraisal of policy performance at the country level (e.g., levels of corruption or quality of democracy). Purposes of country assessments include tracking progress and regress across countries (including democratic and authoritarian transitions) and identifying common patterns of transition and, possibly, the causal drivers of transition. This information should help funders decide in which countries investments might be most productive and which sectors of a country are most in need of assistance. Measured by meso- and macro-level indicators.

Evaluation: See below.

Monitoring: Routine oversight of a project's implementation (e.g., whether funds are spent properly and other terms of the contract are adhered to). Usually measured with outputs (e.g., number of judges trained).

Strategic: Appraisal of the opportunities and constraints in various countries for transition to democracy or the stabilization or better functioning of democracy. Should be based on hypotheses about the factors that drive or inhibit democracy in specific contexts. Strategic appraisals guide USAID's central decisions on how much democracy assistance to allot to specific countries in specific time periods. Country assessments, made by USAID DG missions, also involve a strategic appraisal.

Tactical: Appraisal of which programs should be employed, in which areas or sectors, to best assist a country's transition to, or stabilization of, democracy. Tactical decisions are generally made at the level of the USAID mission DG office, following a country assessment. Good tactical decisions depend on accumulated knowledge about the impacts of specific DG programs in particular contexts, gained through good evaluations.

Types of Evaluations

NOTE: Evaluations should be considered one type of appraisal.

Impact evaluation: A study of a project or set of projects that seeks to determine how observed outcomes differ from what most likely would have happened in the absence of the project(s). It does so by using comparison or control groups, or random assignment of assistance across groups or individuals, to provide a reference against which to assess the observed outcomes for groups or individuals who received assistance. Randomized designs offer the most accuracy and credibility in determining program impacts and therefore should be the first choice, where feasible, for impact evaluation designs. However, such designs are not always feasible or appropriate, and a number of other designs also provide useful information, though with diminishing degrees of confidence, for determining the impact of many different kinds of assistance projects.

Output evaluation (generally equivalent to "project monitoring" within USAID): These evaluations document the degree to which a program has achieved certain targets in its activities. Targets may include spending specific sums on various activities, giving financial support or training to a certain number of nongovernmental organizations (NGOs) or media outlets, training a certain number of judges or legislators, or carrying out activities involving a certain number of villagers or citizens. Output evaluations, or monitoring, are important for ensuring that activities are carried out as planned and that money is spent for the intended purposes.
Participatory evaluation: Individuals, groups, or communities that will receive assistance are involved in the development of project goals, and investigators interview or survey participants during and/or after a project is carried out to determine what their goals and expectations for the project are, how valuable the activity was to them, and whether they were satisfied with the project's results.

Process evaluation: Focuses on how and why a program unfolded in a particular fashion and, if there were problems, on why things did not go as originally planned. Usually conducted after completion of a project, often using teams of experts who conduct interviews and examine project records. Currently the primary source of "lessons learned" and "best practices" intended to inform and assist project managers and implementers.
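The core logic of an impact evaluation with a control group can be sketched as a simple difference in mean outcomes between the assisted and control groups. This sketch is illustrative only; the function name and the outcome scores are hypothetical, not drawn from the report, and a real impact evaluation would also assess the statistical uncertainty of the estimate.

```python
def impact_estimate(treated_outcomes, control_outcomes):
    """Estimate program impact as the difference in mean outcomes between
    the group that received assistance and the control group, which stands
    in for what most likely would have happened without the project."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_outcomes) - mean(control_outcomes)

# Hypothetical outcome scores (e.g., a local governance index) for
# randomly assigned treatment and control groups.
treated = [7.0, 6.5, 8.0, 7.5]
control = [6.0, 5.5, 6.5, 6.0]
effect = impact_estimate(treated, control)  # 7.25 - 6.0 = 1.25
```

With random assignment, as the Impact evaluation entry notes, this difference can credibly be attributed to the project rather than to preexisting differences between the groups.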
Types of Indicators

Indicator: Generically, any operational measure of an underlying concept. May be measured at local, regional, or national levels. Usually quantitative in nature, although it may be formed from data originally gathered in a qualitative format. Includes outputs, outcomes, and meso- and macro-level indicators, as discussed below. For USAID's purposes, good indicators are valid (the measurement is in accordance with the underlying concept), cross-nationally comparable, and reliable (different applications of the indicator yield similar if not identical measurements).

Macro-level indicators: Measure country-level features at a highly aggregated level (e.g., democracy). Used for country assessment.

Meso- or sectoral-level indicators: Measure country-level features in some rather specific policy area (e.g., elections). Used for country assessment and, very occasionally, for program/project evaluation.

Outcomes: Measure the impact of an intervention on some aspect of society. Used for program/project evaluation.

Outputs: Measure the specific targets of a program. Often used for program/project monitoring.

Substantive Concepts

Authoritarian regimes: Governments in which leaders are not chosen by competitive elections and in which all political opposition is repressed. All media, local government, the judiciary, and the legislature are tightly controlled by the executive.

Democracy: Generally, rule by the people; also known as popular sovereignty; an aspect of governance. In reaching for a more specific definition, two general strategies may be identified. Minimalist definitions usually center on the idea of contestation (competition). Maximalist (ideal-type) definitions add further qualifiers such as liberty/freedom, accountability, responsiveness, deliberation, participation, political equality, and social equality.
Full democracy: A system of government in which leaders are chosen by open and fair electoral competition and in which all of the political liberties and rights needed to ensure such open and fair competition (personal security and nondiscrimination, rule of law, accountability of officials, civilian control, and freedom of speech, assembly, and media) are well institutionalized and protected.
Governance: The quality of government (e.g., rule of law, low corruption, high efficiency, high performance on dimensions deemed valuable for improving human welfare). May include some or all features of democracy.

Partial democracy: A system of government in which leaders are chosen by electoral competition, but such competition is not fully open or fair, and in which many of the political liberties and rights needed to ensure open and fair competition are absent or irregular. Elections are often marked by violence or disorder, elected officials are not fully accountable, and certain groups may be excluded from politics or disadvantaged by state control of media or electoral procedures.

Semiauthoritarian regimes: Governments in which leaders are not chosen by competitive elections but in which some political liberties are allowed. Leaders do stand for elections, but the eligibility and activities of the opposition are so tightly constrained that the outcome is never in doubt. There may be some independent media, some opposition political parties, and some diversity of representation in parliament or local governments. There may be some elements of the judiciary or electoral monitoring that function with autonomy.