Suggested Citation:"3 Measuring Democracy." National Research Council. 2008. Improving Democracy Assistance: Building Knowledge Through Evaluations and Research. Washington, DC: The National Academies Press. doi: 10.17226/12164.


3 Measuring Democracy

Introduction

One of the U.S. Agency for International Development's (USAID) charges to the National Research Council committee was to develop an operational definition of democracy and governance (DG) that disaggregates the concept into clearly defined and measurable components. The committee sincerely wishes that it could provide such a definition, based on current research into the measurement of democratic behavior and governance. However, in the current state of research, only the beginnings of such a definition can be provided. As detailed below, there is as much disagreement among scholars and practitioners about how to measure democracy, or how to disaggregate it into components, as on any other aspect of democracy research. The result is that there exists a welter of competing definitions and breakdowns of "democracy," marketed by rivals, each claiming to be a superior method of measurement, and each the subject of sharp and sometimes scathing criticism.

Helpful comments on this chapter were received from Macartan Humphreys, Fabrice Lehoucq, and Jim Mahoney. The committee is especially grateful to those who attended a special meeting on democracy indicators held at Boston University in January 2007: David Black, Michael Coppedge, Andrew Green, Rita Guenther, Jonathan Hartlyn, Jo Husbands, Gerardo Munck, Margaret Sarles, Fred Schaffer, Richard Snyder, Paul Stern, and Nicolas van de Walle. See Appendix C for further information.

The committee believes that democracy is an inherently multidimensional concept, and that broad consensus on those dimensions and how

to aggregate them may never be achieved. Thus, if USAID is seeking an operational measure of democracy to track changes in countries over time and where it is engaged, a more practical approach would be to disaggregate the various components of democracy and track changes in democratization by looking at changes in those components.

Yet even for the varied components of democracy, there are no available measures that are widely accepted and have demonstrated the validity, accuracy, and sensitivity that would make them useful for USAID in tracking modest changes in democratic conditions in specific countries. The development of a widely recognized disaggregated definition of democracy, with clearly defined and objectively measurable components, would be the result of a considerable research project that is yet to be done.

This chapter provides an analysis of existing measures of democracy and points the way toward developing a disaggregated measure of the type requested by USAID. The committee finds that most existing measures of democracy are adequate, and in fair agreement, at the level of crude determination of whether a country is solidly democratic, autocratic, or somewhere in between. However, the committee also finds that all existing measures are severely inadequate at tracking small movements or small differences in levels of democracy between countries or in a single country over time. Moreover, the committee finds that existing measures disaggregate democracy in very different ways and that their measures of various components of democracy do not provide transparent, objective, independent, or reliable indicators of change in those components over time.
While recognizing that it may seem self-serving for an academic committee to recommend "more research," it is the committee's belief—after surveying the academic literature and convening a workshop of experts in democracy measures to discuss the issue—that if USAID wishes a measure of democracy that it can use to gauge the impact of its programs and track the progress of countries in which it is active, it faces a stark choice: either rely on the current flawed measures of democracy or help support the development of a research project on democracy indicators that—it is hoped—will eventually produce a set of indicators with the broadly accepted integrity of today's national accounts indicators for economic development.

To provide just a few examples to preview the discussion below, USAID manages its DG programs with an eye toward four broad areas: rule of law, elections, civil society, and good governance. Yet consider the two most widely used indicators of democracy: the Polity autocracy/democracy scale and the Freedom House scales of civil liberties and political rights. The former breaks down its measures of democracy into three components: executive recruitment, executive constraints, and political

competition, measured by six underlying variables. While some of these could be combined to provide indicators of elections, civil society, and aspects of rule of law, Polity does not address "good governance." Moreover, the validity of the various components and underlying variables in Polity is so greatly debated that there is no reason to believe that a measure of rule of law based on the Polity components would be accepted. Freedom House rates nations on two scales: civil liberties (which conflates rule of law, civil society, and aspects of good governance) and political rights (which conflates rule of law, elections, and aspects of good governance). Even if these scales were based on objective and transparent measurements (and they are not), there would be no way to extract from them information on components relevant to USAID's DG policy areas.

Fortunately, while more sensitive and accurate measures to track sectoral movements toward or away from democracy are vital to improving USAID's policy planning and targeting of DG programs, USAID can still gain knowledge on the impacts of its programs by focusing on changes in outcome indicators at a level relevant to those projects (for which methodologies are examined in Chapters 5 through 7). That is, USAID should seek to determine whether its projects lead to more independent and effective behavior by judges and legislators, broader electoral participation and understanding by citizens, more competitive and fair election practices, fewer corrupt officials, and other concrete changes. The issue of how much those changes contribute to overall trajectories of democracy or democratic consolidation is one that can only be solved by future experience and study and the development of better disaggregated measures for tracking democracy at the sectoral level.
The committee thus agrees that USAID is correct in focusing its interest in measurement on developing a measure of democracy that is disaggregated into discrete and measurable components. This chapter will analyze existing approaches to measuring democracy, identify why they are flawed, and point the way toward what the committee believes will be a more useful approach to developing disaggregated sectoral or meso-level measures (Table 2-1).

Problems with Extant Indicators

A consensus is growing within the scholarly community that existing indicators of democracy are problematic. These problems may be grouped into five categories: (1) problems of definition, (2) sensitivity issues, (3) measurement errors and data coverage, (4) aggregation problems, and (5) lack of convergent validity. What follows is a brief, sometimes rather technical, review of these problems and their repercussions. Definitions of key terms are provided in the text or in the Glossary at the end of the report.

See Bollen (1993), Beetham (1994), Gleditsch and Ward (1997), Bollen and Paxton (2000), Foweraker and Krznaric (2000), McHenry (2000), Munck and Verkuilen (2002), Treier and Jackman (2003), Berg-Schlosser (2004a, b), Acuna-Alfaro (2005), and Vermillion (2006).

The focus of the discussion is on several leading democracy indicators: (1) Freedom House; (2) Polity; (3) ACLP ("ACLP" stands for the names of the creators—Alvarez, Cheibub, Limongi, and Przeworski; Alvarez et al 1996; recently expanded by Boix and Rosato 2001); and (4) the Economist Intelligence Unit (EIU). Freedom House provides two indices: "Political Rights" and "Civil Liberties" (sometimes employed in tandem, sometimes singly). Both are seven-point scales extending back to 1972 and cover most sovereign and semisovereign nations. Polity also provides two aggregate indices: "Democracy" and "Autocracy." Both are 10-point scales and are usually used in tandem (by subtracting one from the other), which provides the 21-point (−10 to +10) Polity2 variable. Coverage extends back to 1800 for sovereign countries with populations greater than 500,000. ACLP codes countries dichotomously (autocracy/democracy) and includes most sovereign countries from 1950 to 1990. The expanded dataset provided by Boix and Rosato (2001) stretches back to 1800. The EIU has recently developed a highly disaggregated index of democracy with 5 core dimensions and 60 subcomponents, which are combined into a single index of democracy (Kekic 2007). Coverage extends to 167 sovereign or semisovereign nations but only in 2006.

Glancing reference will be made to other indicators in an increasingly crowded field, and many of the points made in the following discussion apply quite broadly. However, it is important to bear in mind that each indicator has its own particular strengths and weaknesses. The following brief survey does not purport to provide a comprehensive review.

See www.freedomhouse.org.
Both are drawn from the most recent iteration of this project, known as Polity IV. See www.cidcm.umd.edu/inscr/polity.

Jose Cheibub and Jennifer Gandhi are currently engaged in updating the ACLP dataset, but results are not yet available. See Bollen (1980), Coppedge and Reinicke (1990), Arat (1991), Hadenius (1992), Vanhanen (2000), Altman and Pérez-Liñán (2002), Gasiorowski (1996; updated by Reich 2002 [also known as the "Political Regime Change" (PRC) dataset]), and Moon et al (2006).

The most detailed and comprehensive recent reviews are Hadenius and Teorell (2005) and Munck and Verkuilen (2002). See also Bollen (1993), Beetham (1994), Gleditsch and Ward (1997), Bollen and Paxton (2000), Elkins (2000), Foweraker and Krznaric (2000), McHenry (2000), Casper and Tufis (2003), Treier and Jackman (2003), Berg-Schlosser (2004a, b), Acuna-Alfaro (2005), and Bowman et al (2005).
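The mechanics of the Polity2 variable described above can be sketched in a few lines; the function name and range checks are ours, not Polity's, and in the actual Polity IV data Polity2 also applies special rules for interruptions and transitions that the bare subtraction below does not capture.

```python
def polity2(democracy: int, autocracy: int) -> int:
    """Polity2 as described above: Democracy minus Autocracy, yielding
    a 21-point scale from -10 (full autocracy) to +10 (full democracy)."""
    if not (0 <= democracy <= 10 and 0 <= autocracy <= 10):
        raise ValueError("each Polity scale runs from 0 to 10")
    return democracy - autocracy

print(polity2(10, 0))  # a fully institutionalized democracy -> 10
print(polity2(0, 7))   # a strongly autocratic regime -> -7
```

Note how the subtraction collapses two separate scales into one number: a country scoring moderately on both scales and a country scoring zero on both end up indistinguishable.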

Definition

There are many ways to define democracy, and each naturally generates a somewhat different approach to measurement (Munck and Verkuilen 2002). Some definitions are extremely "thin," focusing mainly on the presence of electoral competition for national office. The ACLP index exemplifies this approach: Countries that have changed national leadership through multiparty elections are democracies; other countries are not. Other definitions are rather "thick," encompassing a wide range of social, cultural, and legal characteristics well beyond elections. For example, the Freedom House Political Rights Index includes the following questions pertaining to corruption:

Has the government implemented effective anticorruption laws or programs to prevent, detect, and punish corruption among public officials, including conflict of interest? Is the government free from excessive bureaucratic regulations, registration requirements, or other controls that increase opportunities for corruption? Are there independent and effective auditing and investigative bodies that function without impediment or political pressure or influence? Are allegations of corruption by government officials thoroughly investigated and prosecuted without prejudice, particularly against political opponents? Are allegations of corruption given wide and extensive airing in the media? Do whistle-blowers, anticorruption activists, investigators, and journalists enjoy legal protections that make them feel secure about reporting cases of bribery and corruption? What was the latest Transparency International Corruption Perceptions Index score for this country? (Freedom House 2007)

It may be questioned whether these aspects of governance, important though they may be, are integral components of democracy.
More generally, many scholars treat good governance as a likely result of democracy; yet many donors (including USAID) treat good governance as an essential component of democracy. Similar complaints might be registered about other concepts and scales of democracy; some are so "thick" as to include diverse elements of accountability, even distributional equity and economic growth.

For example, some definitions treat the United States as a democracy from the passage of its Constitution and first national election in 1789. Yet since George Washington ran uncontested in both 1789 and 1792, even ACLP would not treat the United States as democratic until the appearance of contested multiparty elections in 1796. If slavery is considered a contravention of democracy, the United States could not be considered a democracy until its abolition throughout its territory in 1865. If women's right to vote is also considered essential to the definition of democracy, the United States does not qualify until 1920. And if the disenfranchisement of African Americans in southern states is considered a block to democracy, the United States does not become a full democracy until passage of the Voting Rights Act in 1965.

In short, only a "thin" definition of democracy would classify the United States as "fully democratic" from the early nineteenth century. Yet most donor agencies are reluctant to adopt such thin measures as a guide to current democracy assessments, questioning whether "thin" indices of democracy capture all the critical features of this complex concept. The problem of definition is critical but very difficult to resolve.

Sensitivity

A related issue is that many of the leading democracy indicators are not sensitive to important gradations in the quality of democracy across countries or through time. At the extreme, dichotomous measures such as ACLP reduce democracy to a dummy variable: A country either is or is not a democracy, with no intermediate stages permitted. While useful for certain purposes, one may wonder whether this captures the complexity of such a variegated concept (Elkins 2000). At best it captures one or two dimensions of democracy (those employed as categorizing principles), while the rest are necessarily ignored.

Most democracy indicators allow for a more elongated scale. As noted above, Freedom House scores democracy on a seven-point index (14 points if the Political Rights and Civil Liberties indices are combined). Polity provides a total of 21 points if the Democracy and Autocracy scales are merged into the Polity2 variable, which gives the impression of considerable sensitivity. In practice, however, country scores stack up at a few places (notably, −7 for autocracies and +10 for full democracies, the highest possible score), suggesting that the scale is not as sensitive as it purports to be. The EIU index is by far the most sensitive and does not appear to be arbitrarily "bunched." Note that all extant indicators are bounded to some degree and therefore constrained.
Questions can also be raised about whether these indices are properly regarded as interval scales (Treier and Jackman 2003); the committee does not envision an easy solution to this problem.

This means that there is no way to distinguish the quality of democracy among countries that have perfect negative or positive scores. This is fine as long as there really is no difference in the quality of democracy among these countries. Yet the latter assumption is highly questionable. Consider that in 2004, Freedom House assigned the highest score (1) on its Political Rights Index to the following 58 countries: Andorra, Australia, Austria, Bahamas, Barbados, Belgium, Belize, Bulgaria, Canada, Cape Verde, Chile, Costa Rica, Cyprus (Greek),

Czech Republic, Denmark, Dominica, Estonia, Finland, France, Germany, Greece, Grenada, Hungary, Iceland, Ireland, Israel, Italy, Japan, Kiribati, Latvia, Liechtenstein, Luxembourg, Malta, Marshall Islands, Mauritius, Micronesia, Nauru, Netherlands, New Zealand, Norway, Palau, Panama, Poland, Portugal, San Marino, Slovakia, Slovenia, South Africa, South Korea, Spain, St. Kitts and Nevis, St. Lucia, Suriname, Sweden, Switzerland, Tuvalu, United Kingdom, United States, and Uruguay. Are we really willing to believe that there are no substantial differences in the quality of democracy among these diverse polities?

Measurement Errors and Data Coverage

Democracy indicators often suffer from measurement errors and/or missing data.[10] Some (e.g., Freedom House) are based largely on expert judgments, judgments that may or may not reflect facts on the ground.[11] Some (e.g., Freedom House in the 1970s and 1980s) rely heavily on secondary accounts from a few newspapers such as the New York Times. These accounts may or may not be trustworthy and almost assuredly do not provide comprehensive coverage of the world. Moreover, newspaper accounts suffer from extreme selection bias, depending almost entirely on the location of the newspaper's reporters. Thus, if the New York Times has a reporter in Mexico but none in Central America, coverage of the latter is going to be much spottier than that of the former. In an attempt to improve coverage and sophistication, some indices (e.g., EIU) impute a large quantity of missing data. This is a dubious procedure wherever data coverage is limited, as it seems to be for many of the EIU variables. Note that many of the EIU variables rely on polling data, which are available on a highly irregular basis for 100 or so nation states. The quality of many of the surveys on which the EIU draws has not been clearly established.
This means that data for these questions must be estimated by country experts for all other cases, estimated to be about half of the sample. (The procedures employed for this estimation are not known.)

The precise period in question stretches from December 1, 2003, to November 30, 2004; obtained from http://www.freedomhouse.org/template.cfm?page=15&year=2006 (accessed on September 21, 2006).

[10] For general treatments of the problem of conceptualization and measurement, see Adcock and Collier (2001).

[11] With respect to the general problem of expert judgments, see Tetlock (2005), who found that expert opinions tended to reflect more the consensus of the expert community than an objective "truth," inasmuch as his surveys of experts produced answers that were often, in retrospect, no more accurate than a coin toss.

Wherever human judgments are required for coding, one must be

concerned about the basis of the respondent's decisions. In particular, one wonders whether coding decisions about particular topics (e.g., press freedom) may reflect an overall sense to outside experts of how democratic country A is, rather than an independent evaluation of the question at hand. The committee also worries about the problem of endogeneity of the evaluations, that is, with experts looking more at what other experts and indicators are doing rather than making their own independent evaluation of the country. The intercoder "reliability" may be little more than an artifact of experts accepting other experts' judgments. In this respect, "disaggregated" indicators are often considerably less disaggregated than they appear. Note that it is the ambiguity of the questionnaires underlying these surveys that fosters this sort of premature aggregation.

The committee undertook a limited statistical examination of the Freedom House scores for 2007 on their key components—for political rights this included electoral process, pluralism and participation, and functioning of government; for civil liberties these were freedom of expression, association and organizational rights, rule of law, and personal autonomy and individual rights (see Appendix C). Across all countries, two-way correlations among the seven components were never less than 0.86 and in several cases were 0.95 or greater. This high correlation could imply that democracy is indeed a far "smoother" condition than the "lumpy" view expressed in this study. That is, the high correlation among the items suggests that picking any one is just about as good as picking any other. Yet the committee doubts the independence of the judgments on each of the components of the scale. The EIU democracy scale also is divided into components: civil rights, elections, functioning of government, participation, and culture.
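The pattern just described, with component scores so highly correlated that any one is nearly interchangeable with the others, is what one would expect if raters' judgments on each component all reflect a single overall impression of a country. A minimal simulation illustrates this; all data below are synthetic, and no real Freedom House or EIU scores are used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_components = 150, 7

# A single latent "overall democracy" impression drives every component
# score; each component adds only a little independent noise.
latent = rng.normal(size=(n_countries, 1))
scores = latent + 0.3 * rng.normal(size=(n_countries, n_components))

# Pairwise Pearson correlations among the 7 simulated components.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_components, dtype=bool)]
print("lowest pairwise correlation:", round(off_diag.min(), 2))

# Share of total variance captured by the largest eigenvalue of the
# correlation matrix (the first principal component).
eigvals = np.linalg.eigvalsh(corr)
print("first-factor share of variance:", round(eigvals[-1] / eigvals.sum(), 2))
```

Although the seven columns are nominally separate, every pairwise correlation lands near 0.9 and a single factor absorbs roughly 90 percent of the variance, echoing the committee's point that high intercomponent correlations are exactly what nonindependent judgments would produce.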
Taking the Freedom House and EIU components together, a factor analysis reveals that a single factor loading explains 83 percent of the variance across all 12 components, and the two principal factors explain 90 percent of the variance (Coppedge 2007). This, by itself, is not problematic; it could be that good/bad things go together; that is, countries that are democratic on one dimension are also democratic on another. However, it raises concern about the actual independence of the various components in these indices. It could be, in other words, that respondents (either experts or citizens) who are asked about different dimensions of a polity are, in fact, simply reflecting their overall sense of a country's democratic culture. It also suggests that the various independent components in fact contain no more useful information than the principal one or two factors.

Adding to worries about measurement error is the general absence of intercoder reliability tests as part of the coding procedure. Freedom House does not conduct such tests (or at least does not make them public). Polity does so, but it requires a good deal of hands-on training before coders reach an acceptable level of coding accuracy. This suggests that other coders would not reach the same decisions simply by reading Polity's coding manual or that artificial uniformity is imposed. And this, in turn, points to a potential problem of conceptual validity: Key concepts are not well matched to the empirical data.

Aggregation

Since democracy is a multifaceted concept, all composite indicators must wrestle with the aggregation problem—how to weight the components of an index and which components to include. For aggregation to be successful, the rules must be clear, operational, and consistent with common notions of what democracy is; that is, the resulting concept must be valid. It goes almost without saying that different solutions to the aggregation problem lead to quite different results (Munck and Verkuilen 2002; for a possible exception to this dictum, see Coppedge and Reinicke 1990).

Although most indicators have fairly explicit aggregation rules, they are often difficult to comprehend, and consequently to apply. They may also include "wild card" elements, allowing the coder free rein to assign a final score that accords with his or her overall impression of a country (e.g., Freedom House). In some cases (e.g., Polity), components are listed separately, which helps clarify the final score a country receives. However, in Polity's case the components of the index are themselves highly aggregated, so the overall clarity of the indicator is not improved.

Even when aggregation rules are clear and unambiguous, because they bundle a host of diverse dimensions into a single score, it is often unclear which of the dimensions is driving a country's score in a particular year. It is often difficult to articulate what an index value of "4" means within the context of any single indicator. Moreover, even if an aggregation rule is explicit and operational, it is never above challenge.
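How sensitive results are to the choice of aggregation rule is easy to demonstrate. In the sketch below, the component scores, weights, and rule names are invented purely for illustration: the same three hypothetical component profiles yield three different country rankings depending on whether components are averaged, treated as a "weakest link," or weighted toward elections.

```python
# Hypothetical component scores (0-10) for three stylized countries.
components = {
    "A": {"elections": 9, "civil_liberties": 3, "rule_of_law": 3},
    "B": {"elections": 5, "civil_liberties": 5, "rule_of_law": 5},
    "C": {"elections": 2, "civil_liberties": 8, "rule_of_law": 8},
}

def aggregate(scores, rule):
    vals = list(scores.values())
    if rule == "mean":               # additive: components compensate for one another
        return sum(vals) / len(vals)
    if rule == "min":                # "weakest link": only as democratic as the worst part
        return min(vals)
    if rule == "elections_heavy":    # a thin, election-weighted definition
        return (0.6 * scores["elections"]
                + 0.2 * scores["civil_liberties"]
                + 0.2 * scores["rule_of_law"])
    raise ValueError(rule)

# Each rule produces a different ordering of the same three countries.
for rule in ("mean", "min", "elections_heavy"):
    ranking = sorted(components, key=lambda c: aggregate(components[c], rule),
                     reverse=True)
    print(rule, ranking)
```

Under the mean, country C leads; under the weakest-link rule, B leads; under the election-weighted rule, A leads. All three rules are defensible readings of "democracy," which is precisely why the committee regards the aggregation problem as unsolvable in principle.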
The Polity index, in Munck and Verkuilen's estimation, "is based on an explicit but nonetheless quite convoluted aggregation rule" (2002:26). Indeed, a large number of possible aggregation rules fit, more or less, with everyday concepts of democracy and thus meet the minimum requirements of conceptual validity. For this reason the committee regards the aggregation problem as the only problem that is unsolvable in principle. There will always be disagreement over how to aggregate the various components of "Big D democracy" (i.e., the one central concept that is assumed to summarize a country's regime status).

Convergent Validity

Given the above, it is no surprise that there is significant disagreement among scholars over how to assign scores for particular countries on

the leading democracy indices. Granted, intercorrelations among various democracy indicators are moderately high, suggesting some basic agreement over what constitutes a democratic state. As shown in the analysis undertaken for the committee that is summarized in Appendix C, the Polity2 variable (combining Democracy and Autocracy) drawn from the Polity dataset and the Freedom House Political Rights Index are correlated at .88 (Pearson's r). Yet when countries with perfect democracy scores (e.g., the United Kingdom and the United States) are excluded from the samples, this intercorrelation drops to .78. And when countries with scores of 1 and 2 on the Freedom House Political Rights scale (the two top scores) are eliminated, the correlation drops further—to .63, implying that roughly 60 percent of the variance in one scale is unrelated to changes in the other scale for countries outside the upper tier of democracies.

The committee similarly finds that correlations between the Freedom House and EIU scores are low when the highest-scoring countries are set aside. For a substantial number of countries—Ghana, Niger, Guinea-Bissau, the Central African Republic, Chad, Russia, Cambodia, Haiti, Cuba, and India—the Freedom House and EIU scores differ so widely that they would be considered democratic by one scale but undemocratic by the other. Indeed, country specialists often take issue with the scoring of countries they are familiar with (e.g., Bowman et al 2005; for more extensive cross-country tests, see Hadenius and Teorell 2005).

Since tracking progress in democracy assistance often depends on accurately measuring modest improvements in democracy, it is particularly distressing that the convergence between different scales is so low in this regard.
While the upper “tails” of the distributions on the major indicators (the fully democratic regimes) are highly correlated, the democracy scores for countries in the upper middle to the bottom ranges are not. The analysis commissioned by the committee (see Appendix C) found that the average correlation between the annual Freedom House and Polity scores for autocratic countries (those with Polity scores less than −6) during 1972-2002 was only .274. Among the partially free countries of the former Soviet Union, the correlation between annual Freedom House and Polity scores for the years 1991-2002 was .295; for the partially free countries in the Middle East, it was .40. In many cases the correlations for specific countries were negative, meaning that the two scales gave opposite readings of whether democracy levels were improving or not. This is a serious problem for USAID and other donors, since they are generally most concerned with identifying the level of democracy, and degrees of improvement, precisely for those countries lying in the middle and bottom of the distribution—countries that are mainly undemocratic or imperfectly democratic—rather than for countries already at the upper end of the democracy scale.
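The range-restriction effect at work here can be illustrated with a small simulation. The data below are synthetic (a latent “true” democracy level observed by two noisy indices); the noise level is an arbitrary assumption, not an estimate from the actual Freedom House or Polity data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent "true" democracy level for n hypothetical countries.
latent = rng.normal(0.0, 1.0, n)

# Two imperfect indices: each observes the latent level plus independent noise.
index_a = latent + rng.normal(0.0, 0.4, n)
index_b = latent + rng.normal(0.0, 0.4, n)

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

r_full = pearson(index_a, index_b)

# Exclude the "upper tier" (here, countries above the latent median) and recompute.
mask = latent < np.median(latent)
r_restricted = pearson(index_a[mask], index_b[mask])

print(f"full-sample r = {r_full:.2f}, restricted r = {r_restricted:.2f}")
```

Even though the two indices agree closely over the full range of countries, restricting attention to the less democratic half removes much of the shared variance and the correlation drops. This is the same mechanism that can make two indices look interchangeable globally yet disagree sharply about the countries donors care most about.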

If there is little agreement on the quality and direction of democracy in countries that lie in between the extremes, it must be concluded that there is relatively little convergent validity across the most widely used democracy indicators. That is, whatever their intent, they are not in fact capturing the same concept. By way of conclusion to this very short review of extant indicators, the committee quotes from another recent review by Jim Vermillion, current executive vice president of the International Foundation for Election Systems:

    Initial work in the measurement of democracy has provided some excellent insights into specific measures and has helped enlighten our view of where underlying concepts related to democracy stand. However, we are far from coming up with a uniform, theoretically cohesive definition of the construct of democracy and its evolution that lends itself easily to statistical estimation/manipulation and meaningful hypothesis testing. (Vermillion 2006:30)

The need for a new approach to this ongoing, and very troublesome, problem of conceptualization and measurement is apparent.

Average Versus Country-Specific Results

It is reasonable to ask: If the existing indicators of democracy have so many problems, how can the committee have any confidence in the findings mentioned in Chapter 1, such as that the number of democracies in the world is rising and that USAID DG assistance has, on average, made a significant positive difference in democracy levels? For that matter, how is it possible for scholars to have undertaken more than two decades of quantitative research on democracy and democratization, correlating various causal factors with shifts in these democracy indicators, with any belief in the validity of their research?
The answer to these questions lies in the very different purposes that democracy indicators must serve for scholarly analysis of average or overall global trends, as against the purposes they must serve to support policy analysis of trends in specific countries. For the former purpose it is acceptable for democracy data to have substantial errors regarding levels of democracy in particular states, as long as the errors are not systematically biased. That is, even a democracy scale that makes substantial errors will be useful for looking at average trends as long as its score for any given country is equally likely to be “too high” or “too low.” Such a scale will state the level of democracy as too high in about half the world’s countries and too low in the other half, but the average level of global democracy overall will be fairly accurate, and scholars can use statistical methods to “separate out” the random errors from the overall trends.
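A toy simulation makes the point concrete. The scale and the error distribution below are invented for illustration; all that matters is that the errors are zero-mean (no systematic bias).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# True democracy levels for n hypothetical countries on a 1-7 style scale.
true_level = rng.uniform(1.0, 7.0, n)

# A flawed index: right on average, but noisy for any individual country.
observed = true_level + rng.normal(0.0, 1.0, n)

per_country_error = np.abs(observed - true_level).mean()
global_average_error = abs(observed.mean() - true_level.mean())

print(f"typical per-country error:   {per_country_error:.2f} points")
print(f"error in the global average: {global_average_error:.3f} points")
```

The per-country errors are large, but because they are equally likely to run high or low they nearly cancel in the global average, which is why such a scale can still support conclusions about overall trends.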

Statistical analyses of democracy that use extant indicators such as Polity or Freedom House are looking for the overall or average effects of various factors—such as economic growth, democracy assistance, or regime types—on democracy. Thus the Finkel et al studies (2007, 2008) described above, which demonstrate a positive impact of various forms of democracy assistance on average levels of democracy while statistically controlling for a host of background, trend, and other causal variables, also controlled for measurement errors in the democracy indices that were assumed to be evenly distributed across countries. What their results tell us is something like the following: In any four-year period, if three countries are examined in which USAID invested an average of $10 million per country per year in DG assistance, those countries’ Freedom House scores will show an overall increase of three points (an average increase of one point per country) at the end of those four years relative to what would have been expected in the absence of USAID DG assistance.12 Let us accept this finding as the best available estimate of the truth (and this study has been subjected to careful peer criticism and its results stand up well)—on average, DG programs do achieve positive results. Yet such measures are not helpful, indeed can even be misleading, if used to evaluate the effects of DG programming in particular countries. For example, say that USAID spends $10 million on various DG programs in each of three countries. Say also that a valid and accurate democracy scale (assuming we are able to set aside the effects of any other factors on levels of democracy) would show that such programs led country 1 to increase by two points on this democracy scale and country 2 to increase by one point, while country 3 saw no change. USAID assistance programs thus achieved substantial success in one case, modest success in another, and no effect in the last.
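The arithmetic of this three-country illustration, together with the flawed-indicator readings taken up in the next paragraph, can be tabulated in a few lines (all numbers are the text’s hypothetical values, not data from any real program):

```python
# True four-year effects of DG programs in three hypothetical countries,
# versus what a flawed (but unbiased) indicator reports, in index points.
true_change = [2, 1, 0]       # countries 1, 2, 3
reported_change = [3, -1, 1]  # same countries, as seen through the noisy index

print("aggregate true effect:    ", sum(true_change))      # 3 points
print("aggregate reported effect:", sum(reported_change))  # also 3 points

# Per country, however, the indicator is badly off even though the totals match.
errors = [r - t for r, t in zip(reported_change, true_change)]
print("per-country errors:", errors)  # [1, -2, 1]
```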
However, the flawed indicator we have instead records that country 1 increased by three points and country 2 decreased by one point, while country 3 increased by one point. On average, this is exactly the same result—overall scores in these countries increased by a total of three points (or an average of one point per country) over four years. Yet if USAID relies on this flawed indicator to estimate the impact of its efforts in specific countries, it will be considerably off. It will greatly overestimate the success of its programs in countries 1 and 3 and wrongly conclude that its programs were associated with a decline in democracy in country 2—all of this just because of random errors in the way that current democracy indicators track small movements or middle-range levels of democracy in particular countries.

12 Finkel et al (2007, 2008) found essentially the same results with Polity scores as with Freedom House scores, so this discussion holds for both indicators.

If USAID were

then to ramp up and spread the program in country 1, thinking it an overwhelming (rather than modest) success, and also spread the programs in country 3 that “seemed” to produce a success, while halting the programs that apparently failed to stem democracy decline in country 2, it could be making severe mistakes. Thus the errors found in current widely used democracy indicators, while still allowing them to serve well enough for purposes of scholarly research on average effects of various factors on democracy or for charting overall democracy trends, do not serve USAID at all well for the policy purposes of determining the effects of specific programs in particular countries.13 For this reason the rest of this chapter lays out an approach that the committee believes will be more fruitful for developing useful indicators of democratic change. Also for this reason, throughout this report methods are stressed for helping USAID determine the effects of its programs using more concrete indicators of the immediate policy outcomes of those programs, rather than macrolevel indicators of national levels of democracy.

A Disaggregated Approach to Measurement at the Country Level

Given the multiple difficulties encountered by Freedom House, Polity, ACLP, EIU, and other extant indicators of democracy, one might reasonably conclude that the stated task simply cannot be accomplished. That is, one cannot assign a single point score to a particular country at a particular point in time, expecting that this score will accurately capture all the nuances of democracy and be empirically valid through time and across space. On this view, the goal of precise numerical comparison is impossible. While this conclusion may seem compelling, at least initially, one should also consider the costs of not comparing in a systematic fashion.
Without some way of analyzing the quality of democracy through time and across countries, there is no way to mark progress or regress on this vital factor, to explain it, or to affect its future course. To gain knowledge of the world, and hence to make effective policy interventions, comparisons must be made. And to compare with precision, numerical scores must be assigned to countries according to the quality of democracy they supposedly possess.14 How, then—given the shortcomings of extant democracy indices—might this difficult task be handled more effectively?

13 As discussed in Chapter 4, when scholars undertake case studies of democratization in a particular country, they generally do not bother with indicators such as Polity or Freedom House to describe trends in that country, but instead focus on institutional or behavioral changes that they document in detail and seek the causes or consequences of those observed changes.

The committee proposes that the key to developing a more accurate and useful empirical approach to democracy—as to other large and unwieldy subjects (e.g., “governance”)—is to be found in greater disaggregation (Coppedge, forthcoming). Rather than focusing on how, precisely, to define democracy and attempting to arrive at a summary score (à la Freedom House or Polity), the committee proposes to focus on developing the most transparent, independent, and valid measures for the underlying dimensions of this concept. The key point is that this approach to data gathering takes place at a much lower level of abstraction than Big D democracy.

Previous Efforts at Disaggregation

The idea of disaggregating measures of democracy and governance is of course not entirely new. As mentioned, the Polity IV dataset includes six component variables, each measured separately. Other precedents include the Handbook of Democracy and Governance Program Indicators (USAID 1998), the Bertelsmann Transformation Index (Bertelsmann Foundation 2003), the Database of Political Institutions (Beck et al 2000), the EIU index (Kekic 2007), and the World Bank governance indicators (Kaufmann et al 2006). In some areas—for example, free press (Freedom House 2006) or elections (Munck 2006)—disaggregated topics have been successfully measured on a global scale. In these and other instances, the committee suggests building on, or simply incorporating, previous efforts. However, the usual approach to disaggregation is flawed, either because the resulting indicators are still highly abstract and hence difficult to operationalize (e.g., Polity IV) and/or because the underlying components, while conceptually distinct, are gathered in such a way as to compromise their independence.
Consider the six World Bank governance indicators—government effectiveness, voice and accountability, control of corruption, rule of law, regulatory burden, and political instability—which involve very similar underlying components (Landman 2003, Kurtz and Schrank 2007, Thomas 2007). Issues of corruption, for example, figure in several of the six dimensions.

14 To some the assignment of a point score may seem a prime example of misplaced precision. Yet the lack of precision inherent in such cross-country comparisons can be handled by including an estimate of uncertainty along with the point estimate so that users of the data will not be misled.

It seems likely that overall perceptions on the part of

survey respondents (whether expert or civilian) as to “how country A is doing” color many of the survey responses on which these indicators depend, insofar as survey questions tend to be quite broad. This sort of disaggregation does not achieve the intended purpose. Indeed, it is often argued that the six Kaufmann variables are best regarded as measures of the same thing, and they are therefore often combined in empirical analyses. A similar problem besets other efforts at disaggregation, such as the recently released Freedom House measures of civil liberties and political rights, which are broken down into seven components: electoral process, political pluralism and participation, functioning of government, freedom of expression and belief, associational and organizational rights, rule of law, and personal autonomy and individual rights (Freedom House 2007). Again, the extremely high correlations among these components (>.87 on all paired comparisons; see Appendix C), along with the vagueness of the questions and coding procedures, prompt us to wonder whether these are truly independent measures of democracy or simply different ways of assessing a country’s overall gestalt. The EIU index does a slightly better job of disaggregating its component variables, which are reported for five dimensions: electoral process and pluralism, civil liberties, the functioning of government, political participation, and political culture. Correlations are still quite high but not outrageously so. Moreover, the specificity of the questions makes the claim of independence among these five variables plausible. Unfortunately, the committee was not able to get access to the data for the 60 specific questions that compose the five dimensions. It is quite possible that these underlying data are regarded by EIU as proprietary. If so, the index will have much less utility for policy and especially scholarly purposes.
Meanings of Democracy

We turn now to the vexing problem of definition, to which we have already alluded. Democracy means rule by the people, and this core attribute has remained relatively constant since the term was invented by the Greeks. Yet the notion of popular sovereignty is exceedingly vague. Thus, it may be necessary to adopt a more specific definition if the term is to have any practical utility. Unfortunately, in articulating an operational definition of democracy, considerable disagreement is encountered both within and outside the academic community. These disagreements are partly the product of cross-cultural differences (Schaffer 1998). More fundamentally, they are a product of the multiple uses that have developed over many centuries (Dunn 2006). For current purposes the committee is primarily concerned with the concept as it might be applied to populous communities, that is, to nation-states, regions, and large municipalities. In this context the term is nowadays frequently identified with political contestation (also often called competition), as secured through an electoral process by which leaders are selected. Where effective competition exists, democracy is also said to exist (Schumpeter 1942, Alvarez et al 1996). For many writers, competition is the sine qua non of democracy. This may be regarded as a minimalist (or “thin”) definition of the concept. Although there is general consensus about the importance of political competition, many other attributes have also been understood as defining features of democracy. These include liberty/freedom, accountability, responsiveness, deliberation, participation, political equality, and social equality. Each of these attributes may in turn be broken down into lower-level components, so the field of potential attributes is indeed quite vast. By adding these attributes to the minimal definition—political competition—various maximalist, or ideal-type, definitions of the concept can be constructed. Arguably, a true, complete, or full democracy should possess all of the foregoing definitional attributes, and each should be fully developed. Unfortunately, the committee sees no way of resolving the choice between minimal and maximal definitions of democracy. The first seems too small; it excludes too much. But the latter is clearly too large and unwieldy to be serviceable; it is, indeed, indistinguishable from good governance. Moreover, the many possible resolutions of this dilemma that lie in between minimal and maximal definitions cannot avoid the problem of arbitrariness: Why should some elements of democracy (as that concept is commonly employed) be included, while others are excluded? As a general rule, stipulated definitions tend to be poorly bounded, imprecise, or arbitrary (i.e., they violate ordinary usages of the concept and therefore do not “make sense”).
The committee realizes that definitions must often be stipulated. But if the resulting indicators are not perceived as legitimate by policymakers and citizens on a global level, they are unlikely to perform the work that USAID and others expect of them. An illegitimate index, particularly one that is considered arbitrary and involves excessive judgment on the part of coders, is easy to dismiss. Thus, although one of the original tasks given to this committee by USAID was to develop an “initial operational definition of democracy and governance,” as discussed above, the committee has concluded, after extensive consultation among committee members and with leading authorities on democracy, that it is not possible for it to do so. The challenges facing any particular committee of scholars in producing a definition that would command wide assent, as outlined above, are simply too great.

Thirteen Dimensions of Democracy

The committee’s proposed solution is to suggest, as a starting point for further study, a disaggregation of the concept of democracy down to a level where greater consensus over matters of definition, along with greater precision of measurement, may be obtained. In this way the committee hopes to sidestep the eternally vexing question of what “democracy” means. Having considered the matter at some length and having consulted with distinguished experts on the subject, the committee resolved that there are at least 13 dimensions of democracy that are independently assessable (i.e., they do not reduce to some overall conception of “how country A is doing”):

1. National Sovereignty: Is the nation sovereign?
2. Civil Liberty: Do citizens enjoy civil liberty in matters pertaining to politics?
3. Popular Sovereignty: Are elected officials sovereign relative to nonelected elites?
4. Transparency: How transparent is the political system?
5. Judicial Independence: How independent and empowered is the judiciary?
6. Checks on the Executive: Are there effective checks on the executive?
7. Election Participation: Is electoral participation unconstrained and extensive?
8. Election Administration: Is the administration of elections fair?
9. Election Results: Are the results of elections accepted by the citizenry as the outcome of a democratic process?
10. Leadership Turnover: Is there regular turnover in the top political leadership?
11. Civil Society: Is civil society dynamic, independent, and politically active?
12. Political Parties: Are political parties well institutionalized?
13. Subnational Democracy: How decentralized is political power, and how democratic is politics at subnational levels?

The committee realizes that most of these dimensions are continuous (matters of degree) rather than dichotomous (either/or).
Even so, it seems reasonable to refer to them—loosely—as potential necessary conditions of a fully democratic polity. Further details regarding the 13 components of the index, along with some initial suggestions for how to measure them, are discussed in

Appendix C. Here, the reader’s attention is called to the following general points. First, the criteria applying to different dimensions sometimes conflict with one another. For example, strong civil society organizations representing one social group may pressure government to restrict other citizens’ civil liberties (Levi 1996, Berman 1997). This is implicit in democracy’s multidimensional character: good things do not always go together. Second, some dimensions are undoubtedly more important in guaranteeing a polity’s overall level of democracy than others. However, since resolving this issue depends on which overall definition of democracy is adopted and on various causal assumptions that are difficult to prove, the committee is not making judgments on this issue. Third, it is important to note that dimensions of democracy are not always dimensions of good governance. Thus, inclusion of an attribute on this list does not imply that the quality of governance in countries with this attribute will be higher than in those without it. For example, some credibly democratic countries (Japan after World War II, the United States in the nineteenth century) have seen enormous corruption scandals. Of course, evaluating whether an attribute of democracy improves the quality of governance hinges on how one chooses to define the latter, about which much has been written but little agreement can be found (Hewitt de Alcantara 1998, Pagden 1998, Knack and Manning 2000). The committee leaves aside the question of how good governance might be defined, noting only that some writers consider democracy an aspect of good governance, some consider good governance an aspect of democracy, and still others prefer to approach these terms as separate and largely independent (nonnested) concepts. Finally, the committee does not rule out the possibility of alterations to this list of 13.
The list might be longer (including additional components) or shorter (involving a consolidation of categories). There is nothing sacrosanct about this particular list of dimensions. Indeed, the committee does not assume that a truly comprehensive set of dimensions is possible, given the extensive and overlapping set of meanings that have been attached to this multivalent term. However, the committee believes strongly that these 13 dimensions are a plausible place to begin. In any case, whether the index has 13 components or some other (smaller or larger) number is less significant for present purposes than the approach itself. Note that if one begins with a disaggregated set of indicators, it is easy to aggregate upward to create more consolidated concepts. One may also aggregate all the way up to Big D democracy, à la Polity and Freedom House. However, the committee does not propose aggregation rules for this purpose, leaving it as a matter for future scholars and policymakers to decide.
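How much the choice of aggregation rule matters can be shown with a toy example. The dimension scores below are invented, and the two rules shown (a simple mean and a “weakest link” minimum) are merely two of many defensible options; neither is endorsed here.

```python
# Hypothetical scores (0-10) for two imaginary countries on five illustrative
# dimensions; neither the countries nor the numbers correspond to real cases.
scores = {
    "country_x": [8, 8, 8, 2, 8],  # strong overall, one badly deficient dimension
    "country_y": [6, 6, 6, 6, 6],  # uniformly middling
}

# Two defensible aggregation rules applied to the same underlying data:
mean_rule = {c: sum(v) / len(v) for c, v in scores.items()}
min_rule = {c: min(v) for c, v in scores.items()}  # "weakest link" view

print("mean rule:", mean_rule)  # country_x ranks higher (6.8 vs 6.0)
print("min rule: ", min_rule)   # country_y ranks higher (6 vs 2)
```

The two rules reverse the countries’ ranking, which is why transparency about the underlying dimension scores, rather than a single proprietary summary number, is the crucial feature of the proposed approach.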

Potential Benefits of Disaggregation

No aggregate democracy index offers a satisfactory scale for purposes of country assessment or for answering general questions pertaining to democracy. Thus, the committee strongly supports USAID’s inclination to focus its efforts on a more disaggregated set of indicators as a way of capturing the diverse components of this key concept while overcoming difficulties inherent in measures that attempt to summarize, in a single statistic, a country’s level of democracy (à la Freedom House or Polity). To be sure, before undertaking a venture of this scope and scale, USAID will want to consider carefully the added value that might be delivered by a new set of democracy indicators. In the committee’s view, conceptual disaggregation offers multiple advantages. Even so, this approach will not solve every problem, and the committee does not wish to overstate the potential rewards its proposal could bring. The first advantage of disaggregation is the prospect of identifying concepts on whose definitions and measurements most people can agree. While the world may never agree on whether the overall level of democracy in India can be summarized as a “4” or a “5” (on some imagined scale), it may yet agree on more specific scores along 13 (or so) dimensions for the world’s largest democracy. The importance of creating consensus on these matters can hardly be overemphasized. The purpose of a democracy index is not simply to guide policymakers and policymaking bodies such as USAID, the World Bank, and the International Monetary Fund. Nor could it be so constrained, even if that were desirable. As soon as an index becomes established and begins to influence international policymakers, it also becomes fodder for dispute in other countries around the world. A useful index is one that gains the widest legitimacy.
A poor index is one that is perceived as a tool of Western influence or a mask for the forces of globalization (as Freedom House is sometimes regarded). Indeed, because current democracy scales are produced by proprietary scalings and aggregations by specific organizations rather than by objective measurements, those organizations are often subjected to “lobbying” by countries that wish to shift their scores. The hope is that by disaggregating the components of democracy down to levels that are more operational and less debatable, it might be possible to garner a broader consensus around this vexed subject. Countries would know, more precisely, why they received the scores they did. They would also know, more precisely, what areas remained for improvement. Plausibly, such an index might play an important role in the internal politics of countries around the world, akin to the role of Transparency International’s Corruption Perceptions Index (Transparency International 2007). A second advantage is the degree of precision and differentiation that

a disaggregated index offers relative to the old-fashioned “Big D” concept of democracy. Using the committee’s proposed index, a single country’s progress and/or regress could be charted through time, allowing for subtle comparisons that escape the purview of highly aggregated measures such as Freedom House and Polity. One would be able to specify which facets of a polity have improved and which have remained stagnant or declined. This means that the longstanding question of regime transitions would be amenable to empirical tests. When a country transitions from autocracy to democracy (or vice versa), which elements come first? Are there common patterns, a finite set of sequences, prerequisites? Or is every transition, in some sense, unique? Similarly, a disaggregated index would allow policymakers to clarify how, specifically, one country’s democratic features differ from others in the region or across regions. While Big D democracy floats hazily over the surface of politics, the dimensions of a disaggregated index are comparatively specific and precise. Contrasts and comparisons may become correspondingly more acute.

Applying the Proposed Index to Democracy Assistance Programming

It is important to remember that, although the committee’s general goal is to provide a path to democracy measures that will be useful to policymakers and citizens alike, the specific charge is to assist USAID. This means the index must be useful for particular policy purposes. Consider the problem of assessment. How can policymakers in Washington and in the field missions determine which aspects of a polity are most deficient and therefore in need of assistance? While Freedom House and Polity offer only one or several dimensions of analysis (and these are highly correlated and difficult to distinguish conceptually), the committee’s proposed index would begin with 13 such parameters.
It seems clear that for assessing the potential impact of programs focused on different elements of a polity (e.g., rule of law, civil society, governance, and elections—the four subunits of the DG office at USAID), it is helpful to have indicators that offer a differentiated view of the subject. These same features of the proposed index are equally advantageous for causal analysis, which depends on the identification of precise mechanisms, intermediate factors that are often ignored by macrolevel cross-national studies. Which aspects of democracy foster (or impede) economic growth? Which aspect of democracy is most affected by specific democracy promotion efforts? Whether democracy is looked on as an independent (causal) variable or as a dependent (outcome) variable, we need to know which aspect of this complex construct is at play. Policymakers also wish to know what effect their policy interventions

might have on a given country’s quality of democracy (or on a whole set of countries, considered as a sample). There is little hope of answering this question in a definitive fashion if democracy is understood only at a highly aggregated level. The interventions of democracy donors are generally too small relative to the outcome to draw plausible causal inferences between USAID policies, on the one hand, and country A’s level of democracy (as measured by Freedom House or Polity), on the other. However, it is plausible—though admittedly still quite difficult—to estimate the causal effects of a project focused on a particular element of democracy if that element can be measured separately. Thus, USAID’s election-centered projects might be judged against several specific indicators that measure the characteristics of elections. This is plausible and perhaps quite informative (though, to be sure, many factors other than USAID have an effect on the quality of elections in a country). The bottom line is this: If policymakers cannot avoid reference to country-level outcome indicators, they will be much better served if these indicators are available at a disaggregated meso level. All of these features should enhance the utility of a disaggregated index for policymakers. Indeed, the need for a differentiated picture of democracy around the world is at least as important for policymakers as it might be for academics. Both are engaged in a common enterprise, an enterprise that has thus far been impeded by the lack of a sufficiently discriminating measurement instrument. Consider briefly the problem that would arise for macroeconomists, finance ministers, and members of the World Bank and International Monetary Fund if they possessed only one highly aggregated indicator of economic performance.
As good as GDP is (and there are, of course, considerable difficulties), it would not go very far without the existence of additional variables that measure the components of this macro-level concept. There is a similar situation in the field of political analysis. We have a crude sense of whether countries are democratic, undemocratic, or in between (e.g., “partly free” or partially democratic), but we have no systematic knowledge of how a country should be scored on the various components of democracy.

Since a disaggregated index can be aggregated in a variety of ways, developing a disaggregated index is advantageous even if a single aggregated measure is sometimes desired for policy purposes. Indeed, it is expected that scholars and policymakers will compose summary scores from the underlying data provided by this index. However, the benefit of beginning with the same underlying data (along each of the identified dimensions) is that the process of aggregation is rendered transparent. Any composite index based on these data would be forced to reveal how the summary score for a particular country in a particular year was determined. Any critic of the proposed score, or of the summary index at large, would be able to contest the aggregation rules used by the author. The methodology is “open source” and thus subject to revision and critique. Further, any causal or descriptive arguments reached on the basis of a summary indicator could be replicated with different aggregation rules. If the results were not robust, it might be concluded that the findings were contingent on a particular way of putting together the components of democracy. In short, both policy and scholarly discourse might be much improved by a disaggregated index, even if the ultimate objective involves the composition of a highly aggregated index of Big D democracy.

Funding and Management

Readers of this document might wonder why, if the potential benefits of a disaggregated democracy index are so great, one has not yet been developed. There are two simple answers to this question. First, producing such an index would be a time-consuming and expensive proposition, requiring the participation of many researchers. It would not be easy. Second, although the downstream benefits are great, no single scholar or group of scholars has the resources or the incentives to produce such an index.15 (Academic disciplines do not generally reward members who labor for years to develop new data resources.) Consequently, academics have continued to use—and complain about—Polity, Freedom House, ACLP, and other highly aggregated indices. Policymakers will have to step into this leadership vacuum if they expect the problem of faulty indicators to be solved. Precedents for such support can be found in other social science fields.
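The aggregation argument above lends itself to a brief illustration. The sketch below is hypothetical: the component names, scores, and aggregation rules are invented for illustration and are not drawn from any existing or proposed index. It shows how the same disaggregated component scores yield different summary scores under different, equally defensible rules:

```python
# Hypothetical sketch: the same disaggregated component scores can be
# combined under different aggregation rules, yielding different summary
# scores. Transparent rules let critics contest and replicate any
# composite index built from the underlying data.

def aggregate(components, rule="mean"):
    """Combine component scores (each on a 0-1 scale) into one summary score."""
    values = list(components.values())
    if rule == "mean":      # additive: strong components offset weak ones
        return sum(values) / len(values)
    if rule == "min":       # weakest link: the lowest component sets the score
        return min(values)
    if rule == "product":   # multiplicative: one weak component pulls the score down sharply
        result = 1.0
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown aggregation rule: {rule}")

# Invented component scores for an imaginary country-year.
country = {"elections": 0.9, "civil_liberties": 0.7, "rule_of_law": 0.3}

for rule in ("mean", "min", "product"):
    print(rule, round(aggregate(country, rule), 3))
```

A robustness check of the kind described above would rerun the same descriptive or causal analysis under each rule; if the substantive conclusions change, they are contingent on the choice of aggregation rule.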
USAID served as a principal funder for demographic and health surveys that vastly enhanced knowledge of public health throughout the developing world.16 The State Department and the Central Intelligence Agency served as principal funders of the Correlates of War data collection project.17 On a much smaller scale, the State Department provides ongoing support for the Polity project.

To be sure, the entire range of indicators proposed here is probably larger than any single funder is willing or able to undertake. It is highly advisable that several funders share responsibility for the project so that its financial base is secure into the future and so that the project is not wholly beholden to a single funder, a situation that might raise questions about project independence. Preferably, some of these funders would be non-American (e.g., Canadian, European, or Japanese agencies; the European Union; or international organizations like the World Bank or the United Nations Development Program). Private foundations (e.g., Open Society Institute, Google Foundation) might also be tapped. The committee conceptualizes this project as a union of many forces. This makes project management inevitably more complicated. However, the sorts of difficulties encountered here, insofar as they constitute a deliberative process about the substantive issues at stake, may enhance the value of the resulting product. Certainly, it will enhance its legitimacy.

Another possibility is that different funders might undertake to develop (or take responsibility for) different dimensions of the index, thus apportioning responsibility. It is preferable, in any case, that some level of supervision be maintained at the top so that the efforts are well coordinated. Coordination involves not only logistical issues (sharing experts in the field, software, and so forth) but also, more importantly, the development of indicators that are mutually exclusive (nonoverlapping) so that the original purpose of the project—disaggregation—is maintained. Note that several of the above-listed components might be employed across several dimensions, requiring coordination on the definition and collection of that variable.

15  Note that while scholars who are discontented with the leading indicators of democracy periodically recode countries of special concern to them (e.g., McHenry 2000; Berg-Schlosser 2004a, b; Acuna-Alfaro 2005; Bowman et al. 2005), this recoding is generally limited to a small set of countries and/or a short period of time.
16  Surveys and findings are described on the USAID Web site: http://www.measuredhs.com/.
17  Information about the project may be found at http://www.correlatesofwar.org/.
As a management structure, the committee proposes an advisory group to be headed by academics—with some remuneration, depending on the time requirements, and suitable administrative support—in partnership with the policy community.18 This partnership is crucial, for any widely used democracy assessment tool should have both a high degree of academic credibility and legitimacy among policymakers. Major shortcomings of previous efforts to develop indices of democracy and governance resulted from insufficient input from methodologists and subject specialists or lack of broad collaboration across different stakeholders.

For this wide-ranging proposal, experts on each of the identified dimensions will be needed. Their ongoing engagement is essential to the success of the enterprise. Moreover, it is important to solicit help widely within the social science disciplines so that decisions are not monopolized by a few (with perhaps quirky judgments). There are several possibilities for a convening body, including the professional associations of political science, economics, and sociology (the American Political Science Association, American Economic Association, and American Sociological Association) or a consortium of universities.

18  The Utstein Partnership, a group formed in 1999 by the ministers of international development from the Netherlands, Germany, Norway, and the United Kingdom to formalize their cooperation, is an example of this possible approach applied to a different problem. The U4 Anti-Corruption Resource Centre assists donor practitioners in addressing corruption challenges more effectively by providing a variety of online resources. See http://www.u4.no/about/u4partnership.cfm.

Conclusions

This chapter has reviewed the most widely used indicators that measure “democracy” and arrived at these key findings:

• The concept of democracy cannot at present be defined in an authoritative (nonarbitrary) and operational fashion. It is an inherently multidimensional concept, and there is little consensus over its attributes. Definitions range from minimal—a country must choose its leaders through contested elections—to maximal—a country must have universal suffrage, accountable and limited government, sound and fair justice and extensive protection of human rights and political liberties, and economic and social policies that meet popular needs. Moreover, the definition of democracy is itself a moving target; definitions that would have seemed reasonable at one time (such as describing the United States as a democracy in 1900 despite no suffrage for women and few minorities holding office) are no longer considered reasonable today. To obtain a more reliable and credible method of tracking democratic change to guide USAID DG programming, USAID should foster an effort to develop disaggregated sectoral-level measures of democratic governance. This would likely have to involve numerous parties to attain wide acceptance.

• Existing empirical indicators of democracy are flawed. The flaws extend to problems of definition and aggregation, imprecision, measurement errors, poor data coverage, and a lack of convergent validity. These existing measures are useful to identify whether countries are fully democratic, fully autocratic, or somewhere in between.
They are not reliable, however, as a guide for tracking modest improvements or declines in democracy within a country over the period of time in which most DG projects operate.

• While the United States, other donor governments, and international agencies that are making decisions about policy in the areas of health or economic assistance are able to draw on extensive databases that are compiled and updated at substantial cost by government or multilateral agencies mandated to collect such data (e.g., World Bank, World Health Organization, Organization for Economic Cooperation and Development), no comparable source of data on democracy currently exists. Data on democracy are instead currently compiled by various individual academics on irregular and shoestring budgets, or by nongovernmental organizations or commercial publishers, using different definitions and indicators of democracy.

These findings lead the committee to make a recommendation that we believe would significantly improve USAID’s (and others’) ability to track countries’ progress and make the type of strategic assessments that will be most helpful for DG programming.

• USAID and other policymakers should explore making a substantial investment in the systematic collection of democracy indicators at a disaggregated, sectoral level—focused on the components of democracy rather than (or in addition to) the overall concept. If they wish to have access to data on democracy and democratization comparable to that relied on by policymakers and foreign assistance agencies in the areas of public health or trade and finance, a substantial government or multilateral effort to improve, develop, and maintain international data on levels and detailed aspects of democracy would be needed. This should not only involve multiple agencies and actors in efforts to initially develop a widely accepted set of sectoral data on democracy and democratic development but should seek to institutionalize the collection and updating of democracy data for a broad clientele, along the lines of the economic, demographic, and trade data collected by the World Bank, United Nations, and International Monetary Fund.

While creating better measures at the sectoral level to track democratic change is a long-term process, there is no need to wait on such measures to determine the impact of USAID’s DG projects. USAID has already compiled an extensive collection of policy-relevant indicators to track specific changes in government institutions or citizen behavior, such as levels of corruption, levels of participation in local and national decision making, quality of elections, professional level of judges or legislators, or the accountability of the chief executive.
Since these are, in fact, the policy-relevant outcomes that are most plausibly affected by DG projects, the committee recommends that measurement of these factors rather than sectoral-level changes be used to determine whether the projects are having a significant impact in the various elements that compose democratic governance.

REFERENCES

Acuna-Alfaro, J. 2005. Measuring Democracy in Latin America (1972-2002). Working Paper No. 5, Committee on Concepts and Methods (C&M) of the International Political Science Association. Mexico City: Centro de Investigacion y Docencia Economicas.

Adcock, R., and Collier, D. 2001. Measurement Validity: A Shared Standard for Qualitative and Quantitative Research. American Political Science Review 95(3):529-546.
Altman, D., and Pérez-Liñán, A. 2002. Assessing the Quality of Democracy: Freedom, Competitiveness and Participation in Eighteen Latin American Countries. Democratization 9(2):85-100.
Alvarez, M., Cheibub, J.A., Limongi, F., and Przeworski, A. 1996. Classifying Political Regimes. Studies in Comparative International Development 31(2):3-36.
Arat, Z.F. 1991. Democracy and Human Rights in Developing Countries. Boulder: Lynne Rienner.
Beck, T., Clarke, G., Groff, A., Keefer, P., and Walsh, P. 2000. New Tools and New Tests in Comparative Political Economy: The Database of Political Institutions. Policy Research Working Paper 2283. Washington, DC: World Bank, Development Research Group. For further information, see http://www.worldbank.org/research/bios/pkeefer.htm and the Research Group Web site http://econ.worldbank.org/.
Beetham, D., ed. 1994. Defining and Measuring Democracy. London: Sage.
Berg-Schlosser, D. 2004a. Indicators of Democracy and Good Governance as Measures of the Quality of Democracy in Africa: A Critical Appraisal. Acta Politica 39(3):248-278.
Berg-Schlosser, D. 2004b. The Quality of Democracies in Europe as Measured by Current Indicators of Democratization and Good Governance. Journal of Communist Studies and Transition Politics 20(1):28-55.
Berman, S. 1997. Civil Society and the Collapse of the Weimar Republic. World Politics 49(3):401-429.
Bertelsmann Foundation. 2003. Bertelsmann Transformation Index: Towards Democracy and a Market Economy. Gütersloh, Germany: Bertelsmann Foundation.
Boix, C., and Rosato, S. 2001. A Complete Data Set of Political Regimes, 1800-1999. Chicago: University of Chicago, Department of Political Science.
Bollen, K.A. 1980. Issues in the Comparative Measurement of Political Democracy. American Sociological Review 45:370-390.
Bollen, K.A. 1993. Liberal Democracy: Validity and Method Factors in Cross-National Measures. American Journal of Political Science 37(4):1207-1230.
Bollen, K.A., and Paxton, P. 2000. Subjective Measures of Liberal Democracy. Comparative Political Studies 33(1):58-86.
Bowman, K., Lehoucq, F., and Mahoney, J. 2005. Measuring Political Democracy: Case Expertise, Data Adequacy, and Central America. Comparative Political Studies 38(8):939-970.
Casper, G., and Tufis, C. 2003. Correlation Versus Interchangeability: The Limited Robustness of Empirical Findings on Democracy Using Highly Correlated Data Sets. Political Analysis 11:196-203.
Coppedge, M. 2007. Presentation to Democracy Indicators for Democracy Assistance. Boston University, January 26.
Coppedge, M. Forthcoming. Approaching Democracy. Cambridge: Cambridge University Press.
Coppedge, M., and Reinicke, W.H. 1990. Measuring Polyarchy. Studies in Comparative International Development 25:51-72.
Dunn, J. 2006. Democracy: A History. New York: Atlantic Monthly Press.
Elkins, Z. 2000. Gradations of Democracy? Empirical Tests of Alternative Conceptualizations. American Journal of Political Science 44(2):287-294.
Finkel, S.E., Pérez-Liñán, A., and Seligson, M.A. 2007. The Effects of U.S. Foreign Assistance on Democracy Building, 1990-2003. World Politics 59(3):404-439.
Finkel, S.E., Pérez-Liñán, A., Seligson, M.A., and Tate, C.N. 2008. Deepening Our Understanding of the Effects of U.S. Foreign Assistance on Democracy Building: Final Report. Available at: http://www.LapopSurveys.org.

Foweraker, J., and Krznaric, R. 2000. Measuring Liberal Democratic Performance: An Empirical and Conceptual Critique. Political Studies 48(4):759-787.
Freedom House. 2006. Freedom of the Press 2006: A Global Survey of Media Independence. New York: Freedom House.
Freedom House. 2007. Methodology, Freedom in the World 2007. New York: Freedom House. Available at: http://www.freedomhouse.org/template.cfm?page=351&ana_page=333&year=2007. Accessed on September 5, 2007.
Gasiorowski, M.J. 1996. An Overview of the Political Regime Change Dataset. Comparative Political Studies 29(4):469-483.
Gleditsch, K.S., and Ward, M.D. 1997. Double Take: A Re-examination of Democracy and Autocracy in Modern Polities. Journal of Conflict Resolution 41:361-383.
Hadenius, A. 1992. Democracy and Development. Cambridge: Cambridge University Press.
Hadenius, A., and Teorell, J. 2005. Assessing Alternative Indices of Democracy. Committee on Concepts and Methods Working Paper Series. Mexico City: Centro de Investigacion y Docencia Economicas (CIDE).
Hewitt de Alcantara, C. 1998. Uses and Abuses of the Concept of Governance. International Social Science Journal 11(155):105-113.
Kaufmann, D., Kraay, A., and Mastruzzi, M. 2006. Governance Matters V: Governance Indicators for 1996-2005. Washington, DC: World Bank.
Kekic, L. 2007. The Economist Intelligence Unit’s Index of Democracy. Available at: http://www.economist.com/media/pdf/DEMOCRACY_INDEX_2007_v3.pdf. Accessed on February 23, 2008.
Knack, S., and Manning, N. 2000. Why Is It So Difficult to Agree on Governance Indicators? Washington, DC: World Bank.
Kurtz, M.J., and Schrank, A. 2007. Growth and Governance: Models, Measures, and Mechanisms. Journal of Politics 69:2.
Landman, T. 2003. Map-Making and Analysis of the Main International Initiatives on Developing Indicators on Democracy and Good Governance. Unpublished manuscript, University of Essex.
Levi, M. 1996. Social and Unsocial Capital: A Review Essay of Robert Putnam’s Making Democracy Work. Politics & Society 24:145-155.
McHenry, D.E. 2000. Quantitative Measures of Democracy in Africa: An Assessment. Democratization 7(2):168-185.
Moon, B.E., Birdsall, J.H., Ciesluk, S., Garlett, L.M., Hermias, J.H., Mendenhall, E., Schmid, P.D., and Wong, W.H. 2006. Voting Counts: Participation in the Measurement of Democracy. Studies in Comparative International Development 41(2):3-32.
Munck, G.L. 2006. Standards for Evaluating Electoral Processes by OAS Election Observation Missions. Paper prepared for Organization of American States.
Munck, G.L., and Verkuilen, J. 2002. Conceptualizing and Measuring Democracy: Alternative Indices. Comparative Political Studies 35(1):5-34.
Pagden, A. 1998. The Genesis of Governance and Enlightenment Conceptions of the Cosmopolitan World Order. International Social Science Journal 50(1):7-15.
Reich, G. 2002. Categorizing Political Regimes: New Data for Old Problems. Democratization 9(4):1-24.
Schaffer, F.C. 1998. Democracy in Translation: Understanding Politics in an Unfamiliar Culture. Ithaca, NY: Cornell University Press.
Schumpeter, J.A. 1942. Capitalism, Socialism and Democracy. New York: Harper.
Tetlock, P. 2005. Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
Thomas, M.A. 2007. What Do the Worldwide Governance Indicators Measure? Unpublished manuscript, School of Advanced International Studies, Johns Hopkins University.

Transparency International. 2007. Corruption Perceptions Index. Available at: http://www.transparency.org/policy_research/surveys_indices/cpi. Accessed on September 5, 2007.
Treier, S., and Jackman, S. 2003. Democracy as a Latent Variable. Paper presented at the Political Methodology meetings, University of Minnesota, Minneapolis-St. Paul.
USAID (U.S. Agency for International Development). 1998. Handbook of Democracy and Governance Program Indicators. Washington, DC: Center for Democracy and Governance. Available at: http://www.usaid.gov/our_work/democracy_and_govenance/publications/pdfs/pnacc390.pdf. Accessed on August 1, 2007.
Vanhanen, T. 2000. A New Dataset for Measuring Democracy, 1810-1998. Journal of Peace Research 37:251-265.
Vermillion, J. 2006. Problems in the Measurement of Democracy. Democracy at Large 3(1):26-30.
