Appendix C

Measuring Democracy

This appendix contains three sections to support and expand the material in Chapter 3. The statistical analysis presented in the first section was carried out by Ramziya Shakirova, a graduate student at George Mason University, on behalf of the committee. The second section contains the agenda and participants list for a committee workshop, "Democracy Indicators for Democracy Assistance," held at Boston University in January 2007. The last section is an "Outline for a Disaggregated Meso-level Democracy Index" by John Gerring, which contains additional material related to the index proposed in Chapter 3.

STATISTICAL ANALYSIS

Spearman vs. Pearson Coefficients

The comparison of Spearman and Pearson correlation coefficients shows that, on the whole, they are quite similar. In some cases, however, the Spearman correlation coefficients are not significant (probably the Pearson coefficients are not either, but Stata does not display a significance level for the Pearson coefficients), which means that for those countries there is no statistical evidence of association between the Freedom House (FH) and Polity scores. The countries in the "Partially Free" group with insignificant correlations are:

Cambodia: Pearson is 0.3281; Spearman is 0.3453, not significant
Armenia: Pearson is 0.1632; Spearman is 0.1615, not significant
Azerbaijan: Pearson is -0.0808; Spearman is 0.2864, not significant
Moldova: Pearson is 0.6019; Spearman is 0.4550, not significant
Ukraine: Pearson is -0.3344; Spearman is -0.3015, not significant
Afghanistan: Pearson is 0.1832; Spearman is 0.2388, not significant
Egypt: Pearson is -0.2036; Spearman is -0.0889, not significant
Yemen: Pearson is -0.0096; Spearman is -0.2060, not significant
Tunisia: Pearson is -0.0265; Spearman is -0.0452, not significant
Mexico: Pearson is 0.4544; Spearman is 0.2681, not significant
Greece: Pearson is 0.896; Spearman is 0.1609, not significant
Macedonia: Pearson is 0.373; Spearman is 0.3924, not significant
Sierra Leone: Pearson is 0.5094; Spearman is 0.2858, not significant
Zimbabwe: Pearson is 0.2791; Spearman is 0.2612, not significant
Burundi: Pearson is 0.4269; Spearman is 0.2823, not significant
Cameroon: Pearson is -0.1538; Spearman is -0.1018, not significant
Comoros: Pearson is -0.0408; Spearman is 0.2358, not significant
Kenya: Pearson is 0.1287; Spearman is -0.1646, not significant

For two countries (Colombia and Côte d'Ivoire), the coefficients are close in magnitude, although the Spearman coefficients are significant only at the 10 percent level, not the 5 percent level.

Correlation of First Differences

The average correlation coefficients for the first differences in the group of "Partially Free" countries are low for the Former Soviet Union and the Middle East (Table C-1).

• For the Former Soviet Union, the average correlation coefficient is 0.148. In particular, the coefficients are low for Armenia (0.1871) and Tajikistan (0.1320) and close to zero for Ukraine (0.0891). Negative coefficients are observed for Kazakhstan (-0.1562), Moldova (-0.1800), and Russia (-0.5188). Satisfactory coefficients for the first differences are found in this group only for Azerbaijan, Belarus, and Georgia.

• For the Middle East, the average is 0.285392. In this group, negative coefficients are observed for Egypt (-0.1292), Iran (-0.0531), and Yemen North (-0.0408), and close to zero, but positive, for Tunisia (0.0840).

In the other regional groups there are also some countries with negative or near-zero coefficients: Malaysia (-0.083), Panama (-0.2525), Angola (-0.0788), Côte d'Ivoire (-0.091), Liberia (0.000), Madagascar (0.0589), Rwanda (-0.2813), Togo (-0.0195), Uganda (0.0211), Chad (-0.218), Comoros (-0.3467), and Equatorial Guinea (-0.3725). The average correlation coefficients for the other regional groups are the following: Asia (0.4829), Latin America (0.54092105), and Africa (0.408083).

In the group of "Democratic" countries (Table C-2), negative correlations for the first differences are observed for Cyprus (-0.6930), France (-0.0197), and Mauritius (-0.0197), and a close-to-zero coefficient for Trinidad (0.0163). The average for this group is also very low, equal to 0.11855714.

For "Autocratic" countries (Table C-3), negative coefficients are observed for China (-0.0113), Oman (-0.0496), Yemen South (-0.4123), and Mauritania (-0.0197), and zero correlation for Syria. There are several countries where the correlation coefficients for the first differences are positive although the correlations between FH and Polity scores are negative (Bahrain, Iraq, and Morocco). The average correlation coefficient for "Autocratic" countries is 0.296829.

TABLE C-1  "Partially Free" Countries (Polity Scores -5 to +7)—Correlations with FH Scores

Country | Number of Observations | Years | Pearson Correlation Coefficient | Spearman Rank Order Coefficient | Correlation Coefficient for First Differences

Asia
Cambodia | 21 | 1972-1978; 1988-2002 | 0.32810 | 0.3453* | 0.2763
Fiji | 31 | 1972-2002 | 0.86190 | 0.9474 | 0.6204
Indonesia | 31 | 1972-2002 | 0.78120 | 0.6197 | 0.5473
Malaysia | 31 | 1972-2002 | 0.64410 | 0.6942 | -0.0830
Mongolia | 31 | 1972-2002 | 0.98480 | 0.9694 | 0.6717
Philippines | 31 | 1972-2002 | 0.93820 | 0.8965 | 0.5049
South Korea | 31 | 1972-2002 | 0.96250 | 0.8702 | 0.4825
Taiwan | 31 | 1972-2002 | 0.93100 | 0.9355 | 0.5105
Thailand | 31 | 1972-2002 | 0.72970 | 0.6443 | 0.8155
Average | | | 0.79572 | 0.769167 | 0.4829
Variance | | | 0.04410 | 0.043645 | 0.06668676
Standard deviation | | | 0.20999 | 0.208915 | 0.2582378

Former Soviet Union
Armenia | 11 | 1992-2002 | 0.16320 | 0.1615* | 0.1871
Azerbaijan | 11 | 1992-2002 | -0.08080 | 0.2864* | 0.6191
Belarus | 11 | 1992-2002 | 0.97200 | 0.9877 | 0.7319
Georgia | 11 | 1992-2002 | 0.81110 | 0.7709 | 0.4286
Kazakhstan | 11 | 1992-2002 | 0.54070 | 0.6992 | -0.1562
Moldova | 11 | 1992-2002 | 0.60190 | 0.4550* | -0.1800
Russia | 11 | 1992-2002 | -0.77130 | -0.7287 | -0.5188
Tajikistan | 11 | 1992-2002 | 0.75710 | 0.7944 | 0.1320
Ukraine | 11 | 1992-2002 | -0.33440 | -0.3015* | 0.0891
Average | | | 0.29550 | 0.347211 | 0.148
Variance | | | 0.34806 | 0.317729 | 0.16145
Standard deviation | | | 0.58997 | 0.563674 | 0.40181

Middle East
Afghanistan | 24 | 1972-2002 (7 missing values) | 0.18320 | 0.2388* | 0.3839
Algeria | 31 | 1972-2002 | 0.59750 | 0.5458 | 0.6766
Bangladesh | 31 | 1972-2002 | 0.83010 | 0.7977 | 0.5921
Egypt | 31 | 1972-2002 | -0.20360 | -0.0889* | -0.1292
Iran | 31 | 1972-2002 | -0.43260 | -0.4784 | -0.0531
Jordan | 31 | 1972-2002 | 0.87020 | 0.9045 | 0.3692
Yemen North | 18 | 1972-1989 | 0.64240 | 0.5127 | -0.0408
Yemen | 13 | 1990-2002 | -0.00960 | -0.2060* | 0.4714
Nepal | 31 | 1972-2002 | 0.77640 | 0.7313 | 0.2268
Pakistan | 31 | 1972-2002 | 0.81720 | 0.8421 | 0.6368
Sri Lanka | 31 | 1972-2002 | 0.75830 | 0.7932 | 0.2070
Tunisia | 31 | 1972-2002 | -0.02650 | -0.0452* | 0.0840
Average | | | 0.40025 | 0.378967 | 0.285392
Variance | | | 0.21838 | 0.227563 | 0.07862994
Standard deviation | | | 0.46731 | 0.477035 | 0.2804103

Latin America
Argentina | 31 | 1972-2002 | 0.84850 | 0.6961 | 0.5757
Bolivia | 31 | 1972-2002 | 0.86500 | 0.7738 | 0.1415
Brazil | 31 | 1972-2002 | 0.72550 | 0.6021 | 0.3323
Chile | 31 | 1972-2002 | 0.97890 | 0.8875 | 0.8631
Colombia | 31 | 1972-2002 | 0.38110 | 0.3248** | 0.3981
Dominican Republic | 31 | 1972-2002 | 0.41910 | 0.3863 | 0.5327
Ecuador | 31 | 1972-2002 | 0.96720 | 0.8145 | 0.8878
Honduras | 31 | 1972-2002 | 0.91660 | 0.5534 | 0.3793
Uruguay | 31 | 1972-2002 | 0.96570 | 0.9126 | 0.8292
Venezuela | 31 | 1972-2002 | 0.91890 | 0.8858 | 0.6999
Guatemala | 31 | 1972-2002 | 0.52950 | 0.4147 | 0.7231
Guyana | 31 | 1972-2002 | 0.61740 | 0.6223 | 0.4895
Haiti | 31 | 1972-2002 | 0.77960 | 0.5672 | 0.8299
El Salvador | 31 | 1972-2002 | 0.44230 | 0.4156 | 0.4314
Mexico | 31 | 1972-2002 | 0.45440 | 0.2681* | 0.1765
Nicaragua | 31 | 1972-2002 | 0.69900 | 0.7018 | 0.7508
Panama | 31 | 1972-2002 | 0.78820 | 0.8845 | -0.2525
Paraguay | 31 | 1972-2002 | 0.93820 | 0.8731 | 0.8007
Peru | 31 | 1972-2002 | 0.90660 | 0.8867 | 0.6885
Average | | | 0.74430 | 0.656363 | 0.5409211
Variance | | | 0.04356 | 0.046605 | 0.08946278
Standard deviation | | | 0.20870 | 0.215881 | 0.2991033

European Union
Turkey | 31 | 1972-2002 | 0.45220 | 0.6856 | 0.5807
Spain | 31 | 1972-2002 | 0.97670 | 0.8546 | 0.5638
Greece | 31 | 1972-2002 | 0.89570 | 0.1609* | 0.6981
Macedonia | 11 | 1992-2002 | 0.37300 | 0.3924* | 0.6713
Portugal | 31 | 1972-2002 | 0.95720 | 0.8370 | 0.7478
Albania | 31 | 1972-2002 | 0.95260 | 0.9623 | 0.2263
Bulgaria | 31 | 1972-2002 | 0.99360 | 0.9750 | 0.9330
Croatia | 12 | 1991-2002 | 0.92370 | 0.7648 | 0.5851
Czechoslovakia | 21 | 1972-1992 | 0.99360 | 0.8176 | 0.9911
Yugoslavia | 31 | 1972-2002 | 0.87510 | 0.7940 | 0.3907
Hungary | 31 | 1972-2002 | 0.98420 | 0.9123 | 0.7045
Poland | 31 | 1972-2002 | 0.98750 | 0.9314 | 0.6476
Romania | 31 | 1972-2002 | 0.93910 | 0.8766 | 0.5623
Slovakia | 10 | 1993-2002 | 0.90800 | 0.9039 | 0.5754
Average | | | 0.87229 | 0.776314 | 0.6341214
Variance | | | 0.03957 | 0.05302 | 0.03728278
Standard deviation | | | 0.19893 | 0.230261 | 0.19308749

Africa
Angola | 26 | 1976-1991; 1993-2002 | 0.64430 | 0.6916 | -0.0788
Côte d'Ivoire | 31 | 1972-2002 | 0.21770 | 0.3316** | -0.0910
Kenya | 31 | 1972-2002 | 0.12870 | -0.1646* | 0.5633
Liberia | 25 | 1972-1989; 1996-2002 | -0.05210 | -0.0088 | 0.0000
Lesotho | 30 | 1972-1997; 1999-2002 | 0.79260 | 0.8084 | 0.6109
Madagascar | 31 | 1972-2002 | 0.90170 | 0.8745 | 0.0589
Malawi | 31 | 1972-2002 | 0.99040 | 0.9925 | 0.9026
Mali | 31 | 1972-2002 | 0.98390 | 0.8861 | 0.7004
Mozambique | 28 | 1975-2002 | 0.95730 | 0.9106 | 0.6709
Nigeria | 31 | 1972-2002 | 0.86180 | 0.7510 | 0.8536
Niger | 31 | 1972-2002 | 0.94460 | 0.8791 | 0.7003
Rwanda | 31 | 1972-2002 | -0.54360 | -0.7643 | -0.2813
South Africa | 31 | 1972-2002 | 0.94290 | 0.8831 | 0.3889
Senegal | 31 | 1972-2002 | 0.72580 | 0.6668 | 0.2634
Sierra Leone | 27 | 1972-1996; 1998-2000 | 0.50940 | 0.2858* | 0.7987
Sudan | 31 | 1972-2002 | 0.54780 | 0.4058 | 0.8398
Tanzania | 31 | 1972-2002 | 0.95040 | 0.8904 | 0.5234
Togo | 31 | 1972-2002 | 0.81130 | 0.8327 | -0.0195
Uganda | 29 | 1972-1978; 1980-1984; 1986-2002 | 0.57990 | 0.6659 | 0.0211
Zambia | 31 | 1972-2002 | 0.87650 | 0.8812 | 0.8956
Zimbabwe | 31 | 1972-2002 | 0.27910 | 0.2612* | 0.2902
Benin | 31 | 1972-2002 | 0.98980 | 0.8889 | 0.9128
Burkina Faso | 31 | 1972-2002 | 0.82510 | 0.8709 | 0.7828
Burundi | 28 | 1972-1992; 1996-2002 | 0.42690 | 0.2823* | 0.2181
Cameroon | 31 | 1972-2002 | -0.15380 | -0.1018* | 0.2964
Central African Republic | 31 | 1972-2002 | 0.90760 | 0.8678 | 0.6987
Chad | 26 | 1972-1977; 1984-2002 | 0.86520 | 0.8529 | -0.218
Comoros | 24 | 1976-1994; 1996-2002 | -0.04080 | 0.2358* | -0.3467
Congo Brazzaville | 31 | 1972-2002 | 0.88020 | 0.8362 | 0.7504
Equatorial Guinea | 31 | 1972-2002 | -0.43130 | -0.4482 | -0.3725
Ethiopia | 29 | 1972-1973; 1974-1990; 1992-2002 | 0.83050 | 0.6848 | 0.1520
Gabon | 31 | 1972-2002 | 0.92670 | 0.9295 | 0.6838
Gambia | 31 | 1972-2002 | 0.92960 | 0.9447 | 0.8985
Ghana | 31 | 1972-2002 | 0.91910 | 0.8917 | 0.5412
Guinea Bissau | 27 | 1975-1997; 1999-2002 | 0.94260 | 0.8784 | 0.8501
Guinea | 31 | 1972-2002 | 0.87330 | 0.9561 | 0.2320
Average | | | 0.631697 | 0.598072 | 0.408083
Variance | | | 0.182355 | 0.19153 | 0.165665
Standard deviation | | | 0.427031 | 0.437642 | 0.40702

* Coefficient is not significant at the 5 percent significance level.
** Coefficient is not significant at the 5 percent level, but is significant at the 10 percent level.

TABLE C-2  "Democratic" Countries (Polity Scores 8-10)—Correlations with FH Scores

Country | Number of Observations | Years | Correlation Coefficient | Correlation for First Differences
India | 31 | 1972-2002 | 0.17940 | 0.5833
Israel | 31 | 1972-2002 | 0.34690 | 0.5708
Jamaica | 31 | 1972-2002 | 0.20880 | 0.3919
Trinidad | 31 | 1972-2002 | 0.15910 | 0.0163
Cyprus | 31 | 1972-2002 | 0.18380 | -0.6930
France | 31 | 1972-2002 | 0.02550 | -0.0197
Mauritius | 31 | 1972-2002 | 0.79870 | -0.0197
Average | | | 0.26446 | 0.11855714
Variance | | | 0.067371 | 0.200422906
Standard deviation | | | 0.259559 | 0.447686169

TABLE C-3  "Autocratic" Countries (Polity Scores -10 to -6)—Correlations with FH Scores

Country | Number of Observations | Years | Correlation Coefficient | Correlation for First Differences
China | 31 | 1972-2002 | 0.34910 | -0.0113
Burma | 31 | 1972-2002 | 0.53420 | 0.3216
USSR | 20 | 1972-1991 | 0.84570 | 0.7358
Bahrain | 31 | 1972-2002 | -0.12800 | 0.3623
Iraq | 31 | 1972-2002 | -0.06930 | 0.4152
Kuwait | 30 | 1972-1989; 1991-2002 | 0.39630 | 0.8575
Morocco | 31 | 1972-2002 | -0.18060 | 0.4120
Oman | 31 | 1972-2002 | 0.57210 | -0.0496
Syria | 31 | 1972-2002 | -0.25580 | 0.0000
Yemen South | 18 | 1972-1989 | -0.78260 | -0.4123
Eritrea | 10 | 1993-2002 | 0.77170 | 0.3500
Mauritania | 31 | 1972-2002 | 0.33160 | -0.0197
Swaziland | 31 | 1972-2002 | 0.78380 | 0.8647
Congo Kinshasa | 20 | 1972-1991 | 0.66670 | 0.3294
Average | | | 0.27392 | 0.296829
Variance | | | 0.234796 | 0.136285
Standard deviation | | | 0.484558 | 0.369168
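The comparisons above were computed in Stata. For readers who want to reproduce the three statistics for a single country, a minimal sketch in Python follows; the score vectors are invented placeholders, not actual Freedom House or Polity data, and the FH scores are assumed to have been rescaled so that higher values indicate more democracy.

```python
# Sketch of the three statistics reported in Tables C-1 to C-3:
# Pearson and Spearman correlations between annual FH and Polity scores
# for one country, plus the correlation of the first differences.
# The score vectors below are illustrative placeholders, not real data.
import numpy as np
from scipy.stats import pearsonr, spearmanr

fh     = np.array([2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 6])    # hypothetical FH scores (rescaled)
polity = np.array([-2, -1, 0, 1, 1, 2, 3, 3, 4, 5, 6])  # hypothetical Polity scores

r_pearson, p_pearson = pearsonr(fh, polity)   # linear association (levels)
rho_spear, p_spear = spearmanr(fh, polity)    # rank-order association (levels)

# First differences: year-to-year changes in each index.
d_fh, d_polity = np.diff(fh), np.diff(polity)
r_diff, p_diff = pearsonr(d_fh, d_polity)

print(f"Pearson r           = {r_pearson:.4f} (p = {p_pearson:.3f})")
print(f"Spearman rho        = {rho_spear:.4f} (p = {p_spear:.3f})")
print(f"First-difference r  = {r_diff:.4f} (p = {p_diff:.3f})")
```

Unlike the Stata output described above, scipy reports p-values for both the Pearson and the Spearman coefficients, so the significance of each can be checked directly.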

WORKSHOP AGENDA AND PARTICIPANTS

Democracy Indicators for Democracy Assistance
January 26-27, 2007
Boston University

AGENDA

Friday, January 26, 2007

1:00 p.m.  Meeting begins
           • Opening remarks
           • Introductions
           • Brief project overview
           • Plan for the meeting

1:30 p.m.  Overview: USAID and Democracy Assistance Work
           History of USAID Indicator Work
             David Black, USAID
           Applicability to USAID Programming and Evaluation
             Margaret Sarles, USAID

2:00 p.m.  Extant Indicators. How good are they? To what degree do they fulfill USAID's objectives, and to what extent do they fall short? Particular focus on Polity, Freedom House (with its newly released subcomponents), and the new (somewhat disaggregated) index from the Economist Intelligence Unit.

3:00 p.m.  Break

3:15 p.m.  Defining and Measuring Democracy. What is democracy? Can its dimensions and subcomponents be specified? What are the boundaries of what we choose to measure? Which important aspects of society (i.e., human rights, economic freedoms, and perhaps some things labeled governance) should fall outside our definition of democracy? Should the project also include aspects of governance that do not fall within the rubric of democracy (tout court)?

6:30 p.m.  Meeting Adjourns

7:00 p.m.  Committee Working Dinner

Saturday, January 27, 2007

10:00 a.m.  The Aggregation Problem. Can aggregation rules be arrived at (a) within dimensions and (b) across dimensions? Can we provide some guidance to USAID on how to define "Big-D" democracy? Or is it advisable to avoid this highest level of aggregation?

11:00 a.m.  History. How important is the historical aspect of the index? What would have to be sacrificed from the current index in order for it to be extended back to 1960, 1900, or 1800?

11:30 a.m.  Management and Payoff. How to make this project work? Will the necessary data be available? How big a project is this, really? How much time would it take? How much money would it cost? How would it be organized? (Should we rely primarily on students or expert staff? If the latter, would they need to be paid, and if so how much?) What is the potential payoff of this project? Is it worth the money it would take?

12:00 p.m.  General Discussion (Lunch meeting). Revisit all issues to see what points of consensus have been reached and what points of disagreement remain. Try to resolve the latter. Return to issues that need more discussion.

1:30 p.m.  Final Recommendations and Conclusions

2:00 p.m.  Meeting Adjourns

PARTICIPANTS

David Black, U.S. Agency for International Development
Michael Coppedge, Notre Dame University
John Gerring, Boston University
Andrew Green, Georgetown University
Rita Guenther, National Academies
Jo Husbands, National Academies
Gerardo Munck, University of Southern California
Margaret Sarles, U.S. Agency for International Development
Frederic Schaffer, Harvard University
Richard Snyder, Brown University
Paul Stern, National Academies
Nicolas van de Walle, Cornell University

OUTLINE FOR A DISAGGREGATED MESO-LEVEL DEMOCRACY INDEX

John Gerring

Chapter 3 introduced the Committee's proposal to develop a disaggregated index, which we believe will better serve USAID's needs for strategic assessment and tracking. At the meso level, we identified 13 dimensions of democracy that may be independently assessed:

1. National Sovereignty: Is the nation sovereign?
2. Civil Liberty: Do citizens enjoy civil liberty in matters pertaining to politics?
3. Popular Sovereignty: Are elected officials sovereign relative to non-elected elites?
4. Transparency: How transparent is the political system?
5. Judicial Independence: How independent, clean, and empowered is the judiciary?
6. Checks on the Executive: Are there effective checks on the executive?
7. Election Participation: Is electoral participation unconstrained and extensive?
8. Election Administration: Is the administration of elections fair?
9. Election Results: Do results of an election indicate that a democratic process has occurred?
10. Leadership Turnover: Is there regular turnover in the top political leadership?
11. Civil Society: Is civil society dynamic, independent, and politically active?
12. Political Parties: Are political parties well institutionalized?
13. Subnational Democracy: How decentralized is political power and how democratic is politics at subnational levels?

The rest of this section of Appendix C elaborates on some of the issues related to the proposed index, concluding with a more detailed listing of the 13 dimensions listed above.

Components

Each dimension has multiple components, chosen with six criteria in mind: (a) centrality to the dimension, (b) centrality to the overall concept of democracy (defined minimally and maximally, as explained in the text), (c) the possible incorporation of existing data, (d) measurement precision, (e) accuracy (reliability), and (f) nonredundancy. Each component is stated in the form of a question or statement that may be coded numerically for a given country or territory during a given year. Further work will be required to specify what these scales mean in the context of each question. The devil is always in the details.

Coding categories are dichotomous (yes/no), categorical (unranked), nominal (ranked), or interval. In certain cases, it may be possible to combine separate components into more aggregated nominal scales without losing information (Coppedge and Reinicke 1990). This is possible, evidently, only when the underlying data of interest are, in fact, nominal.

There are roughly 100 components in the index as currently constructed. While this may seem like quite a few, the reader is urged to consider that most of these questions—indeed, the vast majority—are very simple to answer. Thus, it should not take a country expert (or well-coached student assistant) very long to complete the questionnaire. Indeed, this is precisely the point. A longer set of questions is sometimes quicker to complete than a much shorter set of questions, if the latter are vague and ambiguous (due, we suppose, to a high level of aggregation).

For each datum, one should record (a) the coding (numerical or natural language), (b) the source(s) on which the coding was based, (c) the coder(s), (d) any revisions to the initial coding that may have been made in previous iterations of the dataset, (e) any further explanation that might be helpful, and (f) estimates of uncertainty (discussed below). Evidently, it is important that the data-storage software be capable of handling numerical and narrative responses (e.g., MS Access).

Objective/Subjective Measures

With respect to attaining greater accuracy, "hard" or "objective" indicators—based on what might be considered factual matters—are preferred over expert opinions. As one example, one might consider how to replace (or supplement) the opinion of country experts about how free the press is with a content analysis of major news outlets. Where the press is free, one would expect to find (a) a dispersion of views across news sources and (b) criticism of political leaders. Both signal the existence of the sort of open debate that is impossible if the press is constrained, and inevitable (one would think) if it is not.

At the same time, it is important to note that the development of an objective measure for a difficult concept such as press freedom is apt to be time-intensive and costly, and may not be possible at all for previous eras. Additionally, objective indicators are sometimes subject to the problem of "teaching to the test"; governments can attain higher scores by fulfilling some criterion that has little import for democracy. The benefits of easy data collection thus must be balanced against the benefits of data efficiency, coverage, and conceptual validity.
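To make the content-analysis idea concrete, a deliberately crude sketch follows. The outlet names, story counts, and the notion of a "government-critical" story are invented for illustration; an actual indicator would require a far more careful classification scheme than anything suggested here.

```python
# Illustrative content-analysis indicator of press freedom, following the
# two signals described above: (a) dispersion of editorial stances across
# outlets and (b) prevalence of criticism of political leaders.
# All outlet names and counts are hypothetical.
import statistics

# (government-critical stories, total stories) per major outlet in one year
outlets = {
    "Outlet A": (120, 400),
    "Outlet B": (15, 380),
    "Outlet C": (95, 350),
}

shares = [critical / total for critical, total in outlets.values()]
mean_criticism = statistics.mean(shares)   # (b) how common criticism is overall
dispersion = statistics.pstdev(shares)     # (a) spread of stances across outlets

print(f"Mean share of critical stories: {mean_criticism:.2f}")
print(f"Dispersion across outlets:      {dispersion:.2f}")
# Very low values on both would be consistent with a constrained press;
# classifying stories as "critical" is itself a hard measurement problem.
```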

Survey Research

A major question is whether to include dimensions that require public opinion surveys. The EIU index has many questions of this nature, for example, about how legitimate the general public considers the election process to be. ("Democracy assessments" also rely centrally on surveys, though their purpose is usually not comparative [Beetham 2004].) We have opted to include relatively few questions of this nature because (a) it is very expensive to do this sort of public opinion polling on a regular basis and across all countries, (b) it is less useful if polling is conducted only in "problem" countries (for then there is no basis for comparison), (c) no such historical information is available, (d) polling questions tend to vary in form or format from country to country and year to year and hence may convey misleading information if used as a cross-national indicator, (e) in nondemocratic countries citizens may not feel free to speak openly, and (f) public perceptions are not the most valid test of a country's level of democracy, even where civil liberties are ensured. (On the latter point, one might consider Mexico's recent election, which many members of the public thought was highly flawed, but which outside observers seem to think was conducted with considerable fairness.)

Data Sources

For contemporary years, obtaining sufficient information to code each new component ought to be fairly easy. Sources such as the Chronicle of Parliamentary Elections [and Developments], Keesing's Contemporary Archives, the Journal of Democracy ("Election Watch"), El Pais (www.elpais.es), the Statesman's Yearbook, the Europa Yearbook, the Political Handbook of the World, reports of the Inter-Parliamentary Union, the ACE Electoral Knowledge Network, Elections Around the World (www.electionworld.org), the International Foundation for Election Systems (www.IFES.org), the Commonwealth Election Law and Observer Group (www.thecommonwealth.org), the OSCE Office for Democratic Institutions and Human Rights (www.osce.org/odihr), the Carter Center (www.cartercenter.org), the International Republican Institute (www.iri.org), the National Democratic Institute (www.ndi.org), the Organization of American States (www.oas.org), country narratives from the annual Freedom House surveys, newspaper reports, and secondary accounts (according to subject and time period) will be invaluable. Given the project's broad theoretical scope and empirical reach, evidence-gathering approaches must be eclectic. Multiple sources will be employed wherever possible in order to cross-validate the accuracy of underlying data.
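A minimal sketch of how the per-datum record described under "Components" might be stored, together with a simple flag when independent sources disagree, is given below. The field names, source labels, and flagging rule are assumptions for illustration, not a specification of the project's database.

```python
# Illustrative per-datum record combining the fields listed under
# "Components" with a simple cross-source consistency check.
# Country, sources, and codings are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Datum:
    country: str
    year: int
    component: str
    codings: dict = field(default_factory=dict)  # source -> coded value
    coder: str = ""
    notes: str = ""                              # narrative explanation
    uncertainty: Optional[float] = None          # coder's own estimate

    def sources_disagree(self) -> bool:
        """True if independent sources yield different codings for this datum."""
        return len(set(self.codings.values())) > 1

d = Datum("Country X", 1994, "electoral_rolls_updated",
          codings={"Keesing's": 1, "Freedom House narrative": 0}, coder="RA-3")
if d.sources_disagree():
    print(f"{d.country} {d.year} / {d.component}: sources disagree, flag for review")
```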

Uncertainty

It is vital to include not only an estimate of a country's level of democracy across various dimensions and components but also a level of uncertainty associated with each estimate. This may be arrived at by combining two features of the analysis: (a) intercoder reliability (if available) and (b) subjective uncertainty (the coder's estimate of how accurate a given score might be). Uncertainty estimates serve several functions: Scholars may include these estimates as a formal component of their analyses; they provide a signal to policymakers of where the democracy index is most (and least) assured; and they focus attention on ways in which future iterations of the index may be improved.

Finally, uncertainty estimates allow for the inclusion of countries and time periods with vastly different quantities and qualities of data—without compromising the legitimacy of the overall project. As noted, contemporary codings are likely to be associated with lower levels of uncertainty than the analogous historical codings, and countries about which much is known (e.g., France) will be associated with lower levels of uncertainty than countries about which very little is known (e.g., the Central African Republic). Without corresponding estimates of uncertainty, an index becomes hostage to its weakest links; critics gravitate quickly to countries and time periods that are highly suspect, and the validity of the index comes under harsh assault—even if the quality of other data points is more secure. With the systematic use of uncertainty estimates, these very real difficulties are brought directly into view by granting them a formal status. In so doing, the legitimacy of the larger enterprise is enhanced, and misuses are discouraged.

Time

The dataset is assumed to be annual, though it might be coded at longer intervals in earlier historical periods. (One minor question to consider is whether codings should refer to the state of affairs pertaining at the end of the designated period [December 31] or to a mean value across the period of observation [January 1-December 31].)

It is strongly urged that the index—or at least some elements of it—be extended back in time, preferably to 1800. There are several reasons for this. First, if one wishes to judge trends, a trend line is necessary. And the longer the trend line, the more information will be available for analysis. Consider the question of how Ukraine is doing now—for example, in 2008. If a new index provides data only for that year, or several years prior, the meaning of a "5" (on some imagined scale) is difficult to assess. Similarly, a purely contemporary index is unable to evaluate the question of democratic "waves" occurring at distinct points in historical time (Huntington 1991) or of distinctive "sequences" in the transition process (McFaul 2005). If we wish to judge the accuracy of these hypotheses (and many others), we must have at our disposal a substantial slice of historical time.

Second, insofar as we wish to understand causal relations—what causes democracy and what democracy causes—it is vital to have a long time series so that causes and effects can be effectively disentangled. (Of course, this does not assure that they will be disentangled; but with observational data it is virtually a prerequisite.)

Third, recent work has raised the possibility that democracy's effects are long term, rather than (or in addition to) short term (Gerring et al. 2005, Converse and Kapstein 2006, Persson and Tabellini 2006). Indeed, it is quite possible that the short-term and long-term effects of democracy are quite different (plausibly, long-term effects are more consistent, and more positive along various developmental outcomes, than short-term effects). Consideration of these questions demands a historical coding of the key variable.

For all these reasons, we think it unlikely that any new index would displace Freedom House, Polity, and ACLP unless it can match the historical coverage of these well-established indices.

Summary Scores

For each dimension, a summary score will be suggested. Evidently, this task of aggregation is devilish, for all the reasons just reviewed. Yet it should be considerably easier to solve at this level than at the level of Big-D democracy. Thus, we propose to aggregate the results for each component so as to arrive at a single score for each of the 13 dimensions. This score will be expressed on a scale from 1 to 10, providing a snapshot view of how each country, in a given year, performs on that dimension. We feel confident that, with the aid of the underlying components listed in the index below, it will be possible for those knowledgeable about a country to reach agreement on the (approximate) level of national sovereignty, popular sovereignty, and so on enjoyed by that country in a given year.

A country's scores along these 13 dimensions constitute its Democracy Profile. This level of aggregation seems feasible and should be easy to compare across countries and through time. We also believe that this is a useful level of aggregation. It says something meaningful, something that should be understandable to all observers. It will allow USAID and other international actors a way of gauging progress and regress; it may even provide a way of gauging the relative success of different programs—though problems of causal attribution are inevitably knotty.
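As an illustration of the component-to-dimension aggregation just described, the sketch below rescales hypothetical component codings, averages them with equal weights, and maps the result onto the 1-10 scale. The component names, the 0-4 coding scale, and the equal-weight rule are all assumptions made for the example; the actual aggregation rules remain to be worked out.

```python
# Illustrative aggregation of component codings into a single 1-10
# dimension score. Component names, scale ranges, and the equal-weight
# averaging rule are assumptions for the sake of the example.
def dimension_score(components, scale_max=4):
    """components: mapping of component name -> coding on a 0..scale_max scale."""
    if not components:
        raise ValueError("no components coded")
    # Rescale each component to 0-1, average with equal weights,
    # then map the result onto the 1-10 dimension scale.
    rescaled = [value / scale_max for value in components.values()]
    return 1 + 9 * sum(rescaled) / len(rescaled)

election_admin = {        # hypothetical codings for one country-year
    "rules_disseminated": 3,
    "commission_independent": 2,
    "rolls_updated": 4,
    "vote_buying_absent": 1,
}
print(round(dimension_score(election_admin), 1))   # prints 6.6 for these codings
```

Weighted averages, minimum ("weakest-link") rules, or other aggregation schemes could be substituted in the same framework; choosing among them is exactly the aggregation problem discussed at the workshop.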

We are considerably less confident that it will be possible to reach agreement in aggregating across the 13 dimensions to reach a single, summary score for each country in a given year—"Big-D" democracy.

Logistics

In order to manage a project of this scope without losing touch with the particularities of each case, it is necessary to marry the virtues of cross-national data with the virtues of regional expertise. As currently envisioned, the project relies primarily upon country experts to do the case-by-case coding. Student assistants may be employed in a supporting role (e.g., to fetch data). These coding decisions will be supervised by several regional experts who are permanently attached to the project and who will work to ensure that coding procedures across countries, regions, and time periods are consistent. Extensive discussion and cross-validation will be conducted at all levels, including intercoder reliability tests.

We strongly advise an open and transparent system of commentary on the scores that are proposed for each country, after initial questionnaires are completed by country experts but before results are finalized. This might include a Web-based, Wikipedia-style discussion in which interested individuals are encouraged to comment on the scores provisionally assigned to the country or countries that they know well. This commentary might take the form of additional information—perhaps unknown to the country expert—that speaks to the viability of the coding. Or it might take the form of extended discussions about how a particular question applies to the circumstances of that country. Naturally, some cranky participants may be anticipated in such a process. However, the Wikipedia experience suggests that there are many civic-minded individuals, some of them quite sophisticated, who may be interested in engaging in this process and may have a lot to add. At the very least, it may provide further information upon which to base estimates of uncertainty (as discussed above). Final decisions, in any case, would be left to a larger committee.

Evidently, different components will involve different sorts of judgments and different levels of difficulty. Some issues are harder than others and will require more codings and recodings. As a general principle, wherever low intercoder reliability persists for a given question, that question should be reexamined and, if possible, reformulated. It is important that the process of revision be continual. Even after the completed dataset is posted, users should be encouraged to contribute suggestions for revision, and these suggestions should be systematically reviewed.
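A minimal sketch of the kind of agreement statistic that could flag such questions is shown below, using exact-match agreement between pairs of hypothetical coders. The coders, codings, and the 0.6 threshold are invented, and a production system might prefer a chance-corrected statistic such as Krippendorff's alpha.

```python
# Illustrative intercoder agreement check for one questionnaire item:
# the share of coder pairs giving identical codings, averaged over cases.
# Coder labels, codings, and the review threshold are hypothetical.
from itertools import combinations

def pairwise_agreement(codings_by_case):
    """codings_by_case: list of lists, one inner list of codings per country-year."""
    rates = []
    for codings in codings_by_case:
        pairs = list(combinations(codings, 2))
        rates.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(rates) / len(rates)

# Three coders, four country-years, codings on a 0-4 scale.
question_17 = [[3, 3, 2], [4, 4, 4], [1, 2, 3], [0, 0, 1]]
agreement = pairwise_agreement(question_17)
print(f"Mean pairwise agreement: {agreement:.2f}")
if agreement < 0.6:   # threshold is arbitrary, for illustration only
    print("Low agreement: consider reexamining or reformulating the question.")
```

The same routine, run separately under Freedom House, Polity, and the proposed index's guidelines, is one simple way to carry out the cross-index comparison described in the pilot tests below.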

Pilot Tests

Before USAID, or any agency, undertakes a commitment to develop—and maintain—a new democracy index, it is important that it be confident of the yield. Thus, we recommend several interim tests of a "pilot" nature.

One of the principal claims of this index is that greater intercoder reliability will be achieved when the concept of democracy is disaggregated. This claim may be probed through intercoder reliability tests across the leading democracy indices. A pilot test of this nature might be conducted in the following manner: Train the same set of coders to code all countries (or a subset of countries) in a given year according to guidelines provided by Freedom House, Polity, and the present index. Each country-year would receive several codings by different coders, thus providing the basis for an intercoder reliability test. These would then be compared across indices. Since the coders would remain the same, varying levels of intercoder reliability should be illustrative of basic differences in the performance of the indices.

Of course, there are certain methodological obstacles to any study of this sort. One must decide how much training to provide to the coders and how much time to give them. One must decide whether to employ a few coders to cover all countries or to have separate coders for each country. One must decide whether to hire "naïve" coders (e.g., students) or coders well versed in the countries and regions they are assigned to code (the "country expert" model). In any case, we think the exercise worthwhile, not only because it provides an initial test of the present index but also because it may bring a level of rigor to a topic—political indicators—that has languished for many years in a highly unsatisfactory state.

THE INDEX

Dimensions

1. National Sovereignty: Is the nation sovereign?
2. Civil Liberty: Do citizens enjoy civil liberty in matters pertaining to politics?
3. Popular Sovereignty: Are elected officials sovereign relative to non-elected elites?
4. Transparency: How transparent is the political system?
5. Judicial Independence: How independent, clean, and empowered is the judiciary?
6. Checks on the Executive: Are there effective checks on the executive?
7. Election Participation: Is electoral participation unconstrained and extensive?
8. Election Administration: Is the administration of elections fair?
9. Election Results: Do results of an election indicate that a democratic process has occurred?
10. Leadership Turnover: Is there regular turnover in the top political leadership?
11. Civil Society: Is civil society dynamic, independent, and politically active?
12. Political Parties: Are political parties well institutionalized?
13. Subnational Democracy: How decentralized is political power and how democratic is politics at subnational levels?

Clarifications

"Party" may refer to a longstanding coalition such as the CDU/CSU in Germany if that coalition functions in most respects like a single party. The identity of the party may be obscured by name changes. (If the party/coalition changes names but retains key personnel and is still run by and for the same constituency, then it should be considered the same organization.)

"Executive" refers to the most powerful elective office in a country (if there is one)—usually a president or prime minister.

Wherever there is disparity between formal rules (constitutional or statutory) and actual practice, coding decisions should be based on the latter.

Unless otherwise specified, the geographic unit of analysis is the (sovereign or semi-sovereign) nation-state. Evidently, there is enormous heterogeneity within large nation-states, necessitating judgments about which level of coding corresponds most closely to the mean value within that unit. Where extreme heterogeneity exists vis-à-vis the variable of interest, it may be important to include a companion variable that would indicate high within-country variance on that particular component. One thinks of contemporary Sri Lanka and Colombia—states where the quality of democracy is quite different across regions of the country.

Questions pertaining to elections may be disaggregated according to whether they refer to elections for the (a) lower house, (b) upper house, or (c) presidency. In some cases, (b) and/or (c) is nonexistent or inconsequential, in which case it should be ignored. If no election occurs in a given year, then many of these questions should be left unanswered (unless of course rules or norms pertaining to elections have changed in the interim). If more than one election occurs in a given year, there will be two entries for that country in that year. (This complicates data analysis, but it is essential to the purpose of the dataset, which is to provide primary-level data that can be used for further analysis.)

At some point, coding responses must be added to this questionnaire. Such responses may be dichotomous, multichotomous, or continuous, depending upon the question. However, we suggest that all original coding scales (where coding decisions are required) comprise no more than five categories. A larger number of options may create greater ambiguity. In any case, these response options should be as operational as possible. It should be clear what a "3" means with respect to the question at hand.

1. National Sovereignty

General question: Is the nation sovereign? Is the territory independent of foreign domination? (Note: We are not concerned here with pressures that all states are subject to as part of the international system.)

2. Civil Liberty

General question: Do citizens enjoy civil liberty in matters pertaining to politics? Note: Civil liberties issues pertaining specifically to elections are covered in later sections.

Does the government directly or indirectly attempt to censor the major media (print, broadcast, Internet)? Indirect forms of censorship might include politically motivated awarding of broadcast frequencies, withdrawal of financial support, influence over printing facilities and distribution networks, selective distribution of advertising, onerous registration requirements, prohibitive tariffs, and bribery. (See the recent index of Internet freedom developed by the Berkman Center for Internet and Society, Harvard University.)
Of the major media outlets, how many routinely criticize the government?
Are individual journalists harassed—i.e., threatened with libel, arrested, imprisoned, beaten, or killed—by government or nongovernmental actors while engaged in legitimate journalistic activities?
Is there self-censorship among journalists when reporting on politically sensitive issues?
Are works of literature, art, music, and other forms of cultural expression censored or banned for political purposes?
Do citizens feel safe enough to speak freely about political subjects in their homes and in public spaces?

Is it possible to form civic associations, including those with a critical view of government?
Is physical violence (e.g., torture) and/or arbitrary arrest targeted at presumed opponents of the government widespread?
Are certain groups systematically discriminated against by virtue of their race, ethnicity, language, caste, or culture to the point where it impairs their ability to participate in politics on an equal footing with other groups? (Note: This question pertains to citizens only [not noncitizens] and does not cover issues of disenfranchisement, which are included in a later section.) If so, how large (as a percentage of the total population) is this group(s)?

3. Popular Sovereignty

General question: Are elected officials sovereign relative to nonelected elites?

Are there national-level elections (even if only pro forma)?
If yes, are the governments that result from these elections fully sovereign—in practice, not merely in constitutional form—vis-à-vis any nonelective bodies whose members are not chosen by, or removable by, elected authorities (e.g., a monarchy, the military, and the church)? Note that this does not preclude extensive delegation of authority to nonelective bodies such as central banks and other agencies. But it does presume that the members of these nonelective authorities are chosen by, and may be removed in circumstances of extreme malfeasance by, elective authorities. This power of removal must be real, not merely formal. Thus, while constitutions generally grant power to civilian authorities to remove military rulers, it is understood that in some countries, during some periods, an action of this nature would not be tolerated. In most cases, it will be clear to those familiar with the countries in question when this sort of situation obtains, though there may be questions about the precise dates of transition (e.g., when Chilean political leaders regained control over the military after the Pinochet dictatorship).

4. Transparency

General question: How transparent is the political system? Note: This section pertains to the polity as a whole, while some other questions listed below pertain to particular sections of the polity (e.g., election administration).

Are government decisions made public in a timely fashion and otherwise made accessible to citizens?
Are decision-making processes open to public scrutiny, for example, through committee hearings?

5. Judicial Independence

General question: How independent, clean, and empowered is the judiciary?

Is the judiciary independent of partisan-political pressures?
Is the judiciary noncorrupt?
Is the judiciary sufficiently empowered to enforce the laws of the land, including those pertaining to the ruling elite (or is its power so reduced that it cannot serve as a check on other branches of government)?

6. Checks on the Executive

General question: Are there effective checks—other than elections—on the exercise of power by the executive? Note: Questions pertaining to electoral accountability are addressed elsewhere.

Constitutionality
Does the executive behave in a constitutional manner (i.e., according to written constitutional rules or well-established constitutional principles)?

Term limits
If the executive is elected directly by the general electorate (or through an electoral college), are there term limits? If so, what are they? Are they respected (at this point in time)?

The legislature
Is the executive able to control the legislature by undemocratic means (e.g., by manipulating legislative elections, by proroguing the legislature, by buying votes in the legislature)?
Is the executive able to make major policy decisions without legislative approval, i.e., without passing laws? Can the executive rule by fiat?

The judiciary
Is the executive accountable to the judiciary—which is to say, is the judiciary prepared to enforce the constitution, even when in conflict with the executive?

7. Election Participation

General question: Is electoral participation unconstrained and extensive?

Suffrage
What percent of citizens (if any) are subject to de jure and de facto eligibility restrictions based on ascriptive characteristics other than age (e.g., race, ethnicity, religion)?
What percent of the population are excluded from suffrage by virtue of being permanent residents (noncitizens)?

Turnout
Note: This variable is meaningless in the absence of free and fair elections. Therefore, although data may be collected for all countries, it should be considered an aspect of democracy only where countries score above some minimal level on Election Administration.
What percent of the adult (as defined by the country's laws) electorate turned out to vote?

8. Election Administration

(This section draws on Munck 2006.)

General question: Is the administration of elections fair?

Election law
At this time, are regularly scheduled elections—past and future—on course, as stipulated by election law or well-established precedent? (If the answer is no, the implication is that they have been suspended or postponed in violation of election law or well-established precedent.)
Are there clear and explicit sets of rules for the conduct of elections, and are the rules clearly disseminated (at the very least, to political elites in the opposition)?

Election commission
Note: Election commission refers to whatever government bureau(s) is assigned responsibility for setting up and overseeing elections.
Is it unbiased and independent of partisan pressures or balanced in its representation of different partisans?
Does it have sufficient power and/or prestige to enforce its own provisions? (Are its decisions respected and carried out?)

Registration
Are electoral rolls updated regularly? Do they accurately reflect who has registered? (If the election rolls are not made public, then the answer is assumed to be No.)
Do names of those registered appear on the rolls at their local polling station (as they ought to)?

Integrity of the vote
Are all viable political parties and candidates granted access to the ballot (without unduly burdensome qualification requirements)?
Are opposition candidates/parties subject to harassment (e.g., selective prosecution, intimidation)?
Is the election process manipulated through other means (e.g., changing age or citizenship laws to restrict opposition candidates' access to the ballot, stalking-horse candidates, snap elections scheduled without sufficient time for the opposition to organize)?
Are election choices secret (or are there violations)?

Is vote-buying (bribery) and/or intimidation of voters widespread?
Are other forms of vote fraud (e.g., ballot-stuffing, misreporting of votes) widespread?
What percent of polling stations did not open on time, experienced an interruption, ran out of voting materials, or experienced some other sort of irregularity?
What was the percentage of lost or spoiled ballots?

Media
Do all parties and candidates have equal access to the media? Equal access is understood as (a) all candidates or parties for a particular office are treated equally (thus granting an advantage to small parties or minor candidates) or (b) access to the media is in rough proportion to the demonstrated support of a party or candidate in the electorate.
Is election reportage (reportage about politics during election periods) biased against certain parties and/or candidates?

Campaign finance
Are there disclosure requirements for large donations? If so, are these effective (i.e., are they generally observed)?
Is public financing available? If so, does it constitute at least one-third of the estimated expenditures by candidates and/or parties during the course of a typical campaign?
Does the incumbent enjoy unfair advantages in raising money by virtue of occupying public office? Unfair advantage involves such things as (a) a levy on civil servants to finance the party's campaigns, (b) widespread and organized use of civil servants for campaign purposes, or (c) use of government materiel for campaign purposes.
Is campaign spending heavily tilted in favor of the incumbent party or candidate(s)? That is, does the incumbent party or candidate(s) expend more financial resources than their support in the electorate (as judged by polls or general impressions) or the legislature would indicate? Note: Where campaign expenditures are unreported, or such reports are unreliable, they may be estimated from each party's campaign activity, e.g., the number of political advertisements on TV, radio, or billboards.

Election monitors
Were election monitors from all parties and/or from abroad allowed to monitor the vote at polling stations across the country?
How many polling stations (percent) were attended by election monitors (other than those representing the ruling party or clique)?

9. Election Results

General question: Do results of an election indicate that a democratic process has occurred?

What percent of the vote was received by the largest party or winning candidate in the final (or only) round? Specify name of party or candidate:
What percent of the vote was received by the second largest party or second most successful candidate in the final round? Specify name of party or candidate:
What percent of the seats in the lower/upper house was obtained by the largest party? Specify name of party:
What percent of the seats in the lower/upper house was obtained by the second largest party? Specify name of party:
Do the official results conform, more or less, to actual ballots cast (as near as that can be estimated)?
What was the general verdict by international election monitors and/or the international press vis-à-vis the democratic quality of this election, i.e., how fair was it? Note: If there was disagreement, then please report the mean (average) result, weighting each group by its level of involvement in overseeing this election.
Did losing parties/candidates accept the essential fairness of the process and the result?

10. Leadership Turnover

General question: Is there regular turnover in the top political leadership? Note: Turnover may be regarded as a sufficient condition of effective electoral competition. If turnover occurs (by democratic instruments), contestation must be present—though it may of course still be flawed.

Executive
How many years has the current executive been in office? (Source: "YRSOFFC" variable from the DPI.)
How many consecutive terms has the current executive served?
Did the last turnover in power occur through democratic means (e.g., an election, a loss of confidence in the legislature, or a leader's loss of confidence in his/her own party)?

Ruling party/coalition
How many years has the current ruling party or coalition been in office? (Source: "PRTYIN" variable from the DPI.)
How many consecutive terms has the current ruling party or coalition served? Note: Relevant only where elections fill the major offices.
Did the last turnover in power occur through democratic means (e.g., an election, a loss of confidence in the legislature, or a leader's loss of confidence in his/her own party)?

11. Civil Society

General question: Is civil society dynamic, independent, politically active, and supportive of democracy?

Notes:
a. "Civil society organization" refers to any of the following: an interest group, a social movement, a church group, or a classic NGO, but not a private business, political party, or government agency. It must be at least nominally independent of government and the private sector.
b. Questions about civil liberties, of obvious significance to civil society, are covered in a separate section.

Existing indicators: the Civil Society Index compiled by the Global Civil Society Project.

How much support for democracy is there among citizens of the country? (Sources: World Values Surveys, Eurobarometer, Afrobarometer, Latinobarometer [see EIU].)
What is the level of literacy (a presumed condition of effective participation)? (Source: WDI.)
What percent of citizens regularly listen to or read the national news?
Are civil society organizations generally independent of direct government influence (or are they manipulated by the government and its allies such that they do not exercise an independent voice)?
Are there any sizeable civil society organizations that are routinely critical of the government?
Are major civil society organizations—representing key constituencies on an issue—routinely consulted by policymakers on policies relevant to their members (e.g., by giving testimony before legislative committees)?

12. Political Parties

General question: Are political parties well institutionalized?

Notes:
a. Questions about the freedom to form parties and participate in elections are included under Election Administration.
b. Questions below refer to all parties in a polity, considered as a whole. However, larger parties should be given greater weight in calculating answers so that the party system is adequately represented.

Are there well-understood rules governing each party's business and, if so, are these rules generally followed?

12. Political Parties

General question: Are political parties well institutionalized?

Notes:
a. Questions about the freedom to form parties and participate in elections are included under Election Administration.
b. The questions below refer to all parties in a polity, considered as a whole. However, larger parties should be given greater weight in calculating answers so that the party system is adequately represented (see the weighting sketch following this section).

Are there well-understood rules governing each party's business and, if so, are these rules generally followed?
Is there a clearly identifiable group of party members, and is this group relatively stable from year to year?
Do parties issue detailed policy platforms (manifestos)?
Do parties hold regular conventions and, if so, are these conventions sovereign (in the sense of making final decisions on party policy and procedure)?
Do parties have local sections (constituency groups), or are they centered on the capital and on a restricted group of local notables?
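Note (b) above asks that larger parties be given greater weight when the party-level answers are combined into a judgment about the party system. One natural reading is a seat-share-weighted average, sketched below; the parties, seat shares, and 0-1 codings are hypothetical, and the weighting scheme itself is an illustrative assumption rather than the report's prescribed method.

    # Sketch: combine party-level answers into a party-system score, weighting
    # each party by its share of legislative seats. Parties, seat shares, and
    # the 0-1 codings of the institutionalization questions are hypothetical.
    parties = [
        # (name, seat share, average 0-1 coding of the questions above)
        ("Party A", 0.48, 0.9),
        ("Party B", 0.32, 0.6),
        ("Party C", 0.20, 0.2),
    ]

    def party_system_score(parties):
        total_share = sum(share for _, share, _ in parties)
        return sum(share * score for _, share, score in parties) / total_share

    print(round(party_system_score(parties), 2))  # -> 0.66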

13. Subnational Government

General question: How democratic is politics at subnational levels?

Note: "Subnational government" refers to governments at regional and local levels.

How centralized is power within the polity, taking all factors into account (for a useful discussion of the relevant factors, see Rodden 2004)? As a way of calibrating this, Switzerland may be said to define the decentralized extreme and New Zealand the centralized extreme among democratic polities. Most authoritarian regimes are highly centralized, but not all (e.g., failed states such as Afghanistan or Somalia). To clarify, the question refers to the relative power balance between national and subnational levels; it does not attempt to judge the actual strength of control at either level. Whether both levels of government are weak or strong is irrelevant; what matters is their power relative to each other. The question pertains to practical power, not to formal/constitutional power. Note that centralization is usually not considered a definitional component of democracy: New Zealand, most would agree, is no less democratic than Switzerland. However, if power is highly centralized in a very large country (say, India), one may infer a significant problem of local accountability. In any case, the degree of centralization/decentralization gives meaning to the next question.

How democratic are electoral politics at the subnational level? If practices differ appreciably between national and subnational levels, and perhaps even between regional and local levels, it may be necessary to complete the previous sections (Election Participation, Election Administration, Election Results, Leadership Turnover) for different levels of government.

References

Converse, N., and Kapstein, E.B. 2006. The Economics of Young Democracies: Policies and Performance. Working Paper No. 85, Center for Global Development (March).
Coppedge, M., and Reinicke, W.H. 1990. Measuring Polyarchy. Studies in Comparative International Development 25:51-72.
Europa Yearbook. Various years. The Europa Yearbook. London: Europa Publications.
Gerring, J., Bond, P., Barndt, W., and Moreno, C. 2005. Democracy and Growth: A Historical Perspective. World Politics 57(3):323-364.
Huntington, S.P. 1991. The Third Wave: Democratization in the Late Twentieth Century. Norman, OK: University of Oklahoma Press.
McFaul, M. 2005. Transitions from Postcommunism. Journal of Democracy 16(3):5-19.
Munck, G.L. 2006. Standards for Evaluating Electoral Processes by OAS Election Observation Missions. Paper prepared for the Organization of American States.
Persson, T., and Tabellini, G. 2006. Democratic Capital: The Nexus of Political and Economic Change. NBER Working Paper No. 12175.
Rodden, J. 2004. Comparative Federalism and Decentralization: On Meaning and Measurement. Comparative Politics (July):481-500.
