4
Technology Issues

As described in Chapter 1, an election is not a single event but rather a process. It is thus helpful to consider the information technology (IT) of voting in two logically distinct categories: IT for voter registration and IT for voting.

4.1 INFORMATION TECHNOLOGY FOR VOTER REGISTRATION

Voter registration is affected by information technology, though the subject has received comparatively little attention in the public debate and is only now beginning to receive it. Voter registration is the gatekeeping process that seeks to ensure that only those eligible to vote are indeed allowed to vote when they show up at the polls to cast their votes. Although much of the voter registration process unfolds before Election Day, the final step generally occurs on Election Day itself: citizens register to vote before Election Day, and, presuming that they vote at the polls, their voting credentials are checked on Election Day.

Voter registration is a complex process, as one might expect of a decentralized endeavor that involves millions of voters. Historically, voter registration has been a local function and the primary responsibility of local election officials. However, under the Help America Vote Act of 2002 (HAVA), states are required to assume responsibilities that have previously been the province of individual local election jurisdictions. Specifically, HAVA calls for the states to create, for use in federal elections, a “single, uniform,
official, centralized, interactive computerized statewide voter registration list defined, maintained, and administered at the State level,” containing registration information and a unique identifier for every registered voter in the state. This requirement applies to essentially all states; according to the Department of Justice, this requirement would not be satisfied by local election jurisdictions continuing to maintain their own nonuniform voter registration systems in which records are only periodically exchanged with the state. Rather, HAVA requires a true statewide system that is both uniform in each local election jurisdiction and administered at the state level.1

Once a voter registry has been established, two primary technology-related tasks for voter registrars are to keep ineligible individuals off the registration lists and to make sure that eligible ones who are on the lists stay on the lists. A third task—registering new voters—occurs on a regular basis as people come of age or move into a community and want to vote and normally spikes right before or during an election. However, registering new voters occurs on a “retail” case-by-case basis, in contrast to the purging function, which is necessarily done “wholesale.”

Purging tasks arise because individuals identified as eligible voters may lose their eligibility for a number of reasons. A list of such reasons from Florida is typical2—voters may lose eligibility due to felony convictions, civil court rulings of mental incapacity, death, and inactivity. In addition, a voter may cease to be properly registered, because his or her eligibility to vote in particular electoral contests can be affected by a change in residence or by redistricting that places his or her residence in a different voting district. Finally, an individual registered to vote in more than one local election jurisdiction, even if he or she is otherwise an eligible voter, may vote only in the location in which he or she is legally entitled to vote.

Because lists of registered voters contain millions of entries, the purging of a voter registration list must be at least partially automated. That is, a computer is required to compare a large volume of information received from other secondary sources (e.g., departments of vital statistics for death notices, law enforcement or corrections agencies for felony convictions, departments of tax collection or motor vehicles for recent addresses) against its own database of eligible voters to determine if a given individual continues to be eligible.

1. See http://www.usdoj.gov/crt/voting/misc/faq.htm.
2. Florida Department of State, Florida Voter Registration System: Proposed System Design and Requirements, January 29, 2004. Available at http://election.dos.state.fl.us/hava/pdf/FVRSSysDesignReq.pdf.

Note also that states do not in general check across state boundaries to see if voters are registered in more than one state or if they have voted in two states on Election Day. Though this task sounds like a relatively simple one—just compare the lists3—it is enormously complicated by two facts: (1) the same individual may be represented on the different lists in different ways (John Jones and John X. Jones may refer to the same person, and he may have given the former name in registering to vote and the latter name in obtaining a driver’s license) and (2) the same name (e.g., John Jones) may refer to many different people. (This problem would be greatly ameliorated by the use of an identifier unique to the individual, such as a Social Security number, but for a variety of historical and legal reasons, the nation has chosen to eschew such use.)

Thus, there must be some specific criteria for determining whether or not different names refer to the same person. For example, to deal with the first fact above, one criterion might be this: If similar names have the same home address associated with them, the names refer to the same individual. Such a criterion thus requires a rule for determining “similarity” or a match. One such matching rule might be “if the first and last names are identical, consider the full name a match.” Under this approach, John Jones and John X. Jones would be deemed to be the same individual only if they share the same home address, but John Jones and Mary Jones would be deemed different individuals even if they shared the same home address. Suffixes on names, such as Jr. and Sr., can also cause problems in a similar manner. Similarly, the second fact involving identical names might require a criterion such as, “If the name is associated with several different home addresses, there are as many different individuals as there are home addresses.” In this case, the matching criterion applies to home addresses, which are somewhat less ambiguous than names.4

3. Lists provided by other sources must also be correct and complete (e.g., all those reported as felons must indeed have been convicted of felonies but not misdemeanors), but that point is outside of the scope of this discussion.
4. But not entirely. In the District of Columbia, for example, a specific residence may be listed as “3751 Joycelyn Street, NW” and “3751 1/2 Joycelyn Street, NW” in different official records of the D.C. government, depending on whether or not the computer software in use at any given department is able to process “1/2” as part of a street address.
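To make the matching rule just described concrete, the following is a minimal, illustrative Python sketch of the “identical first and last names at the same home address” criterion. The record fields and normalization steps are assumptions chosen for illustration, not a description of any actual state system.

    # Illustrative sketch only: a naive record-matching rule of the kind discussed
    # above ("same first and last name at the same home address"). Field names and
    # normalization choices are hypothetical.

    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace so trivial formatting differences do not block a match."""
        return " ".join(text.lower().split())

    def same_person(rec_a: dict, rec_b: dict) -> bool:
        """Apply the simple criterion: identical first name, identical last name,
        and identical home address."""
        return (
            normalize(rec_a["first_name"]) == normalize(rec_b["first_name"])
            and normalize(rec_a["last_name"]) == normalize(rec_b["last_name"])
            and normalize(rec_a["address"]) == normalize(rec_b["address"])
        )

    # Middle initials are ignored because only first and last names are compared,
    # so "John Jones" and "John X. Jones" at the same address match, while
    # "John Jones" and "Mary Jones" at the same address do not.
    voter = {"first_name": "John", "last_name": "Jones", "address": "12 Elm St"}
    dmv = {"first_name": "John", "last_name": "Jones", "address": "12 Elm St"}
    print(same_person(voter, dmv))  # True

Even this toy rule shows how brittle exact matching is: a nickname (“Jon” versus “John”) or a differently formatted address (“12 Elm St.” versus “12 Elm Street”) defeats it, which is why fuzzier criteria, and the false positives and false negatives they bring, enter the picture.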

The problem of determining whether names match is an algorithmic one. A simple and obvious algorithm calls for a perfect character-by-character match between names. But names in a database may be misspelled (e.g., due to typographical errors), and thus an algorithm that is relatively insensitive to such errors may be of more utility in determining a match. Names can also be pronounced the same way but spelled differently, and vice versa. One class of algorithms developed to handle such problems is Soundex algorithms.5 These algorithms are widely used today for name matching, including comparisons of voter registration databases with other databases. (A simplified sketch appears below.)

It is useful to distinguish between a “strong match” and a “weak match.” A strong match is one in which there is a very high probability that two data segments represent the same person. A weak match indicates that two data segments are similar, but additional information or research is necessary to determine if the two data segments represent the same person. In addition, there can be many legal ways to identify a citizen who is eligible to vote, which suggests that information in multiple databases can be used to determine eligibility.

Whatever the approach, it is important to recognize a trade-off between false negatives and false positives. Any approach will identify some names as different when they do refer to the same individual (false negative) and other names as similar when they do not refer to the same individual (false positive). Consider the significance of this problem for purging of a voter registration list. Any approach will incorrectly identify some registered voters as ineligible and thus improperly purge them (false positive) and will also fail to identify some ineligible voters, who thus remain on the list (false negative). For example, John Jones on the voter registration list and Jahn Jones on the convicted felon list may constitute a weak match, and without additional research, John Jones may be improperly removed from the voter registration list (a false positive). On the other hand, the names Sam Smith on the voter registration list and Sam X. Smith on the convicted felon list (with both names referring to the same person) may result in Sam Smith improperly remaining on the voter registration list (a false negative).

5. Soundex algorithms solve the generic problem of matching names that sound alike but have different representations in text form (e.g., Smith and Smithe). A Soundex algorithm generates a string of characters that approximately represents a word’s phonetic sound, so that words that sound alike, even if spelled differently, all result in the same character string when run through the algorithm. The original Soundex algorithm was patented in 1918, and there have been refinements to it over the years, resulting in a class of such algorithms.
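The following is a simplified Python sketch of the classic Soundex coding described in note 5. It is illustrative only; refined variants used in practice add further rules. Its point is simply to show how phonetically similar spellings collapse to the same code.

    # Simplified Soundex sketch (illustrative; production variants differ in details).
    SOUNDEX_DIGITS = {
        **dict.fromkeys("BFPV", "1"),
        **dict.fromkeys("CGJKQSXZ", "2"),
        **dict.fromkeys("DT", "3"),
        "L": "4",
        **dict.fromkeys("MN", "5"),
        "R": "6",
    }

    def soundex(name: str) -> str:
        """Return a four-character code: the first letter plus up to three digits."""
        letters = [c for c in name.upper() if c.isalpha()]
        if not letters:
            return ""
        code = [letters[0]]
        prev_digit = SOUNDEX_DIGITS.get(letters[0], "")
        for c in letters[1:]:
            digit = SOUNDEX_DIGITS.get(c, "")   # vowels and H, W, Y contribute no digit
            if digit and digit != prev_digit:   # collapse adjacent repeats
                code.append(digit)
            prev_digit = digit
        return "".join(code)[:4].ljust(4, "0")

    # Differently spelled but similar-sounding names yield the same code:
    print(soundex("Smith"), soundex("Smithe"), soundex("Smyth"))   # S530 S530 S530
    print(soundex("John"), soundex("Jahn"))                        # J500 J500

A registration system might treat an exact string match as a strong match and a Soundex-only match (such as John and Jahn above) as a weak match to be resolved by a human reviewer.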

It is a fundamental reality that the rate of false positives and the rate of false negatives cannot be driven to zero simultaneously. The more demanding the criteria for a match, the fewer matches will be made. Conversely, the less demanding the match, the more matches will be made. For example, a requirement that names match (using all of the letters), addresses match, and dates of birth match is more demanding and will result in fewer matches than if the requirement is that only names and addresses match and only some of the letters and/or sounds in the name are used to determine a match. The choice of criteria for determining similarity is thus an important policy decision, even though it looks like a purely technical decision.

Furthermore, the considerations discussed above suggest that the presence or absence of human intervention in the purging process is important. That is, one should regard as very different a purging system that is fully automated and one that uses technology only to flag possible individuals for further attention by some responsible human decision maker. Because the human decision maker would use different criteria to render a decision (including the use of common sense and contextual factors), the rate of false positives would be reduced—and considerably so if the different criteria could be applied consistently.

In addition, the use of lists of inactive voters can provide some protection against false positives. A purge removes a voter from the voter registration list entirely, and thus this voter would either be denied the ability to vote or might be allowed to cast a provisional ballot. But if a voter who might otherwise have been purged is moved instead to an inactive voter list, the voter still remains on the rolls—and may vote in a subsequent election.

Finally, the purging of voter registration lists must itself be seen in a larger context, as such purging can be used as a political tool to manipulate the outcome of elections. One such use is to purge in local election jurisdictions chosen so that a purge would have differential effects on various voting blocs. Statewide management of voter registration lists reduces the possibility that decisions to purge are made locally, but there may be nothing in state law that in principle or in practice prevents state officials from ordering such purges for political reasons.

The issue above is important because there must be some criterion by which to determine if a purging is undertaken overaggressively or underaggressively. An overaggressive purge purges individuals who should be retained on the rolls. An underaggressive purge does not purge individuals who should not be retained on the rolls. Either type of purge can be undertaken for political reasons, depending on the demographics of those inappropriately retained on or purged from the rolls.

One approach to understanding the nature of a purge is to compare the rate at which eligible voters are inappropriately purged (E) with the rate at which ineligible voters are not purged (I). That is, define R as the ratio of I to E. Thus, R reflects the number of ineligible voters who are not purged for every eligible voter who is purged.

Those who put a very high premium on eligible voters not being purged want E to be as low as possible, and thus tend to favor large R. Those who put a very high premium on purging the voter rolls of all ineligible voters want I to be as small as possible, and thus tend to favor small R. In any event, given a certain fraction of ineligible voters in the voter registration database, the choice of R determines a great deal about the performance requirements of the purging process. As Box 4.1 illustrates, the choice of R fixes the relative effectiveness of the purging process in identifying eligible voters for retention compared with not identifying ineligible voters for purging.

Note also that Election Day credential checking involves a similar set of considerations. A citizen presents his or her credentials at the polling place, and these credentials are checked against a listing of eligible voters. Again, the issue of similarity is relevant. If the eligibility credential is an excerpt from the voter registration database (e.g., a voter registration card), the possibilities for error are minimized. But if, instead, the requirement is to prove one’s identity with some other set of credentials, such as a driver’s license, a judgment of similarity must again be made. However, this time the criteria—which may or may not be the same as those used for purging voter registration lists—work in the opposite direction. A demanding similarity criterion will tend to exclude eligible voters, while a less demanding criterion will allow more ineligible individuals to vote (or at least result in more confusion between different individuals).

Against the discussion above, a number of important questions arise:

4-1. Are the relative priorities of election officials in the purging of voter registration databases acceptable? As noted above, purging databases can be conducted in an overaggressive manner or in an underaggressive manner. The politically correct response for public consumption is that it is equally important to purge the registration rolls of ineligible voters and to ensure that no eligible voters are purged, but of course in practice officials must choose the side on which they would prefer to err. An explicit statement of R—the number of ineligible voters who are not purged for every eligible voter who is purged—is thus a quantitative measure of the direction in which a given policy is leaning. (Of course, being able to make an estimate of R requires that data be collected that indicate the probability that an eligible voter on the voter registration rolls is wrongly purged, the probability that an ineligible voter on the voter registration rolls fails to be purged, and the fraction of the voter registration rolls that actually consists of ineligible voters.)

Box 4.1
False Positives and False Negatives

Let Pfp = the probability that an eligible voter on the voter registration (VR) rolls is wrongly purged.
Let Pfn = the probability that an ineligible voter on the VR rolls fails to be purged.
Let f = the fraction of the VR rolls that actually consists of ineligible voters.

Each cell entry in the table below indicates the probability of the action taken given the status of an individual on the VR roll. In the ideal case (a perfect algorithm), the likelihood of purging an eligible individual is zero, as is the likelihood of not purging an ineligible individual.

                      Status of Person on VR Roll
    Action Taken      Eligible         Ineligible
    Not purged        1                0
    Purged            0                1

In the more realistic case, with nonzero Pfp and Pfn, the probabilities are as follows:

                      Status of Person on VR Roll
    Action Taken      Eligible         Ineligible
    Not purged        1 − Pfp          Pfn
    Purged            Pfp              1 − Pfn

By definition, f is the fraction of the database of size N that consists of ineligible individuals. Based on the tables above, the cell entries below indicate the number of people who are eligible (ineligible) who are subsequently purged or not purged.

                      Number of Individuals on Roll Who Are
    Action Taken      Eligible               Ineligible
    Not purged        (1 − Pfp)(1 − f)N      Pfn fN
    Purged            Pfp(1 − f)N            (1 − Pfn)fN

If we define R as the number of ineligible voters who are not purged for every eligible voter who is purged,

    R = Pfn fN / [Pfp(1 − f)N],

then

    R = (Pfn f) / [Pfp(1 − f)].
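As a quick numerical illustration of Box 4.1 (with made-up parameter values, not data from any actual purge), the short Python sketch below computes the expected counts in the last table and the resulting value of R.

    # Illustrative only: expected purge outcomes under the Box 4.1 model.
    # The parameter values below are hypothetical.

    def purge_outcomes(p_fp: float, p_fn: float, f: float, n: int):
        """Return expected counts (eligible purged, eligible kept,
        ineligible purged, ineligible kept) and the ratio R."""
        eligible = (1 - f) * n
        ineligible = f * n
        eligible_purged = p_fp * eligible          # false positives
        eligible_kept = (1 - p_fp) * eligible
        ineligible_kept = p_fn * ineligible        # false negatives
        ineligible_purged = (1 - p_fn) * ineligible
        r = ineligible_kept / eligible_purged      # R = (Pfn * f) / (Pfp * (1 - f))
        return eligible_purged, eligible_kept, ineligible_purged, ineligible_kept, r

    # Example: 1,000,000 records, 2 percent actually ineligible, a 0.5 percent chance
    # of wrongly purging an eligible voter, a 10 percent chance of missing an ineligible one.
    ep, ek, ip, ik, r = purge_outcomes(p_fp=0.005, p_fn=0.10, f=0.02, n=1_000_000)
    print(f"eligible voters wrongly purged: {ep:,.0f}")   # 4,900
    print(f"ineligible voters not purged:   {ik:,.0f}")   # 2,000
    print(f"R = {r:.2f}")                                 # about 0.41

In this hypothetical case roughly two and a half eligible voters are wrongly purged for every ineligible voter who escapes the purge (R is well below 1), which is exactly the kind of trade-off that question 4-1 above asks officials to make explicit.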

4-2. What standards of accuracy should govern voter registration databases? In voting machines, a Federal Voting Systems Standard specifies a maximum error rate of 1 in 500,000 voting positions (e.g., 1 in every 2,000 punch card ballots with 250 voting positions on each card). What might be a comparable standard for the accuracy of a voter registration database, taking into account that people move frequently and die eventually?

4-3. How well do voter registration databases perform? How many people who think they are registered really are registered? How many people who are registered should be registered? The first question requires a general population survey that is linked to registration records (the American National Election Studies did this for many years). The second question requires a sample from the registration list followed up with diligent efforts to contact the people and the collection of information about them.

4-4. What is the impact on voter registration database maintenance of inaccuracies in secondary databases? The quality of databases other than those for voter registration affects maintenance of voter registration databases. In general, databases such as those of departments of motor vehicles (DMVs), departments of correction, and departments of vital statistics are not under the control of the state election officials. (Vital statistics are usually under the control of a county or municipality.) For example, if a DMV database is highly inaccurate in its recording of addresses, and a decision on voter eligibility depends on a match between the address on the voter registration database and that of the DMV, the probability of purging an eligible voter increases, all else being equal.

A related point is the fact that database interoperability is in general a nontrivial technical task. The secondary databases needed for verification of voter registration are developed for entirely different purposes, and both the syntax and semantics of those databases are likely to be different from those of the voter registration databases. Finally, these secondary databases are subject to state legislative control as well, and there is a wide range of options for how legislatures can affect their disposition and use in the voter registration process. For example, states could explicitly disclose these sources, so that a voter could be especially careful to ensure that he or she is not being misrepresented in such databases. States could mandate that secondary databases be managed with a higher level of care when they are used for purposes related to voter registration. Or states could mandate that, in the interests of protecting voter privacy, only certain types of data in these secondary databases would be available to the voter registration process. More generally, refining criteria for the various legal reasons for purges has been and will be on the agenda of many legislatures, and discretion based in local election jurisdictions about how to conduct purges will probably be subject to increased scrutiny.

4-5. Will individuals purged from voter registration lists be notified in enough time so that they can correct any errors made, and will they be provided with an easy and convenient process for correcting mistakes or making appeals? From the discussion above, it is clear that some number of eligible voters will be inappropriately purged in any large-scale operation. Given that the right to vote is a precious one, voters who may have been purged incorrectly should have the opportunity to correct such mistakes before they cast their votes.6

4-6. How can the public have confidence that software applications for voter registration are functioning appropriately? As the discussion in Section 4.2.1 indicates, software for voting systems is subject to a variety of certification and testing requirements that are intended to attest to its quality. But there are no such standards or requirements for software associated with voter registration. Voters who lack confidence in the operation of voter registration systems will be uncertain about their ability to vote on Election Day. Large numbers of such voters will almost surely result in reduced turnouts.

4-7. How are privacy issues handled in a voter registration database? In many states, much of the information in a voter registration database is public information. HAVA directs states to coordinate those databases with drivers’ license databases of state DMVs and with the U.S. Social Security Administration. States may choose to coordinate with other databases as well, such as databases containing identification information for felons and death records. Much of the information in these other databases is not relevant to one’s eligibility. For example, one’s driving record is contained in a database of licensed drivers maintained by the state DMV. This database may be used to verify names and addresses for voter registration purposes (checking consistency, for example), but one’s driving record is not relevant for determination of voting eligibility. How do state laws, regulations, or guidelines limit the fields that constitute public information or the extent to which the interfacing agencies are permitted to retain personal data received from the other agencies during the matching process required for voter registration? How, if at all, is such nonrelevant information protected from inappropriate disclosure?

6. Provisional balloting is a method required by HAVA that enables provisional ballots to be cast, subject to subsequent validation of a voter’s credentials. Though in principle such an approach solves the problem of an improperly purged voter, there are two potential problems with it. First, for all practical purposes, a provisional ballot has the same privacy protections as an absentee ballot—which are necessarily of a lesser degree than the privacy protections available in the voting booth on Election Day. Second, provisional ballots are inherently suspect in a way that votes cast in a voting booth are not, and the voter casting a provisional ballot will leave the polling place without any assurance that the ballot will indeed be counted.

How might such nonrelevant information be used to bias voter turnout for partisan purposes? (Indeed, much of the information contained in these databases is for sale by the states, and the purchasers of such information are often political parties.)

4-8. How can technology be used to mitigate negative aspects of a voter’s experience on Election Day? For example, in many large jurisdictions, check-in lines at polling places can be both long and uneven. One frequently heard reason for this phenomenon is that any given poll worker checking registration can only check certain last names (e.g., all those names starting with letters A through G). This is true because the roll books containing lists of registered voters are broken up that way, and the poll workers have no flexibility on this point. However, information technology might be used to provide the same information to poll workers without the need for such a procedure.7

4-9. How should voter registration systems connect to electronic voting systems, if at all? Today, there is an “air gap” between voting, even if done electronically, and checking for voter registration, which is done manually. However, in the interests of efficiency and rapid movement through polling places, it is easy to see a persuasive argument for why these functions should be integrated. A voter could simply present an electronic registration card to a voting station and be allowed to cast a ballot. This arrangement might facilitate easy, vote-anywhere voting in thousands of locations across a state rather than in just one precinct location, as well as early voting, in which a voter could vote at a central site. In both situations, a voter could have high assurance that he or she received the correct ballot form corresponding to his or her registration address. The most obvious argument against this arrangement is that it potentially compromises the secrecy of voting in a major way. Nevertheless, it is easy to imagine that both voter registration and voting might be integrated in packages of services offered by election service vendors.

4.2 INFORMATION TECHNOLOGY FOR VOTING

IT for balloting is what is usually meant by “electronic voting systems”—the systems described in Chapter 3. This section addresses security and usability issues. Usability can be characterized as functionality that facilitates a voting system’s accurate capture of a voter’s intent in casting a ballot and assures the voter that his or her ballot has been so captured.

7. This is not to say that the use of information technology for this purpose has no downsides. For example, it may be more difficult to capture a signature if one is required.

Furthermore, the voting system must record that ballot accurately until it is tabulated, even in the face of deliberate wrongdoing (security) or accidental error or mishap (reliability).

4.2.1 Approaching the Acquisition Process

In considering the purchase of any given voting system, an election official’s first step is often to consider systems that have been qualified under a process established by the Election Assistance Commission (EAC). Specifically, a vendor’s voting system is qualified if an Independent Testing Authority (ITA) asserts that the system in question meets or exceeds the Federal Election Commission’s 2002 Voting Systems Standards (Box 4.2).8 ITAs are designated by the National Association of State Election Directors, and a vendor pays an ITA for its work in qualifying a system.

Knowledge that a given voting system has been qualified according to a particular standard provides some degree of assurance that the system in question meets a minimum set of requirements. Nevertheless, the fact that a given voting system has been qualified may not be the only criterion that affects a decision maker’s procurement decision.9 This is because voting systems fit into a larger context that cannot be separated from an assessment of fitness for purpose. The election official is responsible for the conduct of an election with integrity, and the equipment used in the election is only one part of that election. Yet the qualification process evaluates voting systems in isolation, making just such a separation. This is not the fault of the qualification process—it is simply a consequence of the fact that any testing process must necessarily set bounds on the scope of the evaluation.

Of particular significance is the fact that various jurisdictions have long-established policies, procedures, and practices that govern the conduct of elections. Introduction of new technology into established practices almost always results in some degree of conflict and difficulty, even when the authorities seek to adjust existing practices to accommodate the new technology. Technology may work properly only if certain pro-

8. The Federal Election Commission’s 2002 Voting Systems Standards call for three kinds of tests to be performed on voting systems to ensure that the end product works accurately, reliably, and appropriately: qualification testing (the focus of this section), certification tests performed by states in order to document conformance to state law and practice, and acceptance tests performed by the jurisdiction acquiring the system to document conformance of the delivered system to characteristics specified in the procurement documentation as well as those demonstrated in the qualification and certification tests.
9. In practice, qualification may only be a prerequisite for a vendor to be considered for purchase. That is, a county may be interested in “all qualified systems”; thus, the fact of qualification may have no relationship to a specific purchase decision.

sponse devices (buttons, levers, sensitive areas of a touch screen, force levels and accuracy thresholds of response motion, etc.). It also depends on evident correspondence (in location, direction of response motion, sequential order, label wording, etc.) of the appropriate response to the stimulus (e.g., name of the candidate). This is what human factors professionals call stimulus-response (or display-control) compatibility. It is the criterion that the infamous butterfly ballot flouted.

5. Error types, causation, and remediation. Human errors can be classified in different ways, and such classification is a step toward understanding their causes and preventions. Errors can be omissions (correct action not taken) or commissions (actions taken that ought not to have been taken). Errors can be slips (intended action not taken) or mistakes (intended action taken but turning out to be inappropriate). Errors can occur at any of the stages of sensing, remembering, deciding, or responding. Human errors often result when people do not receive sufficient feedback in a timely and understandable way. In daily living, people constantly get such feedback from their physical and social surroundings. Other common error causes are inappropriate mental models of how something works, forgetting, distraction, incorrect expectations (e.g., performing a task in a habituated way when present circumstances call for a deviation from the norm), lack of sufficient stimulus energy, or mental or bodily incapacity.

The best way to prevent error is to design the machine or process to be easy (simple, obvious) to use, and this includes good feedback, even in redundant ways. Education and training are next most important, but the best designs also minimize necessary training. Computer-based decision aids and in situ guidance, alarms, and prevention of exposure to the opportunity to err (the computer will not recognize certain commands under some circumstances) are other techniques used. Posted warnings have proven to be the least effective means of preventing errors. A well-designed system with adequate feedback will allow the user to commit an error, observe the error, decide what to do about it, and gracefully recover from it.

6. Training. What is obvious to the designer of any machine or process is often not so obvious to the user. Any experience that differs from what one is accustomed to is likely to trigger some confusion. Therefore, at least a modicum of training will be essential for electronic voting. Some training can be accomplished by a well-designed brochure made available either prior to or at the site of voting. It can be augmented by poll workers explaining features of the machine or process that may be confusing.

A more sophisticated approach used in some computer-based systems is to embed the training—that is, have the voter go through a few steps of observation and response to displayed dummy candidates to ensure that the voter understands the system. Training is also important for poll workers, who are often senior citizens less familiar with and more anxious about using computers than the majority of the voter population.

7. Interaction with automation. Human interaction with computer-based machines that may be said to embody at least rudimentary intelligence poses special problems. These may occur for poll workers or technicians employed to set up the machines, make sure they are working properly, understand indications of machine failure (and curtail their use if necessary), and transfer voting data from them to other repositories. It is common that the user attributes more intelligence to a computer than it has. It is also common that a mode error is committed—namely, the user assumes that the machine is set in one mode and takes actions appropriate to that mode, when in fact it has been set to another mode and the action produces an undesirable result.

8. Experimentation and simulation. Experimentation and simulation are essential to system design, setup, voter and poll worker training, and evaluation of voter confidence and system effectiveness. Dealing with human subjects is a special art. Because of the special challenges of dealing with the great diversity of voters and poll workers with respect to education, technological sophistication, and physical and mental limitations, great importance must be attached to well-designed simulation trials, with voter subjects drawn from a representative population. Experimental designs must include a sufficient sample size and proper allocation of subjects to experimental runs to minimize bias in resulting data. Only then can designers of machines and training regimens feel confident, and only then can conclusions about system effectiveness and voter confidence be made.

Voting systems pose a particularly difficult usability challenge. They must be highly usable by the broad public.34

34. Voter registration database systems are another example of an election-related information technology, and as such, user interface issues are important to their users as well. But the population of intended users for these systems—those involved with election administration—is very different from the general adult population at large (that is, those who are part of the population of potential voters). As one example, election officials are likely to interact with a voter registration database system frequently, whereas voters are likely to interact with a voting system only rarely.

As Hochheiser et al. point out, a citizen in the voting booth facing an electronic voting system may not feel comfortable with information technology, may not be literate (in terms of everyday reading and writing and/or with respect to using a computer), may not be an English speaker, and may have physical, perceptual, or cognitive disabilities that interfere with understanding the ballot, interacting with the system, and casting a vote. This citizen is probably alone in the booth and may not be able to, or may be socially inhibited from, asking for help. Finally, most citizens vote no more than once or twice a year and thus have little opportunity to develop experience or familiarity with the system. Box 4.7 addresses some of the issues that might be examined in a usability assessment.

Box 4.7
Usability Issues That an Independent Assessment Might Examine

Are voting station controls clearly labeled? Are fonts readable? Is consistent language used throughout the interface?
Can users easily change votes once selected?
Are write-in votes easy to cast, with clearly labeled choices?
Are controls laid out so as to minimize the likelihood of accidental completion of a ballot?
Have user interfaces been designed for use by and tested by a wide range of users of varying levels of expertise, education, and literacy?
Have user interfaces been designed for use by and tested by voters with various disabilities, including (but not limited to) poor vision/blindness, motor impairments, and cognitive difficulties?
Has the testing been conducted in environments that approximate the stresses and distractions of real polling places?
Does the system provide adequate feedback that the vote intended was indeed captured?

SOURCE: Harry Hochheiser, Ben Bederson, Jeff Johnson, Clare-Marie Karat, and Jonathan Lazar, The Need for Usability of Electronic Voting Systems: Questions for Voters and Policy Makers, Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI), U.S. Public Policy Committee, white paper submitted to the committee. Available at http://www7.nationalacademies.org/cstb/project_evoting_acm-sigchi.pdf.

4.2.3.2 Design for Effective Use

The first stage in the life cycle of a voting system is requirements development and design. The top-level requirement is relatively simple: the system must capture the voter’s vote as he or she intended it. However, designing a system to do this under a wide variety of circumstances is a nontrivial task. Questions related to design include the following:

4-26. How does a voter receive feedback after he or she has taken an action to cast a vote? After the voter has pressed a button or touched a screen, a natural question for the voter to ask is, “Did the machine accept my input?” or “How do I know my vote was entered?” While punch card, optical scan, and lever voting systems involve physical artifacts that provide immediate feedback to the voter about the choice or choices that have been made, the workings of electronic voting systems are more opaque from the voter’s standpoint. Indeed, in some electronic voting systems, feedback mechanisms must be explicitly designed in. (In this context, this question is a user interface question rather than a security question. That is, it is assumed that the software is not trying to trick the voter into believing something that is not true.)

Note also that the presence of some feedback does not solve all user interface problems. Useful feedback both informs the user that an action was recorded and indicates which action was accomplished. For example, a click sound and the appearance of an X in a selection box indicate that a selection was made but not necessarily which selection was made. If the box is not clearly located next to the appropriate option, or the option is not highlighted when selected, a user may not know which specific option was selected. In the case of the Florida butterfly ballot of 2000 (a punch card ballot), voters received feedback about having punched a hole in the card. But the ballot nevertheless confused voters about which selections they had actually made. One possibility is that voters did not punch the card fully; a second possibility is that poorly maintained machines made it impossible to punch the card fully. In both cases, the result would have been some ballots with “hanging” and “dimpled” chads—and doubt about the validity of those votes. At the same time, the voter would not know that the ballot cast might not be interpreted as a valid vote. A third possibility is related to ballot design—some number of votes appear to have been inadvertently cast for the wrong candidate because of misalignment of the punch hole locations and the candidate names—and the voter may have cast a vote for someone other than his or her actual choice without knowledge of that error.

4-27. How is an electronic voting system engineered to avoid error or confusion? Both the display and control interfaces of the system and the logic enforced by the system are at issue. For example, a large ballot may need to be presented to the voter on multiple display screens. What feedback does the system provide to the voter about where he or she is in the ballot? What provisions are made to enable the voter to back up, go forward, and jump around the ballot? To retrace his or her steps? To review the entire ballot before submitting it?

As for logic, systems can be designed to block actions that would invalidate a vote or to warn the voter of possible errors in the ballot before the ballot is cast, thus providing an opportunity to correct his or her ballot. For example, a direct recording electronic (DRE) system can prevent a voter from overvoting by forcing the selection of an “excess” choice to result in the deselection of a previously selected choice, or by not allowing new selections beyond a certain number and generating a message that informs the voter of a mistake. In the case of undervoting, a DRE system can warn a voter if a particular contest has been left blank but without forcing him or her to cast a vote in that contest.35 (Both punch card and optical-scan voting systems can warn voters of overvotes if ballots are counted in real time by a precinct-based system.) A minimal sketch of such contest-selection logic appears below.

4-28. What accommodations have been made to address the special concerns and needs of people with disabilities? Citizens with disabilities have a right to a voting experience that is fair and acceptably straightforward—a requirement that is codified in the Help America Vote Act of 2002. Note that these issues are not simply problems of technology. In some instances, assistance from poll workers may be necessary.

4-29. What accommodations have been made to address the needs of non-English speakers, voters with low literacy skills, and citizens from various cultural, ethnic, and racial groups? All citizens have a right to vote regardless of their background, language group, or cultural situation. Electronic voting systems offer the possibility that a ballot can be easily switched to different languages or rendered audible for nonreaders.

4-30. How and to what extent have concerns about the needs of these parties been integrated into the design of the system from the start? A substantial body of experience indicates that attention to such concerns is much more effective at the start of the design process than at the end, at which point other decisions have been made that eliminate options that might otherwise have been desirable. (For example, a “screen reader” that tries to render a written ballot into words is often not as successful as a ballot that is designed from the beginning to include auditory interaction.)

35. Error checking can also create voter dissatisfaction. For example, some voters have become accustomed to nonelectronic systems that do not perform error checking. If they violate the ballot logic (e.g., an overvote), their votes do not count, but they have no way of knowing this fact if the votes are tabulated remotely. When faced with an electronic voting system that does perform error checking, the voter may react negatively because it is preventing him or her from voting in the accustomed manner.
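The following short Python sketch illustrates the kind of contest-level logic described in question 4-27: it blocks overvotes by never accepting more selections than allowed (here, an excess choice replaces the oldest one, one of the two behaviors mentioned above) and flags, without preventing, an undervote. It is an illustration of the idea only, not code from any actual DRE system, and the candidate names are made up.

    # Illustrative sketch of DRE-style contest logic: prevent overvotes, warn
    # (but do not block) on undervotes. Not taken from any actual voting system.

    class Contest:
        def __init__(self, name, choices, max_selections=1):
            self.name = name
            self.choices = choices
            self.max_selections = max_selections
            self.selected = []                     # in order of selection

        def select(self, choice):
            """Toggle a choice; never allow more than max_selections."""
            if choice not in self.choices:
                return f"'{choice}' is not on the ballot for {self.name}."
            if choice in self.selected:
                self.selected.remove(choice)       # touching a choice again deselects it
                return f"Deselected {choice}."
            if len(self.selected) >= self.max_selections:
                dropped = self.selected.pop(0)     # excess choice replaces the oldest one
                self.selected.append(choice)
                return f"Replaced {dropped} with {choice} (only {self.max_selections} allowed)."
            self.selected.append(choice)
            return f"Selected {choice}."

        def undervote_warning(self):
            """Return a warning if the contest is underfilled; the voter may still proceed."""
            if len(self.selected) < self.max_selections:
                return f"Warning: {self.name} has {len(self.selected)} of {self.max_selections} selections."
            return None

    president = Contest("President", ["Adams", "Baker", "Clark"])
    print(president.select("Adams"))       # Selected Adams.
    print(president.select("Baker"))       # Replaced Adams with Baker (only 1 allowed).
    print(president.undervote_warning())   # None, because the contest is now filled

A real system would couple logic of this kind to the review screen and to an undervote confirmation prompt, but the core design choice is the same: the data structure itself makes an overvote impossible to record.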

4-31. What are the ballot definition capabilities offered to jurisdictions? Ballot definition is the process through which the ballot presented to the voter is laid out. It involves aspects such as font size, graphics, placement and formatting of items, translation into other languages, and so on. Ballot definition issues were responsible for the problems with Florida’s butterfly ballots in the 2000 presidential election. In practice, a voter’s experience is determined by some mixture of the system’s devices for entering input and the appearance of the ballot to the voter.

Voting systems must be usable with a wide variety of ballots. That is, a vendor may wish to sell systems to multiple jurisdictions, each of which has different ballot requirements. Even within the same jurisdiction, a number of different ballots may be involved. Ballot design directly affects the ability of voters to understand the issues, recall their decisions, and actually carry out their intentions, and a given technology affects which ballot designs can be implemented. For example, voting systems based on touch-screen technology may be subject to frequent interface modifications that create a difficulty for election officials and voters but also make possible rapid prototyping for ballots and responsive redesign for error correction.

Vendors have the responsibility of enabling jurisdictions to define ballots. The specific ballot definition capabilities provided to the jurisdiction are of considerable importance, because they can increase or decrease the likelihood of confusing, misleading, or even illegal ballots. (For example, a vendor might provide user-tested and validated templates for jurisdictions to use as a point of departure. Or vendors could provide local election jurisdictions with ballot definition toolkits that enforce usability principles as well as local laws and regulations, to the extent feasible.)

4-32. How is provisional balloting managed? Of course, election officials have the option of insisting that a provisional ballot be processed entirely offline. But a vendor may offer such capabilities online. Online provisional balloting raises a number of issues:

Segregation of provisional ballots from ordinary ballots. Since a provisional ballot counts only if it is determined later to be cast by a person eligible to cast it, it must be separated from ordinary ballots.

Maintenance of voter secrecy. Given that the provisional ballot must be connected in some way to voter-identifying information (so that the voter’s status can be later ascertained), the potential for secrecy violation is manifestly obvious. What mechanisms are available to ensure that voter secrecy rights are respected?

Ballot selection. More advanced electronic voting systems may seek to support vote-anywhere voting, in which a voter can present himself or herself at any precinct in the state, identify his or her home jurisdiction, and expect the correct ballot to appear on the screen at his or her voting station. How will this capability be managed?
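As a concrete illustration of the ballot selection issue just raised, the Python sketch below shows one simple way a vote-anywhere system might map a voter's registered home precinct to the correct ballot style. The precinct identifiers, ballot-style names, and record fields are hypothetical; a real system would also have to handle split precincts, multiple languages, and synchronization with the statewide registration list.

    # Hypothetical sketch: choosing the correct ballot style for a vote-anywhere
    # voter based on his or her registered home precinct. Identifiers are made up.

    BALLOT_STYLES = {
        "precinct-0101": "style-A",   # e.g., council district 1, school district 7
        "precinct-0102": "style-B",   # e.g., council district 1, school district 8
        "precinct-0201": "style-C",   # e.g., council district 2, school district 8
    }

    def ballot_style_for(voter_record):
        """Look up the ballot style for the voter's home precinct, regardless of
        which polling place the voter happens to be standing in."""
        precinct = voter_record["home_precinct"]
        if precinct not in BALLOT_STYLES:
            # An unknown precinct should trigger human intervention, not a guess.
            raise ValueError(f"No ballot style on file for {precinct}")
        return BALLOT_STYLES[precinct]

    voter = {"name": "Pat Doe", "home_precinct": "precinct-0201"}
    print(ballot_style_for(voter))    # style-C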

4.2.3.3 Usability Testing

Usability testing is done through simulations and experiments as described above. In addition to the response time and error data derived from experiments, it is useful to get subjective data, either from questionnaires or from focus groups or both. But a primary lesson from human factors engineering is that the number of different ways machines can confuse people is far larger than one can imagine from even the most careful on-paper analysis. While experienced designers and careful on-paper analyses are important elements of human factors engineering, repeated cycles of realistic and intensive testing with a broad range of users, followed by reengineering to reduce the likelihood of errors, are absolutely essential to the process. A broad range would include people with a diversity of education, socioeconomic backgrounds, technical experience, literacy, and physical, perceptual, language, and cognitive abilities. Realistic testing includes environmental conditions that approximate those found in the polling place, including attendant chaos, noise, and time pressure.

To illustrate the kinds of unusual and not-easy-to-anticipate problems that occur in operational use, consider that a voter may need to switch the language of presentation in mid-stream. Quoting from the field notes of a member of the committee who was observing:

    [In observations of early voting for the 2004 General Election in Los Angeles County,] a young, female Asian voter was observed in a Monterey Park early voting location (Monterey Park City Hall, Community Room), on October 29, 2004, at approximately 12:30 pm (the final day of early voting in Los Angeles County for that election). This young woman asked one of the polling place workers for assistance using the voting machine, and she clearly began to have some difficulties with her ballot. Eventually, she requested assistance again, which involved two polling place workers, as she wished to change the language that the ballot was presented in from Chinese to English, in the middle of casting her ballot. Eventually, the polling place workers managed to switch her ballot from Chinese to English on the electronic voting device. This voter was timed as taking almost 24 minutes to vote, from start to finish; other voters at this same location were observed typically taking from about 5 to 7 minutes to vote using the same electronic voting machines.

It is thus reasonable to ask about the nature of usability testing and the range of users involved in such testing.

4-33. What is the range of the subjects used in testing usability? As a general rule, the broader the spread of demographic and socioeconomic characteristics of the test population, the greater the likelihood that potential operational problems will be identified in advance.

4-34. What is the error rate in capturing votes of any given system? How is that error rate determined? A commonly used and well-accepted aggregate metric for this error rate is the residual vote, defined as the sum of overvotes and top-of-ticket undervotes (in which the voter indicates no choice for the most important contest on the ballot, and thus the ballot does not count as a vote). Overvotes are clearly errors, whereas undervotes are entirely legal and may reflect a voter’s preference to refrain from voting in a particular contest. Nevertheless, because the top-of-ticket contest (e.g., the contest for president of the United States) is the most important contest, it is assumed that an undervote for that contest reflects an error on the part of the voter.36 (A minimal computation of this metric is sketched below.) Note that because the voter’s experience is determined by a combination of the voting system, the particular ballot layout, and the particular environment (e.g., ambient noise, lighting, time pressure), a realistic estimate of error rate is obtainable only by undertaking the measurement under circumstances that are very close to those that would prevail on Election Day.

4-35. What are the submetrics of usability that are applied to evaluate and compare systems? Usability is in general a multidimensional issue, and different voting jurisdictions may place different weights on the various dimensions of usability. For example, a rural jurisdiction serving a voter population that almost exclusively speaks English may well place lesser weight on usability metrics that relate to ballot presentation in languages other than English than would an urban jurisdiction serving a large number of language minorities. Residual vote is a useful aggregate measure of usability, but making specific usability improvements in a voting system requires a more detailed understanding of why voters overvote and undervote. Moreover, residual vote is a conservative measure of error, in that it does not capture voters who vote for a candidate other than the one they intended.

36. To illustrate the use of residual vote as a metric for comparing the performance of different voting technologies, Henry Brady used residual vote to compare the performance of punch cards in 1996 to that of optical scanning in 2002 in Fresno County in California. He found that the residual vote dropped by a factor of about 4 as the result of changing voting technologies. See Henry Brady, Detailed Analysis of Punch card Performance in the Twenty Largest California Counties in 1996, 2000, and 2003, available at http://ucdata.berkeley.edu:7101/new_web/recall/20031996.pdf.
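To make the residual-vote metric in question 4-34 concrete, here is a minimal Python sketch that computes it from per-ballot counts of selections in the top-of-ticket contest. The input format is an assumption made purely for illustration.

    # Illustrative sketch: residual vote = overvotes + top-of-ticket undervotes,
    # expressed as a fraction of ballots cast. The input format is hypothetical.

    def residual_vote_rate(top_of_ticket_marks, allowed=1):
        """Each entry is the number of selections recorded in the top-of-ticket
        contest (e.g., president) on one ballot."""
        if not top_of_ticket_marks:
            return 0.0
        overvotes = sum(1 for marks in top_of_ticket_marks if marks > allowed)
        undervotes = sum(1 for marks in top_of_ticket_marks if marks == 0)
        return (overvotes + undervotes) / len(top_of_ticket_marks)

    # Ten ballots: one overvote (2 marks) and one top-of-ticket undervote (0 marks).
    marks = [1, 1, 1, 2, 1, 0, 1, 1, 1, 1]
    print(f"residual vote rate: {residual_vote_rate(marks):.1%}")   # 20.0%

Comparing this rate across voting systems or ballot layouts, as in the Brady comparison cited in note 36, is how the residual vote is typically used in practice.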

4-36. To what extent, if any, do problems with usability systematically affect one political party or another or one type of candidate or another? Usability problems that have a greater effect on a certain demographic group, for example, may work to the disadvantage of a particular party.

4-37. How is feedback from actual usage incorporated into upgrades to currently deployed systems? The ultimate in operational testing is experience during Election Day, when voting systems get their maximal workout. Because it is virtually certain that some users will be confused and make errors with any deployed system, it is desirable to have some method for systematically capturing anomalous voter experiences and using information about such anomalies as a point of departure for future upgrades. Vendors and election officials should therefore go out of their way to seek information about voter problems with a given system rather than to ignore or, worse still, suppress such reports.

4-38. How does usability testing incorporate the possibility that different jurisdictions may create ballots that are very different from one another? Because the voter’s experience at a voting station depends both on the underlying technology and on the way the ballot is presented, it is important that usability testing be conducted across a range of different ballots.

4-39. Who should conduct usability testing on specific ballots? Because the ITAs are not in a position to evaluate specific ballots that jurisdictions may use, ITA qualification does not provide assurances about the usability of a given ballot. Indeed, the soonest that a specific Election Day ballot can be made available is after the relevant primaries for that election. Thus, election officials must either conduct usability testing themselves or engage some other party (or parties) to do it. An obvious—though hardly disinterested—choice is the vendor. But there may be other parties available to perform such services on relatively short notice.

4.2.3.4 Education and Training

Voter education is challenging. Because many people vote only once or twice a year, they may well forget how to use the systems they used in previous years. Given the rate at which people change residences, some nontrivial number of voters in any given jurisdiction are likely to be first-time voters there, and because different jurisdictions make their own decisions about which voting systems they will acquire, some people will always be voting on unfamiliar equipment. Some devices for entering input, such as touch screens, can behave idiosyncratically in a way that is dependent on how a particular unit is calibrated. Finally, product upgrades from vendors may change the user interface, which would result in a different “look” and “feel” from election to election. This suggests that education or training will be necessary, at least for some (significant number of) voters.

Voter education materials must be comprehensible to a wide range of people, and so should be written so as not to require high levels of education, be available in multiple languages, have visuals that correspond closely to the systems and ballots in use, provide step-by-step instructions, and be available to nonsighted individuals.

4-40. How long does it take a first-time user to become familiar enough with the system to use it reliably and with confidence? As a rule, this question can only be answered by simulation and direct user testing.

4-41. What kinds of educational materials should be prepared and distributed in advance? Many organizations, both partisan and nonpartisan, provide voter education materials that illustrate how to fill out ballots. While these materials are generally oriented toward the specific choices that voters will make, information about the operation of the voting systems that will be used is likely to be helpful to most voters. Such information can be made available in many ways, notably in print and online. Nonpartisan educational materials in multiple formats (e.g., videocassettes, DVDs, and online or Web-based media) teaching how to operate the units can be made available to voters at the polls prior to actual voting.

4-42. To what extent are practice systems available for use before and on Election Day? While good “paper” instructions would be helpful, actual hands-on experience and familiarity would make a world of difference for the voter in operating a voting station. The availability of a demonstration station, configured identically to the ones that voters will actually use, would allow voters who are uncertain about the mechanics of voting to practice ballot casting in a realistic fashion. Even if demonstrator stations are not available in every polling place, making a few available in convenient locations prior to Election Day would help.

4-43. What voter assistance can the voting station itself provide to users? Nothing in principle prevents the voting system from providing information about the mechanics of casting a ballot. For example, voting systems can prevent overvoting (voting for more than one candidate when only one selection is allowed) by providing an indicator that such a condition has occurred and preventing the user from making the ballot final until the problem is corrected. They can also warn the user if an undervote has occurred—that is, that the voter has not made choices for certain offices or propositions—by asking if the undervote was deliberate. It is also possible to have an online help facility that a confused or uncertain user can invoke.

Context-sensitive help (i.e., help that varies depending on where the user is in the voting process) is generally much more helpful than generic advice that the user must read and comprehend before finding what he or she needs. Note also that in the unfamiliar confines of the voting booth, with lines of other voters waiting, voters may feel pressure to complete their votes as quickly as possible. Such pressure increases the likelihood of errors and may reduce the willingness of some voters to use online help facilities.

4.2.4 Reconciling Security and Usability

For a variety of reasons, election officials often believe that security and usability are necessarily traded off against one another. For example, the tension between overaggressive purging and underaggressive purging of a voter registration list reflects this trade-off: Greater security (and reduction of fraudulent voting) is associated with overaggressive purging, while greater accessibility to the polls is associated with underaggressive purging. Maintaining privacy in the voting booth is a matter of security, while allowing another individual inside the voting booth to assist the voter is a matter of usability. And security by obscurity is fundamentally dependent on a denial of access. These contrasts illustrate a more general point—in the design of any computer system, there are inevitably trade-offs among various system characteristics: better or less costly administration, trustworthiness or security, ease of use, and so on.

Nevertheless, in the design of electronic voting systems, the trade-off between security and usability is not necessarily as stark as many election officials believe. That is, there is no a priori reason a system designed to be highly secure against fraud cannot also be highly usable and friendly to a voter. The reason is that the security and usability requirements are directed at different targets. The biggest threat to security per se is likely to come from individuals with strong technical skills who are working behind the scenes to subvert an election. By contrast, usability is an issue primarily for the voter at the voting station on Election Day. Because these populations are qualitatively different, efforts to mitigate security problems and efforts to mitigate usability problems can proceed for a long time on independent tracks, even if they may collide at some point after attempts at better design or better engineering have been exhausted.

This point also has implications for the testing and certification process. Specifically, because security and usability are in large measure not attributes that must be traded off against each other, different skill sets are necessary for a competent evaluation of security and usability. Thus, it cannot be assumed that experts in one area are necessarily competent to evaluate issues in the other.