What price convenience? Another confirmation that the Holy Grail of voting is not found in conventional wisdom

When you vote in November, consider: What price convenience? What cost convenience? What individual effort is Democracy worth?

To listen to elected officials and many activists, the Holy Grail of Elections would seem to be Turnout. Given the emphasis, you would think that almost nothing else matters: Integrity, candidate access, campaign finance, media bias, or costs – when focusing on turnout, it seems everything else is forgotten. A report from Ohio confirms earlier studies showing that early voting does not increase turnout.

We posted a news item from Ohio earlier in the week, from the Columbus Dispatch: Early voting hasn’t boosted Ohio turnout <read>

Early voting has not led to more voting in Ohio, at least not in terms of total votes cast.
A Dispatch analysis of the vote totals from the past three presidential elections in the state shows that overall turnout in the 2012 race, when Ohioans arguably had the most opportunities in state history to vote early, was lower than in the 2004 election, when there was virtually no early voting in Ohio.
Turnout in 2008, the first presidential race in which Ohioans had no-fault absentee voting and also the first time an African-American was on the ballot, was about 1 percent higher than in 2004.
“People who vote early are people who are typically going to vote anyway,” said Paul Beck, a political science professor at Ohio State University. “So, early voting hasn’t really succeeded in turning out more people to vote. We’ve made it a lot easier to vote, but on the other hand, some people are very discouraged about politics and might not care how easy it is to vote.”

This November, voters in Connecticut will vote on a Constitutional Amendment to let the General Assembly choose early voting methods, if any, for Connecticut. Conventional wisdom is that early voting will significantly increase turnout. Wrong! That ignores the evidence. Proponents will tell us that there is almost no absentee voting fraud. Wrong! That ignores the evidence.

We posted the evidence years ago: Researchers: Early Voting alone DECREASES turnout <read>

States have aggressively expanded the use of early voting, allowing people to submit their ballots before Election Day in person, by mail and in voting centers set up in shopping malls and other public places. More than 30 percent of votes cast in the 2008 presidential race arrived before Election Day itself, double the amount in 2000. In 10 states, more than half of all votes were cast early, with some coming in more than a month before the election. Election Day as we know it is quickly becoming an endangered species…

But a thorough look at the data shows that the opposite is true: early voting depresses turnout by several percentage points…Controlling for all of the other factors thought to shape voter participation, our model showed that the availability of early voting reduced turnout in the typical county by three percentage points…

Even with all of the added convenience and easier opportunities to cast ballots, turnout not only doesn’t increase with early voting, it actually falls. How can this be? The answer lies in the nature of voter registration laws, and the impact of early voting on mobilization efforts conducted by parties and other groups on Election Day.
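To make the kind of analysis quoted above concrete, here is a minimal sketch of a county-level turnout regression with an early-voting indicator. It is purely illustrative: the variable names and synthetic data are ours, not the researchers’, and their actual model controlled for many more factors.

```python
# Illustrative sketch only -- NOT the researchers' actual model or data.
# It shows the general shape of a county-level turnout regression with a
# dummy variable for early-voting availability, using synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n_counties = 1000

# Synthetic control variables (stand-ins for income, education, etc.)
income = rng.normal(50, 10, n_counties)        # median income, $k
education = rng.normal(30, 5, n_counties)      # % college educated
early_voting = rng.integers(0, 2, n_counties)  # 1 if county offers early voting

# Build in a -3 point effect so the recovery step has something to find.
turnout = (40 + 0.2 * income + 0.5 * education
           - 3.0 * early_voting + rng.normal(0, 2, n_counties))

# Ordinary least squares: turnout ~ intercept + income + education + early_voting
X = np.column_stack([np.ones(n_counties), income, education, early_voting])
coef, *_ = np.linalg.lstsq(X, turnout, rcond=None)

print(f"Estimated early-voting effect: {coef[3]:+.1f} percentage points")
# Prints roughly -3.0, mirroring the kind of estimate the study reports.
```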

That was just one well-conducted study. Here in Connecticut, Secretary of the State Denise Merrill created an Election Performance Task Force. Election administration expert Doug Chapin summarized his review of available studies, covered here: Elections Performance Task Force: Technology Fair and Doug Chapin <read>

  • Early voting, no-excuse absentee voting, and voting centers are strong trends. They can provide voter convenience. They can save money or add to costs. Data does not support significant changes in participation.
  • Once you start early voting, taking it away can have an impact, once people are accustomed to it. (As taking away local polling place voting may also have a similar impact)
  • Survey voters to determine their levels of satisfaction and confidence in the process.
  • Do not expect increases in participation based on changes or reforms in election administration. Satisfaction and convenience can be increased but not participation.

Thus the Ohio research tends to confirm the other studies. (We say “tends to confirm” because it is not as thorough a study as the earlier ones: it covers whole statewide elections and is not a thorough comparison between matched districts in states with and without early voting. There are a lot of factors which affect turnout, so just comparing elections in a single state cannot attribute differences to any one factor.)

Plus, we highlight many instances of votING fraud via absentee voting after almost every election, in Connecticut and across the country <here>

Here is the bottom line:

  • Early Voting (unlimited absentee voting or in-person early voting) does not increase turnout. Alone, it decreases turnout.
  • Election Day Registration increases turnout (except perhaps in Connecticut, where we have implemented it in a much less convenient way than in states where it has proven effective).
  • When Early Voting is combined with Election Day Registration (maybe not in Connecticut), turnout is neither harmed nor helped by Early Voting.
  • In-person Early Voting would be expensive or impossible in Connecticut, given our New England-style town-by-town election administration and jurisdictions. It might be done expensively, and in a way biased against some populations.
  • Fraud has been demonstrated in absentee voting. In Connecticut, even with excuse-required absentee voting, it occurs frequently.
  • Early Voting does increase convenience.

When you vote in November, consider: What price convenience? What cost convenience? What individual effort is Democracy worth?

American Voting Experience: A Laudable Report

It seems we have several surprisingly refreshing Government reports in recent weeks, two on reining in NSA spying, and now an excellent report on improving election administration, the election experience, and a contribution to realizing the ideals of our Declaration of Independence and Constitution. The American Voting Experience: Report and Recommendations of the Presidential Commission on Election Administration <read>

It’s long, yet refreshingly readable for the average citizen. The .pdf is 112 pages, yet the introduction and main body constitute a bit less, at 84 pages. I have read it all, and so have many others. Let me start with overall impressions:

  • It is a readable tutorial covering several areas to a moderate level of depth and detail: causes and cures for lines; the variety in voting methods, laws, and practices; the synergy between problems and solutions, especially improving registration systems; the need to soon replace aging systems and hardware; the systemic problems in making better systems available; the value, costs, challenges, and security tradeoffs of expanding early voting; and serving Military and Overseas Voters responsively and responsibly.
  • The report remains true to its promise of staying within issues and solutions that can be agreed upon by members of both major parties.
  • Given its size, I find very, very little to disagree with in the report. Two or three minor quibbles, too minor to mention.
  • There is a lot to be done in the many, many areas covered in the report, many details to be filled in. Therein will lie many issues to debate, with potential for making things better, making things worse, or wasting huge sums with little actual change.
  • Its most detailed and potentially effective recommendation is in better online voter registration systems and cross-checking between states for duplicate/out-of-date registrations, with benefits throughout the system (a toy cross-check sketch follows this list).
  • Its best articulation of problems and opportunities is in the areas of reducing lines, pollworker training, pollworker recruiting, and improving ballot design. It’s all very common sense, but somehow the system got to be the way it is, and by and large is not moving to solve the problems.
  • The toughest details to be determined are how to create, pay for, and determine the best way forward to truly better software, hardware, and manual systems.
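As a toy illustration of the cross-state checking the report recommends, the sketch below matches registration records on a normalized name and date of birth. The data and matching rule are hypothetical; real systems match on many more fields to limit false positives.

```python
# Toy illustration of interstate registration cross-checking -- hypothetical
# data and matching rule, far simpler than real interstate systems, which
# match on many more fields to avoid false positives.
from collections import defaultdict

# Hypothetical per-state registration lists: (last, first, date_of_birth)
registrations = {
    "CT": [("Smith", "Pat", "1970-01-15"), ("Jones", "Lee", "1982-06-02")],
    "FL": [("Smith", "Pat", "1970-01-15"), ("Brown", "Sam", "1955-11-30")],
}

def find_cross_state_matches(registrations):
    """Group registrations by a normalized (last, first, DOB) key and
    report any key that appears in more than one state."""
    seen = defaultdict(set)
    for state, voters in registrations.items():
        for last, first, dob in voters:
            key = (last.strip().lower(), first.strip().lower(), dob)
            seen[key].add(state)
    return {key: states for key, states in seen.items() if len(states) > 1}

for key, states in find_cross_state_matches(registrations).items():
    print(f"Possible duplicate: {key} registered in {sorted(states)}")
# A real system would treat these only as leads for human review, since
# name+DOB collisions between different people are common.
```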

Many will find a lot to like in the report. Some parts might be taken out of context, as it often points out the benefits, costs, and risks of various solutions. Some will use the report to justify doing anything at all about a problem, such as their favored solution. That said, we will likely be referencing many areas of the report going forward:

  • We completely agree with its take on Military and Overseas Voting, which recommends against online/Internet voting, yet favors effective online registration, ballot tracking, and ballot distribution systems.
  • We applaud its recognition that unlimited absentee voting or mail-in voting represents a significant risk for fraud, while early voting, voting centers, and in-person absentee voting can be safe.
  • We agree with its overview of the software/hardware challenges going forward. The problem may not be quite as imminent as portrayed, yet the challenges to get going and find good solutions will require a transformation: years of work in regulation, cooperation, and not-so-common sense between industry, computer scientists, other experts, officials, and government.

Scientists to Evaluate Internet Voting, Will Legislators Listen?

A project to evaluate Internet voting has been initiated by The Overseas Vote Foundation:  End-to-End Verifiable Internet Voting Project Announcement <read>

Their efforts aim to produce a system specification and set of testing scenarios, which if they meet the requirements for security, auditability, and usability, will then be placed in the public domain. At the same time, they intend to demonstrate that confidence in a voting system is built on a willingness to verify its security through testing and transparency.

“The secure, tested, certified remote voting systems that election officials envision aren’t even for sale. Available online ballot return systems are not considered secure by the scientific community, nor are they certified. As a result, email has become the default stopgap method for moving ballots online. Email is especially weak on security, yet it is being used regularly by election officials because viable alternatives are not available,” says Susan Dzieduszycka-Suinat, President and CEO of Overseas Vote Foundation, who spearheaded this project…

“There is a historical misunderstanding in the U.S. election community that this project aims to correct. Our country’s best scientists are not against technology advancements, nor are they inherently at odds with the election officials who seek technology improvements to meet their administrative challenges. What the U.S. scientific community takes issue with are the unproven claims of security regarding existing systems that are not publicly tested or vetted. This study aims to recalibrate this situation. This group of scientific leaders has often pointed out security vulnerabilities in past systems, however they do agree on one thing: that if IV does happen, it should be in a system that takes advantage of end-to-end verifiability and auditability,” said Ms. Dzieduszycka-Suinat.

This promises to be an important project. The powerful team all but guarantees a significant, trusted result. Yet, what is critical is that officials and legislators fully understand the result and undertake any Internet voting following any detailed requirements developed by the study. Our own educated prediction is that reasonably safe Internet voting is likely to be judged possible, yet unlikely to be feasible. There are significant security challenges, especially if voting were to be performed from voters’ computers, without requiring sophisticated verification techniques on the part of voters, and expensive security provisions by officials.
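For readers new to the term, here is a toy sketch of one ingredient of end-to-end verifiability: the voter-held receipt checked against a public bulletin board. It is a deliberate oversimplification of our own devising, not the project’s specification, and omits encryption, ballot secrecy protections, and tally proofs.

```python
# Toy sketch of one ingredient of end-to-end verifiability: a voter receives
# a receipt (a hash of their committed ballot plus a secret nonce) and can
# later check that the receipt appears on a public bulletin board.
# Real E2E systems add encryption, mixnets/homomorphic tallying, and
# zero-knowledge proofs; none of that is modeled here.
import hashlib
import secrets

bulletin_board = []  # public, append-only list of receipts

def cast_ballot(choice: str) -> str:
    """Commit to a ballot; post the commitment publicly; hand the voter
    a receipt that reveals nothing about the choice by itself."""
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{choice}|{nonce}".encode()).hexdigest()
    bulletin_board.append(receipt)
    return receipt

def voter_verifies(receipt: str) -> bool:
    """'Recorded-as-cast' check: the voter confirms their receipt was
    actually included on the public board."""
    return receipt in bulletin_board

my_receipt = cast_ballot("Candidate A")
print("Receipt posted and verifiable:", voter_verifies(my_receipt))
```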

What do voters most want to know from election websites and brochures?

How do I register? How do I vote absentee? How do I vote? The answer might surprise you.

Usability experts have surveyed the real experts – voters – to see what they want in election office brochures. The most desired information matched the results from their study last year of what information voters most desired from election office web sites.

What makes elections information helpful to voters? <read>

Whitney Quesenbery for Civic Designing

Every election department (and many advocacy groups) create flyers and small booklets to help voters learn about elections. But when we looked for guidelines for good communication with voters, we found very little. There were some political science and social psychology experiments that measured the impact of get-out-the-vote campaigns, but there was little about what questions voters have, and how to answer those questions well.

As a companion to the research on county election websites, we did a study of how new voters used election information booklets.

We recruited people who had voted for the first time in the 2008 election or later. Our participants were young people, recently naturalized citizens, and people with lower literacy. As new voters, we hoped that they would remember their first experiences clearly and would still have questions about elections.

We worked with a selection of voter education materials that we thought were pretty good: clearly written, attractively designed, with good information…

We asked our participants to choose two of them to read, marking any sections they thought were particularly good or particularly confusing. And then we talked about what they read.

They had many of the same questions as the participants in the web site study:

  • what’s on the ballot
  • where do I go vote
  • how do I get an absentee ballot

Many other questions were about the basic mechanics of voting, from eligibility and ID requirements, to finding their polling place, to the details of how to mark their ballot.

An ideal guide helps voters plan and act

When we sorted out all the data, we weren’t surprised to find that the overriding concern was being able to act on the information. That fits the definition of plain language: information voters can find, understand, and use.

These less experienced voters wanted specific instructions that would let them vote with confidence. For example, they weren’t sure how long your voter registration “lasts” or even that they might have options for voting, not only on Election Day, but in early voting or by mail. They liked the confirmation and reassurance of seeing information they already knew…

Overseas Vote Foundation, Voting Research Newsletter

Some important and fascinating information in the latest issue of the Voting Research Newsletter <read>

In general, there is some good news with regard to improvements over time in return rates of military ballots, yet several types of relevant data are not collected or reported, either for specific states or for all states. Closer to home, Connecticut is one of the many states missing data.

Some specific highlights we note (see the newsletter for charts and much more fascinating and important information):

THE IMPACT OF THE ELECTRONIC TRANSMISSION OF BLANK BALLOTS IN 2012

…2012 proved to be a tipping point in the use of technology by military and overseas voters, as over 50% of survey respondents indicated using some form of electronic transmission to receive a blank ballot.

…Only 24 states were able to provide a breakdown of paper versus electronic ballots transmitted for the 2012 [election].

…Contrary to expectations of many in the election community, the preliminary data indicate that in most states (11 of the 16 respondents) electronic ballots had lower return rates.

…While the impact of electronic transmission of election material is unclear at this time, the overall ballot return rate suggests that the ability of UOCAVA voters to return their ballot on time may be a result of the MOVE Act mandate to send ballots 45 days before an election rather than electronic transmission methods.

…However, despite these improvements, about 25% of UOCAVA ballots are still not returned or their status remains unknown. Second, UOCAVA voter turnout has not increased, but appears relatively stable between 11% and 12%. Despite the new technology, the number of ballots transmitted in 2012 was lower than in 2008.

NEW FEDERAL REPORTS: LOOKING AT THE FVAP AND EAC’S 2012 REPORTS 
[FVAP: Federal Voting Assistance Program. EAC: Election Assistance Commission.]

…With their 2012 report, the FVAP has done much to try and rectify the non-respondent bias in their survey.

…Unfortunately, the 2012 FVAP report does not consider overseas civilian voters. This is a significant flaw and as a result, any reported findings are overstated. The report, and its findings are limited to the population that has been studied (military) and does not take into account the diversity of the UOCAVA population, of which the FVAP is charged with fully serving.

…There are several other problematic elements in the report

…Both the FVAP and EAC reports have improved, but still suffer from inconsistencies within the reporting of the data. In 2014, the EAC and the FVAP will work together to improve the reporting process. In 2014, the EAC will consolidate their survey with FVAP’s local election official survey into one combined instrument. “The 2014 EAC survey is intended to meet the requirements of both the EAC and FVAP to collect election related statistics from local election officials.” The proposed 2014 survey is currently available for public comment.

Nov 2012 Post-Election Audit Report – Flawed From The Start

Coalition Finds Continuing Problems with Election Audit and A New Flaw

Post-Election Audit Flawed from the Start by Highly Inaccurate List
of Election Districts

The report concluded that the official audit results do not inspire confidence because of:

  • Lack of integrity in the random district selection.
  • Lack of consistency, reliability, and transparency in the conduct of the audit.
  • Discrepancies between machine counts and hand counts reported to the Secretary of the State by municipalities and the lack of standards for determining need for further investigation of discrepancies.
  • Weaknesses in the ballot chain-of-custody.

Coalition spokesperson Luther Weeks noted, “We found significant, unexplained errors, for municipalities across the state, in the list of districts in the random drawing. This random audit was highly flawed from the start because the drawing was highly flawed.”

Cheryl Dunson, President, League of Women Voters of Connecticut, stated, “Two years ago, the Legislature passed a law, at the Secretary of the State’s request, which was intended to fix inaccuracies in the drawing. For whatever reason, errors in the drawing have dramatically increased.”

Weeks added, “Some officials follow the audit procedures and do effective work. This year one town investigated discrepancies and found errors to correct in their election procedures – that is one value of performing the audits as intended.”

Without adherence to procedures, accurate random drawings, a reliable chain-of-custody, and transparent public follow-up when discrepancies are reported, if there were ever a significant fraud or error, it would not be recognized and corrected.
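For illustration, a random drawing can be made publicly verifiable by seeding it from a witnessed public ceremony over a published, accurate district list. The sketch below is our own, with hypothetical district names and seed; it shows the reproducibility idea, not the Secretary of the State’s actual procedure.

```python
# Minimal sketch of a reproducible random drawing: anyone with the published
# district list and the publicly generated seed can re-run the selection and
# get the identical result. District names and seed are hypothetical.
import random

# The complete, published list of election districts (must be accurate --
# the Coalition's complaint is precisely that this list was not).
districts = sorted(f"District-{i:03d}" for i in range(1, 599))

# Seed from a public ceremony, e.g., ten witnessed dice rolls.
public_seed = "4152633261"

n_to_audit = round(len(districts) * 0.10)  # 10% statewide selection
drawing = random.Random(public_seed)
selected = sorted(drawing.sample(districts, n_to_audit))

print(f"Selected {len(selected)} of {len(districts)} districts for audit")
print(selected[:5], "...")
```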
<More Details>

Risk Limiting Audits: Why and How

A recent paper by the Risk Limiting Audit Working Group, endorsed by The American Statistical Association, articulates and outlines various types of post-election audits, their requirements, and relative advantages. <read>

The paper compares and describes three types of risk-limiting audits. One table summarizes and simplifies the amount of work required for each type of audit.

We say the table simplifies the comparison, because there are a lot of details behind accomplishing a ballot-level comparison audit, which would be the obvious choice all other things being equal.

CTVotersCount strongly supports advances in electronic auditing, which provide the opportunity for states to exploit the advantages of ballot-level auditing, perhaps combined with our current selection of 10% of districts statewide after each election. That would likely provide Connecticut with a much more comprehensive, accurate audit than the one we have today, provided it is enacted and accomplished to meet appropriate standards of ballot security and public verifiability.

We will have more to say on this in future posts. The technology is maturing, with several states providing public tests of machine auditing. We can expect bills in the CT Legislature this year authorizing electronic auditing. We will work to see that they are sufficient to provide public confidence.

In addition to discussing the statistical requirements for risk limiting audits, the paper describes additional requirements and issues in areas such as ballot security, results reporting, public confidence, and vote confidentiality. Connecticut has a long way to go in several of these areas. We cannot help but think that our Coalition audit reports contributed to the following statements, in the section entitled Trustworthy audits: the virtue lies in the details:

No matter how attractive the inherent properties of an audit trail [such as paper ballots], it is only as reliable as it is secure. Past elections have been tainted by allegations – and even strong evidence – of ballot box “stuffing” after the election; anecdotes abound of ballots gone lost. Auditing or recounting an untrustworthy audit trail yields untrustworthy results. Moreover, it is highly desirable not only to assert that the audit trail has been secured, but to be able to demonstrate that it has. Some analysts speak of a compliance audit to verify that the preconditions for a risk-limiting audit have been satisfied…

Risk-limiting audits are easy to define, and in broad outline they are fairly easy to implement: Draw a sample, look at the ballots in the sample, do some math to see if more counting is required. However, some implementation details need careful attention.
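To make the “do some math” step concrete, here is a simplified two-candidate ballot-polling rule in the spirit of the BRAVO method. Real risk-limiting audits, including the ballot-level comparison audits the paper favors, involve considerably more machinery; this sketch shows only the sequential stopping idea.

```python
# Simplified two-candidate ballot-polling audit in the spirit of the BRAVO
# method: sample ballots one at a time and update a sequential test statistic
# until the reported outcome is confirmed at the risk limit or a full hand
# count is triggered. Real audits are more involved; this is a sketch.
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
    """ballots: list of True (vote for reported winner) / False.
    Returns (outcome_confirmed, ballots_examined)."""
    assert reported_winner_share > 0.5, "needs a reported majority winner"
    threshold = 1.0 / risk_limit  # stop when the evidence ratio reaches 1/alpha
    t = 1.0
    order = random.sample(range(len(ballots)), len(ballots))  # random draw order
    for examined, i in enumerate(order, start=1):
        if ballots[i]:
            t *= reported_winner_share / 0.5        # ballot supports winner
        else:
            t *= (1 - reported_winner_share) / 0.5  # ballot supports loser
        if t >= threshold:
            return True, examined                   # outcome confirmed
    return False, len(ballots)                      # escalate to full hand count

# Example: 10,000 ballots where the winner truly received 55% of the vote.
ballots = [random.random() < 0.55 for _ in range(10_000)]
confirmed, n = ballot_polling_audit(ballots, reported_winner_share=0.55)
print(f"Confirmed: {confirmed} after examining {n} ballots")
```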

Public observation and transparency

Risk-limiting audits provide one means for citizens to monitor how well election systems are functioning. Audits provide valuable information to election officials, but crucially, they inform the public and provide evidence as to whether reported election outcomes are correct. The Principles and Best Practices for Post-Election Audits state the case…

These principles impose important responsibilities both on election officials and on public observers. When all parties take these responsibilities seriously – but not grimly – audit observation builds positive relationships between election officials and the citizens they serve.

Good audits are confidence-building exercises; not-so-good audits are more like sullen skirmishes. In the past, some audit observers and would-be observers have reported events like these: never receiving advance notice of audits despite statutory or regulatory requirements; being confined in one corner of a room with no meaningful opportunity to observe; receiving no information about the procedures to be used; having no opportunity to ask basic questions; witnessing unambiguous violations of written procedures but being unable to persuade officials to refer to, or conform with, those procedures. Contrariwise, many other observers have reported interacting cordially with election officials and workers, in some cases politely making suggestions that were immediately adopted, and generally forming a favorable opinion of the audit and other election processes. Clear written procedures, made available in advance of the audit, help observers and other interested citizens understand how the audit evinces the integrity of the results…

Some states provide for partisan observers in certain election audit processes. We recommend that audits be explicitly open to non-partisan observers as well. All interested individuals and groups should be permitted to observe the audit process to the greatest possible extent. Effective audit observation can increase public confidence in the audit and in the integrity of elections, by making the process more transparent and providing an independent verification of the results.

Observability includes not only direct public observation of audits, but clear reporting of the audit findings. Election officials should systematically report audit results, identifying any differences between the audit and voting system counts, and explaining them if possible. These reports need not be long in order to be informative and reassuring. The audit results should be forwarded to state election officials, who in turn should compile a summary statewide report – perhaps within 30 days of completion of the audit. Among other things, this report should integrate results from local officials in a consistent, comprehensible, searchable format. Such a report may enable state officials to detect patterns of error they otherwise may have missed, or simply to document how well voting systems performed.

Connecticut law provides for public notification, and today, procedures provide for sufficient public observation. Yet the public notification provisions are inadequate. The open observation process has led to documentation that the process is profoundly inadequate to provide confidence in our elections and election officials. Observations have also demonstrated far from adequate ballot security. Official reporting of election results and audit results falls short in requirements and rigor.

We reiterate that we favor automated auditing for Connecticut. Yet we continue to caution that our support is conditional on the details of any proposed solution and law. We must always “be careful what we ask for”, recognizing the devil is in the details.

Caltech/MIT: What has changed, what hasn’t, & what needs improvement

The Caltech/MIT Voting Technology Project has released a thorough, comprehensive, and insightful new report timed to the 2012 election: VOTING: What has changed, what hasn’t, & what needs improvement <read>

The report itself is 52 pages, followed by 32 pages of opinions of others, including election officials, advocates, and vendors, some of whom disagree with some aspects of the report. Every page is worth reading. The report is not technical. It covers a wide range of issues, background, and recommendations.

We find little to quibble with in the report. We agree with all of its recommendations, although we might place different emphasis in particular areas:

As we have studied the areas where progress has been made since 2001, and where progress has stalled, we have developed the following recommendations. All have been discussed earlier in our report, and we summarize them here. They are not in priority order. First, regarding voting technology, we recommend:

  • Legislation mandating effective election auditing, which at a minimum would require post-election auditing of all voting technologies used in an election.
  • Continued strong support for voting systems security research, emphasizing auditing and the verifiability of election outcomes.
  • A movement toward mandating statistically meaningful post-election audits, rather than setting security standards for election equipment, as the primary way to safeguard the integrity of the vote.
  • A new business model led by states and localities, with harmonized standards and requirements.

Second, regarding voter registration, we recommend:

  • Streamlining the provisional balloting process in many states and the creation of common best practices and voluntary standards across states.

  • The development of voter verification systems in which states bear the cost of stringent voter ID regimes, in those states that desire to increase ID requirements for in-person voting.
  • Continued standardization of voter registration databases, so that they can be polled across states.

Third, with respect to polling places and pollworkers, we recommend:

  • Continued improvement of pollworker training and more reliance on network technologies to facilitate pollworker training.
  • Development of applications deployed on mobile devices that bring more information to pollworkers, and transmit real-time data about Election Day workloads back to the central voting office and the public at large.
  • Increased functionality of electronic pollbooks and their wider adoption.
  • Development of applications that gauge how long voters are waiting in line to vote, so that wait times can be better managed and reported to the public (a back-of-the-envelope sketch follows this list).
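As a rough illustration of the wait-time idea in the last recommendation above, Little’s Law gives a back-of-the-envelope estimate: expected wait is queue length divided by total service rate. The numbers below are hypothetical.

```python
# Back-of-the-envelope wait-time estimate using Little's Law (L = lambda * W):
# given an observed queue length and a measured service rate, the expected
# wait is queue_length / total_service_rate. Numbers below are hypothetical.

def estimated_wait_minutes(queue_length: int,
                           checkin_stations: int,
                           voters_per_station_per_hour: float) -> float:
    """Expected wait for the voter at the back of the line, assuming all
    stations stay busy (a steady-state approximation)."""
    total_rate_per_minute = checkin_stations * voters_per_station_per_hour / 60
    return queue_length / total_rate_per_minute

# 40 voters in line, 3 check-in stations, each handling 20 voters per hour:
wait = estimated_wait_minutes(queue_length=40,
                              checkin_stations=3,
                              voters_per_station_per_hour=20)
print(f"Estimated wait: {wait:.0f} minutes")  # -> 40 minutes
```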

Fourth, regarding absentee and early voting, our first two recommendations repeat those we issued a decade ago; the third is new:

  • Discourage the continued rise of no-excuse absentee balloting and resist pressures to expand all-mail elections. Similarly, discourage the use of Internet voting until the time when auditability can be ensured and the substantial risks entailed by voting over the Internet can be sufficiently mitigated.
  • Require that states publish election returns in such a way that allows the calculation of the residual vote rate by voting mode.
  • Continue research into new methods to get usable ballots to military and overseas civilian voters securely, accurately, and rapidly and to ensure their secure return in time to be counted.

And, finally, regarding the infrastructure and science of elections, we recommend:

  • Continued development of the science of elections.

  • Continued, and expanded, support for the research functions of the Election Assistance Commission.
  • Development of an Electoral Extension Service, headquartered in each state’s land-grant colleges, to disseminate new ideas about managing elections in the United States.

Several items which we fully endorse were covered in this report that are sometimes missing from the discussion or often underemphasized:

The Risks of Mail-in and No-Excuse Absentee Voting

The report thoroughly covers the disenfranchisement risks of mail voting, which are about double those of polling-place voting. Such voting does not increase turnout significantly, except in local elections. We would have liked to see more coverage of the organized fraud, vote buying, and coercion frequently occurring via such voting. These are not just theoretical risks. New to us were the surveys showing that the public at some level recognizes the risks and shows less confidence in elections with expanded absentee or mail-in voting.

The Emphasis on Election Auditing over Machine Testing and Certification

It is theoretically impossible to develop or test a completely safe voting technology. Extreme testing and slow certification requirements stifle innovation, add costs, delay improvements, and are ultimately ineffective. High-confidence, efficient statistical audits and paper ballots, combined with a strong chain-of-custody, are a necessary solution that eclipses the elusive pursuit of technical perfection.
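A quick calculation shows why the statistical reach of an audit matters more than any certification regime. Assuming a simple random sample of districts (a simplification of Connecticut’s actual procedure), the chance that a 10% district audit touches at least one miscounted district follows directly from the hypergeometric distribution:

```python
# How likely is a fixed-percentage audit to catch a problem? If k districts
# out of N are miscounted and we audit a random n of them, the chance of
# sampling at least one bad district is 1 - C(N-k, n) / C(N, n).
# Numbers are illustrative, loosely modeled on a 10% district audit.
from math import comb

def detection_probability(total_districts: int,
                          bad_districts: int,
                          audited_districts: int) -> float:
    missed = comb(total_districts - bad_districts, audited_districts)
    return 1 - missed / comb(total_districts, audited_districts)

N, n = 598, 60  # e.g., roughly 10% of districts audited
for k in (1, 5, 10, 20):
    p = detection_probability(N, k, n)
    print(f"{k:2d} bad districts -> {p:.0%} chance the audit touches one")
# Output climbs from about 10% for a single bad district to well over 80%
# for widespread error -- which is why the scale of outcome-changing fraud
# matters when judging whether a fixed audit percentage is adequate.
```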

The Need and Value of Quality Voter Registration Combined with Online Voter Check-in

The report points to the fallacy of votER fraud. Yet there are efficiencies and enhanced enfranchisement available from better, more accurate voter registration databases. There are solutions with online check-in that also provide voter-id without the disenfranchising aspects of the currently proposed voter-id laws.

The Challenges of the Election Technology Industry

My years of experience in the software industry always lead me to the conclusion that the election technology industry is a losing business proposition. While I am not enamored with any of the current voting technology vendors, there is little incentive for them or new players to enter the field. The closest analogy is the defense industry. That industry is not fragmented, has essentially one customer, which designs products and pays for research and development. The voting technology industry is fragmented and has a fragmented customer base, with varying demands, coupled with a very difficult sales environment.

Recognition of One of the Risks of the National Popular Vote Agreement

  • The proposed National Popular Vote (NPV) may have negative security implications, since the opportunity to perform proper post-election audits appears to be considerably diminished.

CTVotersCount readers know that we would go farther and cover the risks of a national popular vote in our current state-by-state fragmented system, not designed to provide an accurate national popular total. Alleged popular totals cannot be audited, cannot be recounted, and electors must be chosen before an official count is available. The National Popular Vote agreement does nothing to address the existing risk issues with the Electoral College and, in fact, adds to the risks.

UConn Memory Card Report: More garbage in, some good information out

Last week, the University of Connecticut (UConn) released its latest memory card report: Technological Audit of Memory Cards for the April 24, 2012 Connecticut Primary Elections <report>

We can easily echo our summary of the previous report.

We applaud Dr. Alexander Shvartsman and his team for developing the technology to perform these innovative tests, the diligence to perform the tedious tests, and the fortitude to report the facts.

We do not applaud the lack of cooperation of officials in the audit or the lack of official compliance with memory card procedures. We are left wondering if this is the level of compliance and cooperation when officials know their efforts will be disclosed: “What is their compliance when their actions are unlikely or impossible to scrutinize?” Can you imagine such numbers from any other technology or Government function? Where is the outrage?

Take a look at these statistics for the primary, with 598 districts each expected to send in a card before and after the election:

Prior to the primary, 110 out of 598 districts sent cards; that is 18.5% compliance.

After the primary, 105 out of 598 districts sent cards; that is 17.6% compliance. However, only 49 of those cards were used in the election, a compliance rate of 8.2%.

UConn expressed doubts, as we also have, that the cards were actually selected randomly as directed.

UConn concluded that there is a need for compliance with directives and procedures, not only in the rate of sending in cards, but in following election procedures:

We make the following concluding remarks and recommendations.

The SOTS Office should continue publicizing proper procedures and continue offering training, in particular to reinforce the need to prepare all cards for election prior to the election day and prior to the pre-election audit.

Fewer cards are being duplicated at the districts, and it is important to continue reiterating that cards must never be duplicated. Any cases of duplication should be recorded in the moderators’ logs and be brought to the attention of the SOTS Office with a documented explanation of why this is necessary.

It is important for the districts to report any problems during pre-election testing (and any card problems) to the SOTS Office as soon as possible upon completion of the tests.

It is important for the districts to report to the SOTS Office any unexpected behavior of the tabulators that seems to necessitate a restart or a memory card reset. It would be helpful if moderators’ logs contained records of machine restarts, perceived causes, and reasoning for the restart or reset. There was at least one documented case of a tabulator malfunction during this primary election. In such cases it is strongly recommended that the problematic tabulator be tested by the Center personnel (either at the district or in our laboratory).

The current number of cards with unreadable data (junk data) continues to be high. We have determined that weak batteries are the primary cause of this. The vendor developed a new non-volatile, battery-less memory card, and our ongoing evaluation continues to confirm their compatibility with the AV-OS machines used in Connecticut. A limited pilot using the new cards was successfully performed in Vernon. It is expected that a broader pilot deployment of the new cards by the SOTS Office will occur in the near future. The use of the new card should eliminate the major cause of memory card failures.

It is important that cards sent for the pre-election audit are selected at random. One card from the four cards in each district is to be randomly selected for the audit. While the districts are encouraged to submit all malfunctioning cards to the VoTeR Center, all such cards need to be identified separately from the cards randomly selected for the audit. When a sufficiently large collection of cards is selected randomly for audit, the results of the audit meaningfully represent the overall State landscape and help identify technological and procedural problems that need to be solved. Should the selection not be at random, for example, by avoiding sending duplicated cards in for audit, the results are less representative, and may lead to masking technological problems. Therefore training should continue stressing the need to submit appropriate cards for the pre-election audit.

For the post-election audit we received fewer cards than expected, 155, out of which only 49 were used in the election. This is a very low number. It would be extremely important in the future to obtain substantially larger numbers of cards from actual use in the elections.

It is indeed good news that there has been a successful first test of the new memory cards. Hopefully, further testing will be successful and will result in a relatively speedy full deployment:

A new non-volatile (battery-less) memory card was recently developed by the vendor. Our preliminary analysis of this card confirmed that it is compatible with the AV-OS systems deployed in Connecticut. A pilot deployment of the new cards was done in the Town of Vernon using 12 of the new cards. The cards performed well, no failures were detected, and no such cards lost their data. However, this is a very small sample of cards. We are currently performing in-depth testing of the non-volatile cards and as of this writing the results are encouraging.

A broader pilot is being planned by the SOTS Office to occur in the near future. The use of the new card should eliminate the major cause of memory card failures.

Official Audit Report – provides no confidence in officials and machines

Last week the University of Connecticut (UConn) released its official post-election audit report on the November 2011 election, seven months after the election and one month after the shredding of all ballots. Once again, as we said last time, the report is “Flawed by a lack of transparency, incomplete data, and assumed accuracy”. In our opinion, the report falls short of the rigor of the fine peer-reviewed papers <e.g.> and valuable memory card reports <e.g.> that UConn provides.

The report is available at the UConn site: Statistical Analysis of the Post-Election Audit Data 2011 November Election <read>

Our strongest concern with the report is the two underlying assumptions, which defy common sense and logic:

  • That officials are always correct when they claim or agree that they counted inaccurately, when hand counts and optical scanner tapes do not match.
  • That when officials count inaccurately, it implies that the optical scanners did in fact count accurately.

These assumptions leave us wondering:

  • How do officials know that they counted inaccurately?
  • Should we put blind trust in the judgment of officials that claim they cannot count accurately?
  • How accurate are the unaudited official hand counts used to provide a portion of the totals in each election which are compiled late on election night? We have only one, perhaps extreme, example to go on, coupled with some significant errors in the comparatively ideal counting conditions of the audits.
  • If every difference between scanners and officials is attributed to human error, then in what circumstances would we ever recognize an actual error or fraud, should one occur?

According to the report:

Audit returns included 45 records with discrepancies higher than 5, with the highest reported discrepancy of 40. It is worth noting that 75% (30 out of 45) of the records that were subject to the follow up investigation already contained information indicating that the discrepancies were due to the human error. Following this initial review the SOTS Office [Secretary of the State’s Office] performed additional information gathering and investigation of those 45 records. The final information was conveyed to the Center on May 18th of 2012 [after expiration of the six-month ballot retention period]…

For the revised records SOTS Office confirmed with the districts that the discrepancies were due to human counting errors.

So, apparently if any official included text in the local audit report indicating human error, the report was accepted as indicating inaccurate hand counting and implying accurate scanner counting. For example <a 26% difference in counting 50 votes. Or was it actually 64 votes?>
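For illustration, here is the kind of systematic screen we would prefer to see, flagging records by both absolute and percentage discrepancy instead of accepting “human error” notations at face value. The records and thresholds are invented for the example; the first mirrors the 26%-of-50-votes case linked above.

```python
# Hypothetical sketch of screening audit records by discrepancy size and
# rate, rather than accepting 'human error' notations at face value.
# Record values are invented for illustration.

records = [
    {"district": "A", "machine": 50, "hand": 63},   # 13 off on 50: worth a recount
    {"district": "B", "machine": 412, "hand": 410}, # small, plausible count slip
    {"district": "C", "machine": 188, "hand": 188}, # exact match
]

ABS_THRESHOLD = 5      # flag any difference larger than this many ballots
RATE_THRESHOLD = 0.02  # ...or larger than 2% of the machine count

for r in records:
    diff = abs(r["hand"] - r["machine"])
    rate = diff / r["machine"] if r["machine"] else 0.0
    flagged = diff > ABS_THRESHOLD or rate > RATE_THRESHOLD
    status = "INVESTIGATE (recount before ballots are destroyed)" if flagged else "ok"
    print(f"District {r['district']}: diff={diff} ({rate:.1%}) -> {status}")
```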

Last time, for the Nov 2010 audit report, we misunderstood and assumed incorrectly that the Secretary of the State’s Office conducted non-public ballot counting to investigate some of the differences. To avoid making that mistake again we asked for a description of the investigations. Peggy Reeves, Assistant to the Secretary of the State for Election, Legislative and Intergovernmental Affairs, provided a prompt description to us:

In response to your inquiry, our office performed the additional investigations referenced in the UCONN report by phone call only and we did not visit any municipalities and did not count any additional ballots. Our office did not create a list of subject towns and as such, have no such list to provide you pursuant to your request. Our office identified subject municipalities by simply reviewing the audit returns submitted to our office and calling the municipalities in question to inquire as to the reason for the discrepancy. In our experience, we do concur with the statement that hand counting errors do create the reported discrepancies.

So, the investigations apparently consisted of calling some or perhaps all local officials and having them agree that they did not count accurately. No list of such towns was created, thus we are left to speculate whether some or all of the towns identified by UConn were contacted.

Unlike the official report, the Coalition actually observes the conduct of the majority of counting sessions of post-election audits and provides comprehensive observation reports on how the local audits are conducted. It also provides ever more extensive detailed data, copies of official local reports, and statistics derived from those local reports, giving the public and officials the opportunity to verify the details in our analysis of discrepancies.

We do agree with the UConn report and the SOTS Office that most differences can be attributed to human counting errors. Coalition reports show that the counting sessions are frequently not well organized, that proven counting methods are frequently not used, that official procedures are frequently not followed, that in many cases officials do not double-check ballots and counts, and that often recounting is not performed when differences are found. Yet as we have said over and over:

We have no reason to question the integrity of any official. We have no evidence that our optical scanners have failed to count accurately. However, if every difference between a hand count and a scanner count is dismissed as a human counting error then if a machine has or were ever, by error or fraud, to count inaccurately, it would be unlikely to be recognized by the current system.

Given the above we see no reason to comment on the official statistical analysis of inaccurate data, adjusted without counting or credible investigation.

We will comment that Coalition observations indicate that officials do not understand the intended meaning of “questionable votes” and frequently tend to classify far too many votes as questionable: votes which should be expected to be, and normally are, read correctly by the optical scanners.

We do disagree with the Secretary of the State when she and her press release state:

“Connecticut has the toughest elections audit law in the country and I am confident at the end of this year’s audit the numbers will once again match”…

The provisions in the law, developed in close cooperation with the computer science department at the University of Connecticut, give Connecticut one of the strictest audit statutes in the country…

The 10% audit does entail counting a relatively large percentage of ballots, as is necessary in a fixed-percentage audit in a relatively small state, yet the law is full of loopholes, and we would not characterize the statute nor its operation in practice as “strict”.

***************
Update 07/07/2012: Audit not Independent

We are reminded by a Courant correction today that this audit does not meet any reasonable definition of independent because:

  1. The local counting is supervised by the individuals responsible for the local conduct of the election.
  2. The University of Connecticut is contracted and dependent financially on the Secretary of the State, the Chief Elections Official.
  3. The Secretary of the State also revises and dictates the data used in the report.