UConn Memory Card Report: More garbage in, some good information out

Last week, the University of Connecticut (UConn) released its latest memory card report: Technological Audit of Memory Cards for the April 24, 2012 Connecticut Primary Elections <report>

We can easily echo our summary of the previous report.

We applaud Dr. Alexander Shvartsman and his team for developing the technology to perform these innovative tests, the diligence to perform the tedious tests, and the fortitude to report the facts.

We do not applaud the lack of cooperation of officials in the audit or the lack of official compliance with memory card procedures. We are left wondering: if this is the level of compliance and cooperation when officials know their efforts will be disclosed, what is their compliance when their actions are unlikely or impossible to scrutinize? Can you imagine such numbers from any other technology or government function? Where is the outrage?

Take a look at these statistics for the election, in which each of 598 districts was expected to send in a card before and after the primary:

  • Prior to the primary, 110 out of 598 districts sent cards: 18.4% compliance.
  • After the primary, 105 out of 598 districts sent cards: 17.6% compliance; however, only 49 of those cards were used in the election, a compliance rate of 8.2%.
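
For readers who want to check the arithmetic, here is a minimal Python sketch, ours rather than UConn's, that reproduces the compliance percentages above:

```python
# Compliance rates for the April 2012 primary: 598 districts were each
# expected to send in a memory card before and after the election.
DISTRICTS = 598

def compliance(cards_sent: int, expected: int = DISTRICTS) -> float:
    """Percent of expected districts that actually submitted cards."""
    return round(100 * cards_sent / expected, 1)

print(compliance(110))  # cards sent before the primary        -> 18.4
print(compliance(105))  # cards sent after the primary         -> 17.6
print(compliance(49))   # cards actually used in the election  -> 8.2
```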

UConn expressed doubts, as we also have, that the cards were actually selected randomly as directed.

UConn concluded that there is a need for compliance with directives and procedures, not only in the rate of sending in cards, but in following election procedures:

We make the following concluding remarks and recommendations.

The SOTS Office should continue publicizing proper procedures and continue offering training. In particular, to reinforce the need to prepare all cards for election prior to the election day and prior to the pre-election audit.

Fewer cards are being duplicated at the districts, and it is important to continue reiterating that cards must never be duplicated. Any cases of duplication should be recorded in the moderators’ logs and be brought to the attention of the SOTS Office with a documented explanation of why this is necessary.

It is important for the districts to report any problems during pre-election testing (and any card problems) to the SOTS Office as soon as possible upon completion of the tests.

It is important for the districts to report to the SOTS Office any unexpected behavior of the tabulators that seems to necessitate a restart or a memory card reset. It would be helpful if moderators’ logs contained records of machine restarts, perceived causes, and reasoning for the restart or reset. There was at least one documented case of a tabulator malfunction during this primary election. In such cases it is strongly recommended that the problematic tabulator be tested by the Center personnel (either at the district or in our laboratory).

The current number of cards with unreadable data (junk data) continues to be high. We have determined that weak batteries are the primary cause of this. The vendor developed a new non-volatile, battery-less memory card, and our ongoing evaluation continues to confirm their compatibility with the AV-OS machines used in Connecticut. A limited pilot using the new cards was successfully performed in Vernon. It is expected that a broader pilot deployment of the new cards by the SOTS Office will occur in the near future. The use of the new card should eliminate the major cause of memory card failures.

It is important that cards sent for the pre-election audit are selected at random. One of the four cards in each district is to be randomly selected for the audit. While the districts are encouraged to submit all malfunctioning cards to the VoTeR Center, all such cards need to be identified separately from the cards randomly selected for the audit. When a sufficiently large collection of cards is selected randomly for audit, the results of the audit meaningfully represent the overall State landscape and help identify technological and procedural problems that need to be solved. Should the selection not be at random, for example, by avoiding sending duplicated cards in for audit, the results are less representative, and may lead to masking technological problems. Therefore training should continue stressing the need to submit appropriate cards for the pre-election audit.

For the post-election audit we received a smaller than expected number of cards, 155, out of which only 49 were used in the election. This is a very low number. It would be extremely important in the future to obtain substantially larger numbers of cards from actual use in the elections.

It is indeed good news that there has been a successful first test of new memory cards. Hopefully, further testing will be successful and will result in a relatively speedy full deployment:

A new non-volatile (battery-less) memory card was recently developed by the vendor. Our preliminary analysis of this card confirmed that it is compatible with AV-OS systems deployed in Connecticut. A pilot deployment of the new cards was done in the Town of Vernon using 12 of the new cards. The cards performed well, no failures were detected, and no such cards lost their data. However, this is a very small sample of cards. We are currently performing in-depth testing of the non-volatile cards and as of this writing the results are encouraging.

A broader pilot is being planned by the SOTS Office to occur in the near future. The use of the new card should eliminate the major cause of memory card failures.

53 districts in 49 municipalities selected for post-election audit


Yesterday, with assistance from Coalition volunteers, Secretary of the State Denise Merrill conducted the random selection of districts for the post-election audit. They are listed in this official press release.

Particularly striking is the rapid consolidation and dramatic drop in the number of polling places over time:

523 Aug 2012 Primary
598 Apr 2012 Presidential Primary
726 Nov 2011 Municipal Election

This is not necessarily a negative trend in many of our towns, where people drive everywhere; it is less of a good idea, and actually not part of the trend, in our larger cities. What is behind this is saving money: less work recruiting pollworkers, fewer locations to manage, and less work setting up equipment, fueled this year by federal and state redistricting requiring the redrawing of voting districts anyway. Also, after five years of optical scanners, it is obvious that single scanners can handle the volume of all but a handful of very large polling places.

The local audit counting period will extend until Sept 17th, 2012.

Who lost in the Massachusetts Special Election?

An article in Metro West brings home the points discussed here in two recent posts:

First, Prof. Ron Rivest explained how audits can protect our votes, ironically not used in his home state of Massachusetts <see Ron Rivest explains why elections should be audited, especially in MA.>

Second, the lack of audits in Wisconsin and one of the highest purposes of audits: Convince the losers and their supporters that they lost fairly. <see What We Worry Wisconsin! – Look ma no audits!>

We could have titled that last one Look MA no audits! Here is the result of no audits in Massachusetts, brought home this week in Metro West: Was the Brown-Coakley Senate election stolen? <read>

  • unanswered suspicions after an election
  • paper ballots, yet never checked
  • no certainty two years after an election
  • less trust in democracy

But 3 percent of the ballots were counted by hand, in 71 of the state’s smallest communities. If someone was meddling with the computer tally, Simon hypothesized in an August 2010 report, it might be evident by comparing those results with the percentages for the computer-counted ballots.

Simon established a baseline by looking at the previous two Senate races, where Kennedy and Sen. John Kerry defeated little-known opponents by wide margins. In 2008, the margin by which Kerry won in the optical scan ballots was almost identical to his margin on the hand-counted ballots – a disparity of just 1 percent. For the 2006 results, the numbers were similar, with Kennedy taking 69.5 percent of the vote on the opscan ballots and 68.9 percent in the hand-count communities.

Simon checked a third lopsided race, the attorney general contest Coakley won in 2006 and, again, the hand-counted electorate matched up closely to the optically scanned electorate. The disparity in results in that race was just .8 percent…

But 2010 was a different story. In hand-count communities, Coakley won, 51.4 percent to 48.6 percent. On the optically scanned ballots, Brown won, 52.6 percent to 47.4 percent for Coakley. That adds up to a disparity of 8 percent.

That disparity “stands as an unexplained anomaly of dramatic numerical proportions,” Simon writes. It raises questions he can’t answer, but he concedes it proves nothing. It certainly doesn’t prove anyone falsified the tallies made by computer.

But it is curious enough to make you wish someone was double-checking the results.
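
To make Simon's comparison concrete, here is a small Python sketch of the disparity he computes, using the percentages quoted above (the code is our illustration, not Simon's):

```python
# Simon's disparity: a candidate's margin on hand-counted ballots minus
# that candidate's margin on optically scanned ballots.
def margin(candidate_pct: float, opponent_pct: float) -> float:
    """Margin of victory (or loss) in percentage points."""
    return candidate_pct - opponent_pct

def disparity(hand: tuple, opscan: tuple) -> float:
    """Hand-count margin minus optical-scan margin."""
    return margin(*hand) - margin(*opscan)

# 2010 Brown-Coakley race, Coakley's share listed first:
# hand-counted: Coakley 51.4 / Brown 48.6; scanned: Coakley 47.4 / Brown 52.6
print(round(disparity(hand=(51.4, 48.6), opscan=(47.4, 52.6)), 1))  # -> 8.0
```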

Who lost? Everyone interested in democracy, because it requires accurate elections that voters and losers can trust.

Official Audit Report – provides no confidence in officials and machines

Last week the University of Connecticut (UConn) released its official post-election audit report on the November 2011 election, seven months after the election and one month after the shredding of all ballots. Once again, as we said last time, the report is “Flawed by a lack of transparency, incomplete data, and assumed accuracy”. In our opinion, the report falls short of the rigor of the fine peer-reviewed papers <e.g.> and valuable memory card reports <e.g.> that UConn provides.

The report is available at the UConn site: Statistical Analysis of the Post-Election Audit Data 2011 November Election <read>

Our strongest concern with the report is its two underlying assumptions, which defy common sense and logic:

  • That officials are always correct when they claim or agree that they counted inaccurately, when hand counts and optical scanner tapes do not match.
  • That when officials count inaccurately, it implies that the optical scanners did in fact count accurately.

These assumptions leave us wondering:

  • How do officials know that they counted inaccurately?
  • Should we put blind trust in the judgment of officials that claim they cannot count accurately?
  • How accurate are the unaudited official hand counts used to provide a portion of the totals in each election which are compiled late on election night? We have only one, perhaps extreme, example to go on, coupled with some significant errors in the comparatively ideal counting conditions of the audits.
  • If every difference between scanners and officials is attributed to human error, then in what circumstances would we actually recognize an error or fraud, should one ever occur?

According to the report:

Audit returns included 45 records with discrepancies higher than 5, with the highest reported discrepancy of 40. It is worth noting that 75% (30 out of 45) of the records that were subject to the follow up investigation already contained information indicating that the discrepancies were due to the human error. Following this initial review the SOTS Office [Secretary of the State’s Office] performed additional information gathering and investigation of those 45 records. The final information was conveyed to the Center on May 18th of 2012 [after expiration of the six month ballot retention period]…

For the revised records SOTS Office confirmed with the districts that the discrepancies were due to human counting errors.

So, apparently if any official included text in the local audit report indicating human error, the report was accepted as indicating inaccurate hand counting and implying accurate scanner counting. For example <a 26% difference in counting 50 votes. Or was it actually 64 votes?>

Last time, for the Nov 2010 audit report, we misunderstood and assumed incorrectly that the Secretary of the State’s Office conducted non-public ballot counting to investigate some of the differences. To avoid making that mistake again we asked for a description of the investigations. Peggy Reeves, Assistant to the Secretary of the State for Election, Legislative and Intergovernmental Affairs, provided a prompt description to us:

In response to your inquiry, our office performed the additional investigations referenced in the UCONN report by phone call only and we did not visit any municipalities and did not count any additional ballots. Our office did not create a list of subject towns and as such, have no such list to provide you pursuant to your request. Our office identified subject municipalities by simply reviewing the audit returns submitted to our office and calling the municipalities in question to inquire as to the reason for the discrepancy. In our experience, we do concur with the statement that hand counting errors do create the reported discrepancies.

So, the investigations apparently consisted of calling some or perhaps all local officials and having them agree that they did not count accurately. No list of such towns was created, thus we are left to speculate whether some or all of the towns identified by UConn were contacted.

Unlike the official report, the Coalition actually observes the conduct of the majority of counting sessions of post-election audits and provides comprehensive observation reports on how the local audits are conducted. The Coalition also provides ever more extensive detailed data, copies of official local reports, and statistics derived from those local reports, giving the public and officials the opportunity to verify the details in our analysis of discrepancies.

We do agree with the UConn report and the SOTS Office that most differences can be attributed to human counting errors. Coalition reports show that the counting sessions are frequently not well organized, that proven counting methods are frequently not used, that official procedures are frequently not followed, that in many cases officials do not double-check ballots and counts, and that recounting is often not performed when differences are found. Yet as we have said over and over:

We have no reason to question the integrity of any official. We have no evidence that our optical scanners have failed to count accurately. However, if every difference between a hand count and a scanner count is dismissed as a human counting error, then if a machine were ever, by error or fraud, to count inaccurately, it would be unlikely to be recognized by the current system.

Given the above we see no reason to comment on the official statistical analysis of inaccurate data, adjusted without counting or credible investigation.

We will comment that Coalition observations indicate that officials do not understand the intended meaning of “questionable votes” and frequently tend to classify far too many votes as questionable: votes which should be expected to be, and normally are, read correctly by the optical scanners.

We do disagree with the Secretary of the State when she and her press release state:

“Connecticut has the toughest elections audit law in the country and I am confident at the end of this year’s audit the numbers will once again match”…

The provisions in the law, developed in close cooperation with the computer science department at the University of Connecticut, give Connecticut one of the strictest audit statutes in the country…

The 10% audit does entail counting a relatively large percentage of ballots, as is necessary in a fixed-percentage audit in a relatively small state; yet the law is full of loopholes, and we would not characterize the statute nor its operation in practice as “strict”.

***************
Update 07/07/2012: Audit not Independent

We are reminded by a Courant correction today that this audit does not meet any reasonable definition of independent because:

  1. The local counting is supervised by the individuals responsible for the local conduct of the election.
  2. The University of Connecticut is contracted and dependent financially on the Secretary of the State, the Chief Elections Official.
  3. The Secretary of the State also revises and dictates the data used in the report.

Ron Rivest explains why elections should be audited, especially in MA.

Prof Ron Rivest recently summarized in the Boston Globe why elections should be audited. While MIT is a leading source of election integrity research, ironically it sits in a state that has voter-verified paper ballots yet does not use them to verify election results.

The Podium

Protecting Your Vote

THIS STORY APPEARED IN The Boston Globe
April 03, 2012|By Ronald L. Rivest

Sometimes, a few votes make a huge difference.

Just ask Rick Santorum. In January, Rick Santorum won the Iowa caucuses, but, because of vote counting and tabulation errors, Mitt Romney was declared the winner. In the two weeks before the error became clear, Romney’s campaign gained momentum, while Santorum’s withered.

Unfortunately, the same problem – or worse – could easily occur in Massachusetts. This year, voters will choose the president, and control of the US Senate may come down to the race shaping up between Scott Brown and Elizabeth Warren.

How will voters know their votes will be counted accurately? Massachusetts voters cast paper ballots. This is a good foundation for an election system, since the paper ballots form an “audit trail” that can be examined (and if necessary, recounted). In almost all cities and towns in the state, those ballots are slid into machines that read the ballots and total up all the votes at each polling place. The machines are reprogrammed for every election, but only 50 to 75 ballots are used to check the new programming, even though 1,000 ballots or more are likely to be put into each voting machine on Election Day. Votes from each location are then brought together and tabulated. In both steps of the process, there is the possibility of significant error.

As a technologist, I have spent decades working with information systems and computer programs, and can say one thing with certainty: mistakes can happen. In banking, business, and engineering, similar problems often arise, and they are solved elegantly: with random testing. The IRS does not take every tax return on faith – it audits a small number of them. These audits uncover errors and fraud, and serve as deterrent. Athletes are randomly tested for performance-enhancing drugs. Factories pull random samples of their products off the production line and conduct quality control checks. Municipalities send inspectors to gas stations to make sure that when the meter says you have pumped a gallon, there actually is a gallon of gas in your tank.

Audits and random tests are used anytime there are numbers involved and a lot at stake. And what could be more important than the elections we use to choose our government’s leaders?

Twenty-six states have election audits and that number is growing. After an election, the state selects a few random polling places to count the ballots by hand. The hand-counted totals are compared to machine results. If the numbers are close enough, there is confidence that any errors or mis-programming sufficient to have affected the election outcome will be discovered. Because only a few random polling locations are audited, costs are kept low. Many people are surprised to learn that we don’t audit election results here in Massachusetts.

There need not be any big conspiracies or widespread failures to make audits worthwhile. Voting machines are just like any other machine. Sometimes they break. In Waterville, Maine, voting machine malfunctions caused a Senate candidate to receive 27,000 votes – about 16,000 more than the number of registered voters in the entire district. In Barry County, Michigan, flawed programming caused incorrect results. The problem was discovered only when a county clerk received the results from the precinct where he voted and noticed that the candidate for whom he voted had received no votes.

In addition to providing security and confidence, audits provide information. Information that election officials can use to make sure every person’s vote is counted. Audits can uncover common voter mistakes that could be fixed with, for example, better instructions. Audits can tell election officials if a ballot has been poorly designed in a way many voters cannot understand, so that future ballots can be designed better.

Let’s make 2012 the year where all Massachusetts voters have confidence that their vote will be counted. There is audit legislation pending in the Legislature. Lawmakers should pass it in time for the November election. Elections matter. And every vote counts.

Ronald L. Rivest is a professor of computer science at MIT. He is a founder of RSA Data Security.
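
As a rough illustration of the spot-checking Rivest describes, here is a toy Python sketch: randomly select a few polling places, hand count their ballots, and compare the totals against the machine results. The precinct names and counts are made up; this is not any state's actual procedure:

```python
import random

# Machine totals reported on election night (hypothetical data).
machine_totals = {"Precinct A": 412, "Precinct B": 978,
                  "Precinct C": 533, "Precinct D": 761}

# Hand counts of the paper ballots (hypothetical data; in practice only
# the randomly selected precincts would be hand counted).
hand_counts = {"Precinct A": 412, "Precinct B": 981,
               "Precinct C": 533, "Precinct D": 761}

# Randomly select a few precincts to audit and compare the two tallies.
for precinct in random.sample(sorted(machine_totals), k=2):
    diff = hand_counts[precinct] - machine_totals[precinct]
    status = "matches" if diff == 0 else f"discrepancy of {diff}"
    print(f"{precinct}: machine {machine_totals[precinct]}, "
          f"hand {hand_counts[precinct]} -> {status}")
```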

Basics you need to know about election integrity in fifteen minutes

Kevin O’Neill, Capitol Thinking, interviews the authors of Broken Ballots – Will Your Vote Count, Prof Doug Jones and Dr. Barbara Simons <podcast>

When it comes to elections and verifiability, Doug Jones and Barbara Simons are true experts that everyone can understand.

They discuss the basics of election verification, pre-election testing, election auditing, and internet voting. They also offer graphic examples of what went wrong in recent elections in Iowa and Florida, which were corrected based on paper ballots and post-election audits.

Broken Ballots was released Apr 15, even though Amazon and Barnes & Noble still list it as available May 15th. Look for a book review here in the near future.

UConn Memory Card Report: Technology 82%-93%, Officials 19%, (Outrage 0%?)

The University of Connecticut (UConn) Center for Voting Technology Research posted its memory card report for the November 2011 election: Technological Audit of Memory Cards for the November 8, 2011 Connecticut Elections <read>

We applaud Dr. Alexander Shvartsman and his team for developing the technology to perform these innovative tests, the diligence to perform the tedious tests, and the fortitude to report the facts.

We do not applaud the lack of cooperation of officials in the audit or the lack of official compliance with memory card procedures. We are left wondering: if this is the level of compliance and cooperation when officials know their efforts will be disclosed, what is their compliance when their actions are unlikely or impossible to scrutinize? Where is the outrage?

Let’s start with some good news.

We have had problems for years with bad memory cards, which UConn calls “junk data”. Based on the questionable sample of bad cards sent to UConn, the estimate is that 7.4% to 17.4% of cards were bad in the Nov 2011 election. This is similar to statistics generated in the Coalition post-election audit survey of officials. The survey showed a huge increase in the number of municipalities reporting bad cards in Nov 2011: 90%, with the previous high of 56% reported a year earlier <Coalition report page 26-27>. Anecdotally, many towns are hit with an overwhelming percentage of bad cards. We speculate that the programming vendor, LHS Associates, somehow receives batches of returned bad cards, installs new batteries, and the cards tend to stay together, to be used in the next election for many or all of the cards programmed for unlucky municipalities.

The good news is that our memory card nightmare may have a cure in some future election, perhaps 2012 or 2013:

A new non-volatile (battery-less) memory card was recently developed by the vendor. Our preliminary analysis of this card confirmed that it is completely compatible with AV-OS systems deployed in Connecticut. It is expected that a pilot deployment of the new cards by the SOTS Office will occur in the near future. The use of the new card should eliminate the major cause of memory card failures.

No word on State Certification which would presumably be relatively easy, yet required before such cards could be used in an actual election.

At most 30.5% official compliance with pre-election audit requests

For the pre-election audit, the Center received 453 memory cards from 331 districts. Cards were submitted for two reasons per instructions from the SOTS Office: (a) one of the four cards per district was to be selected randomly and submitted directly for the purpose of the audit, and (b) any card was to be submitted if it appeared to be unusable. Given that cards in category (a) were to be randomly selected, while all cards in category (b) were supposed to be submitted, and that the cards were submitted without consistent categorization of the reason, this report considers all unusable cards to fall into category (b).

Among these 453 cards, 223 (49.2%) fall into category (a). 100% of these cards were correct. These cards contained valid ballot data and the executable code on these cards was the expected code, with no extraneous data or code on the cards. We note that the adherence to the election procedures by the districts is improving; however, the analysis indicates that the established procedures are not always followed. It would be helpful if reasons for these extra-procedural actions were documented and communicated to the SOTS Office in future elections.

According to the report, 331 districts sent 453 cards, but at most only 223 of those cards were randomly selected, non-bad cards. Thus at most 223 out of the 730 districts in the election had registrars send in a card as requested by “instructions from the SOTS [Secretary of the State] Office”. How many of these cards were in fact “randomly selected”? There is no way for the public to be sure. So we start with a maximum compliance rate of 223/730 or 30.5%.

Without a full sample, without some assurance of random selection, the statistical significance of the report is questionable and there is clearly a formula for a fraudster to avoid the memory card audit.

Considering pre-election testing we are down to at most 18.4% official (registrar) compliance:

UConn reported that 89 of those 223 cards were not set for election, yielding 134/223 or 60.1% correctly set in election mode. Thus for 134/730 or 18.4% of districts, registrars complied with both of the simple procedures: sending in one card per district, and testing all cards and leaving them in election mode.
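
Putting the report's counts together, here is a short Python sketch of that compliance chain (our arithmetic, using the report's figures):

```python
# Pre-election memory card audit, November 2011 (figures from the UConn report).
districts = 730
random_cards_received = 223   # category (a): randomly selected cards received
not_set_for_election = 89     # of those, cards received not set for election
set_for_election = random_cards_received - not_set_for_election  # 134

print(f"Districts sending a card:   {100 * random_cards_received / districts:.1f}%")         # 30.5%
print(f"Of those, set for election: {100 * set_for_election / random_cards_received:.1f}%")  # 60.1%
print(f"Full compliance:            {100 * set_for_election / districts:.1f}%")              # 18.4%
```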

This is only the most predominant of several problems uncovered:

(b) Card Status Summary:

Here status refers to the current state of the memory card, for example, loaded with an election, set for election, running an election, closed election, and others.

134 cards (60.1%) were in Set For Election state. This is the appropriate status for cards intended to be used in the elections. This percentage is an improvement over the 2010 November pre-election audit, where 41.6% of the cards were set for elections.

89 cards (39.9%) were in Not Set for Election state. This status would be appropriate for the cards that either did not undergo pre-election testing or were not prepared for elections, but not for the cards that are fully prepared for an election. This suggests that the corresponding districts sent these cards for the audit without first finalizing the preparation for the election. This is not a security concern, but an indication that not all districts submit cards at the right time (that is, after the completion of pre-election testing and preparation of the cards for the elections).

(c) Card & Counter Status:

Here additional details are provided on the status of the counters on the usable cards. The expected state of the cards following the pre-election testing is Set for Elections with Zero Counters.

All of the 134 cards (60.1%) that were found in Set For Election state had Zero Counters. This is the appropriate status for cards intended to be used in the elections.

85 cards (38.1%) were in Not Set for Election state and had Non-Zero Counters. This is not an expected state prior to an election. This suggests that the cards were subjected to pre-election testing, but were not set for elections prior to their selection for the audit. This situation would have been detected and remedied if such cards were to be used on Election Day as the election cannot be conducted without putting the cards into election mode.

4 cards (1.8%) were found to be in Not Set for Elections state with Zero Counters. This is similar to the 85 cards above. This situation would have been similarly detected and remedied if such cards were to be used on the election day.

Taking the above percentages together, it appears that almost all districts (60.1% + 38.1% = 98.2%) performed pre-election testing before submitting the cards for the audit.

(d) Card Duplication:

The only authorized source of the card programming in Connecticut is the external contractor, LHS Associates. The cards are programmed using the GEMS system. Card duplications are performed using the AV-OS voting tabulator; one can make a copy (duplicate) of a card on any other card by using the tabulator’s duplication function. SOTS policies do not allow the districts to produce their own cards by means of card duplication.

Card duplication is a concern, as there is no guarantee that duplication faithfully reproduces cards, and it masks the problem with card reliability. Additionally, it is impossible to determine with certainty who resorted to card duplication and why.

There were 18 cards involved in duplication. 12 of these cards (66.7%) were master cards used for duplication. 6 cards (33.3%) were copy cards produced by duplication.

We manually examined the audit logs of all duplicated cards and compared the initialization date of the card against the date of the duplication. We established that most of the cards (16 out of 18) were most likely involved in duplication at LHS. 12 out of 16 were involved in duplication either on the day of initialization, or the day after. The remaining 4 cards were involved in duplication within 4 days of initialization, however they were tested and prepared for election at a later date (4 to 7 days after the duplication occurred).

Only two cards out of 18 were most likely involved in duplication at the district, as they were prepared for election within a few minutes after the duplication event was recorded. This is an improvement from prior audits.

Given the SOTS policies, the districts must not be producing their cards locally. If a district finds it necessary to duplicate cards, they need to make records of this activity and bring this to the attention of the SOTS Office.

Post-election, audited districts complied 27.4%

The registrars for districts selected for post-election audit are “asked to submit cards that were used in the election for the post-election technological audit”; 20/73 or 27.4% complied.

For the post-election audit, the Center received 157 cards. Out of these cards only 20 cards were used on Election Day. Given that the small sample of such cards does not allow for a meaningful statistical analysis, we report our finding in abbreviated form. To enable more comprehensive future post-election audits it is important to significantly increase the submission of cards that are actually used in the elections.

Cards were submitted to the Center for two reasons per instructions from the SOTS Office: (a) the districts that were involved in the post-election 10% hand-count audit were asked to submit the cards for the post-election technological audit, and (b) the districts were encouraged to submit any cards that appeared to be unusable in the election. Given that cards in category (a) were to be sent from the 10% of randomly selected districts, while all cards in category (b) were supposed to be submitted, and that the cards were submitted without consistent categorization of the reason, the unusable cards are disproportionately represented.

Can you imagine such numbers from any other technology or Government function? Where is the outrage?

We are all used to thumb drives, functionally similar technology at much lower cost. What is your experience? Do they fail suddenly 18% of the time, after working correctly for months or years? How about your cell phone or GPS, much more complicated than a memory card?

Recently Connecticut was outraged by 42 state employees, out of 800 obtaining food stamps, charged with obtaining them illegally. That is a 94.75% compliance rate, quite a bit higher than the election official compliance here of 18.4%.

Even the UConn basketball team does better, with a quarter of its players graduating. Milner School, subject to our Governor’s concern, had 23.5% of 3rd graders passing the reading test. But this is not like students failing tests; this is more like Boards of Education overseeing that the curriculum is followed less than 19% of the time.

Let us not forget that the most complex memory cards are not tested:

In addition to the four cards for each district, in mid-size to large towns absentee ballots are counted centrally by optical scanners with memory cards that are programmed to count ballots for all districts in such towns. These are not included in the post-election audits required by law, and apparently not included in requests for memory card audits.

Sadly most of this is entirely legal

In Connecticut, election procedures are not enforceable, so there is no penalty for officials not following procedures. The entire memory card audit is based on procedures, not law.

Also check out some of the audit log analysis in the report

UConn inspected the audit (event) logs on the memory cards, discovering several instances where procedures were not followed and other questionable events.

The rules implemented in the audit log checker do not cover all possible sequences, and the Center continues refining the rules as we are enriching the set of rules based on our experience with the election audits. For any sequence in the audit log that is not covered by the rules a notification is issued, and such audit logs are additionally examined manually. For the cases when the audit log is found to be consistent with a proper usage pattern we add rules to the audit log checker so that such audit logs are not flagged in the future.

Out of the 223 correct cards, 54 (24.2%) cards were flagged because their audit logs did not match our sequence rules.

The audit log analysis produced 106 notifications. Note that a single card may yield multiple notifications. Also recall that not all notifications necessarily mean that something went wrong: a notification simply means that the sequence of events in the audit log did not match our (not-all-inclusive) rules.
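
To see how such a rule-based checker might work, here is a hedged Python sketch; the event names and the known-good patterns are hypothetical, not UConn's actual rules:

```python
import re

# Known-good event sequences. Any audit log that matches none of these
# rules is flagged for manual review. Patterns are hypothetical.
KNOWN_GOOD_RULES = [
    re.compile(r"INIT(,TEST)+,SET_FOR_ELECTION"),
    re.compile(r"INIT(,TEST)+,SET_FOR_ELECTION,ELECTION_CLOSED"),
]

def log_is_expected(events: list) -> bool:
    """Return True if the event sequence matches a known-good pattern."""
    sequence = ",".join(events)
    return any(rule.fullmatch(sequence) for rule in KNOWN_GOOD_RULES)

print(log_is_expected(["INIT", "TEST", "SET_FOR_ELECTION"]))  # True
print(log_is_expected(["INIT", "SET_FOR_ELECTION", "TEST"]))  # False: flag it
```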

This problem (and solution) would never happen in Connecticut

We use manual addition and transcription to add results. Our audits would not catch errors made outside of polling place scanners.

Palm Beach Post: Vendor: software ‘shortcoming’ led to Wellington election fiasco <read>

The short version:

  • Polling place machines counted races and votes correctly
  • Mismatched counters on machines used to accumulate results caused two races to be switched, making the wrong candidates the apparent winners
  • The problem was discovered and corrected based on a post-election audit
  • The problem went undetected in pre-election testing as the only test is the polling place machines

Hats off to Wellington and their post-election audits. This problem (and solution) would never happen in Connecticut:

  • We leave all our totaling and transcription to a two- and three-level process of manual accounting, so we make our errors the old-fashioned way.
  • Our post-election audits only compare the machine tapes to the ballot totals, not to the results posted on the Secretary of the State’s web site (the posted results don’t have enough detail even if we wanted to)

Help is on the way as the Secretary of the State is about to pilot a better accumulation system. Perhaps it will include sufficient detail to check for errors in the subset of ballots we audit, and the law will be changed to audit using those numbers rather than the machine tapes.

Enthusiastic support for the Secretary’s Performance Task Force Recommendations

Last summer and fall, the Secretary of the State convened an Elections Performance Task Force to look at elections and what might be done to improve them in the State of Connecticut. Details, presentations, and videos of the Task Force meetings are available at the Secretary’s web site <here>. The Secretary issued a final report and recommendations <here>

Given the many members, the brief meetings, and the lack of representation of all interests, we were skeptical when the Task Force was convened. To our delight, we find that we can offer endorsement of each of the twenty-one recommendations in the report, starting on page 34.

We strongly endorse those recommendations in bold below [our comments in brackets]

Identify measures that will increase the efficiency and effectiveness of the voting process.

1. The Secretary recommends an amendment to Article 6, Section 7 of the Connecticut State Constitution similar to House Joint Resolution Number 88 of the 2011 legislative session. The amendment would allow the General Assembly to adopt more flexible laws for voting.

2. The Secretary recommends partnering with Professor Heather Gerken to develop a Connecticut Democracy Index. This would allow for benchmarking across municipalities and with other states to track trends in the election process, to measure performance and to gain valuable data that can inform decisions going forward.

3. The Secretary recommends streamlining the absentee ballot process. A working group should be formed to examine and make recommendations around ideas like creating a single absentee ballot application and linking the absentee ballot tracking system with the Centralized Voter Registration System. [Assuming such streamlining does not increase integrity risks or decrease confidence in the process]

4. The Secretary recommends further study of how regionalism could make Connecticut’s electoral system more cost-effective and consistent. For instance, the use of a statewide online voter registration system, regional on-demand ballot printing, and regional voting centers should all be further explored. [Here we would go further to explore complete regionalization, “doing for elections what we have done for probate in Connecticut”]

5. The Secretary recommends that the polling place for district elections be the same as for state elections. This will help eliminate voter confusion caused by having to go to different polling locations for different elections. [This would be convenient, yet if mandated, would be challenging for many towns due to different boundaries and contests]

6. The Secretary recommends exploring better ways of coordinating the printing of ballots with programming of memory cards in order to create a more efficient, reliable and cost-effective process.

7. The Secretary recommends the development of a certification process for Registrars of Voters. Additionally, standards and best practices should be developed for that office around issues such as election administration, voter registration and voter outreach. These standards and best practices may need to account for differences in small, medium and large municipalities. Finally, a mechanism for enforcement and, if necessary, the removal of a Registrar of Voters should be created. [We would especially recommend standardization and better practices for post-election audits and recanvasses, along with better manuals, including creating manuals for each pollworker position]

8. The Secretary recommends that a formal study of the cost of elections be undertaken, and that a standardized set of measures for such costs be established. [We would combine this into the Democracy Index, providing ongoing measures and comparison over time]

Maintain the security and integrity of the voting process.

9. The Secretary recommends the development of a secure online voter registration system in Connecticut. The system should be tied to other statewide databases, such as the Department of Social Services, the Department of Developmental Services, and the Department of Motor Vehicles, to allow for verification of data.

10. The Secretary recommends that the state acquire at least one high speed, high volume scanner to be utilized in the post-election auditing process. This centralization of the process will reduce the fiscal and logistical burdens on towns, as well as provide for a more accurate and secure auditing process. [We are a strong supporter of electronic auditing, done effectively and transparently. The number of scanners and their capacities should be a byproduct of an effective electronic auditing pilot, plan, cost benefit analysis, and appropriate law establishing and governing electronic audits]

11. The Secretary recommends that the post-election auditing process be amended to include all ballots that are machine-counted, including those counted centrally. [We would go farther and subject all ballots cast to selection for audit.]

12. The Secretary recommends that a greater emphasis be placed on ballot security. Ballots should be stored in a secure, locked facility. Additionally, two individuals should always be present whenever these facilities are accessed. This policy should be uniformly followed and enforced.

13. The Secretary recommends that the state join the Electronic Registration Information Center (ERIC), an interstate data consortium that the Pew Center on the States is currently building. This data center would allow participating states to streamline the processes for registering eligible voters; update records of existing voters; and remove duplicate and invalid records from state voter files. The Secretary stresses the need to include multiple agencies in the database, including those that offer public assistance, interact with people with disabilities, and otherwise come into contact with eligible voters who may not normally visit the Department of Motor Vehicles.

Evaluate ways to integrate technology into our election system.

14. The Secretary recommends further exploring the use of new technologies in the election process through pilot programs and examination of other states’ usage. However, the cost and security of any new technologies should be carefully examined. Examples of new technologies for consideration include:

a. Electronic poll books

b. More advanced voting systems for voters with disabilities

c. Online voter registration

15. The Secretary recommends immediate implementation of a statewide web-based electronic reporting system for election results.

16. The Secretary recommends the use of web-based training to standardize election staff training across the state. [We would like to see video training and manuals having a pollworker focus, designed by professional technical writers]

Find ways to increase voter participation, particularly among minorities, young people, people with disabilities, and military and overseas voters.

17. The Secretary recommends Election Day registration in Connecticut and any necessary adjustments to the voter file system to ensure accuracy. Election Day registration has increased voter participation in states where it has been enacted.

18. The Secretary recommends an effort to increase voter participation in Connecticut, with a particular focus on youth, minorities, people with disabilities, and military and overseas voters.

a. Early voting bears further study as a possible mechanism for reaching minority voters. [We are skeptical that early voting has a particular focus on any group of voters]

b. Since the electorate is becoming more mobile, voter registrations should be mobile as well.

c. Connecticut’s curbside voting program should be better advertised to voters with disabilities, all polling places should be easily handicapped accessible, and poll workers at all locations should be properly trained on utilizing the IVS vote by phone system. A viable, better alternative to the IVS system should also be sought.

d. The military and overseas voting process should be amended to allow for the facsimile transmittal of completed absentee ballot applications. The original application would then be returned in the envelope along with the completed absentee ballot via mail, in order for the ballot to be counted. [Fax transmission should only be required to obtain a blank ballot in situations where the voter cannot print a blank ballot]

e. The military and overseas voting process should be streamlined by the electronic transmission of printable, mailable ballots. This, along with the above recommendation, would eliminate the mailing time of transmitting completed applications and blank ballots through manual post, and would allow for more time for participation by military and overseas voters.

f. The electronic transmission of ballots to military and overseas voters should be further streamlined through the use of the Centralized Voter Registration System. [Having the system aid the overseas voter in downloading their correct blank ballot]

19. The Secretary recommends that existing voter registration provisions included in legislation such as the National Voter Registration Act be fully enforced. The Secretary further recommends that Connecticut’s Department of Corrections be designated as an official voter registration agency.

20. The Secretary recommends a concerted effort to educate the public and the incarcerated population about the voting rights of those detained pre-sentencing and the restoration of voting rights to felons. The Secretary further recommends that the restoration of voting rights be extended to include parolees, as is the case in over a dozen states.

21. The Secretary recommends that Election Day be declared a holiday, as it is in many countries, and/or that elections include in-person voting on a weekend day. This would grant citizens more time to vote and would allow for the use of students and persons with the day off as poll workers.

We note several caveats:

Our endorsement of proposals is conditional on the details of any proposed implementation or law. For instance, although we support Election Day Registration, we do not support the current bill before the Legislature that would call for Election Day Registration, because the bill is inadequate to protect the rights of EDR voters and other voters, and could result in chaos and uncertainty.

The report is the Secretary of the State’s, not approved by or endorsed by the Task Force as a whole.

Contained in this report are the findings of the Election Performance Task Force, organized by subcommittee subject matter, with the additional category of voting technology. The Secretary utilized these findings along with feedback from members of the task force, other interested parties, and the public to shape the recommendations that are detailed at the end of this report.

While we endorse the recommendations, we do not endorse the details in the report itself:

  • The statistical information and conclusions do not come close to meeting rigorous standards in justifying the conclusions reached.
  • As noted in the report, the cost of elections information provided is questionable. We find it wildly inaccurate to include data suggesting that elections might have been conducted at a cost per voter less than the cost of printing a single ballot.
  • We strongly disagree that there is any basis to predict that online voting will be a safe and accepted practice within ten years.

There is a lot to do in all the recommendations. It will take time, money, and deliberate work with everyone at the table. Our hope is that each of the recommendations will be thoroughly explored, evaluated, and acted upon, that none get overlooked.

Improved election reporting on the way

CTMirror: Connecticut gets low grade for online election info, but big changes are coming <read>

We are not surprised that the state elections web gets low grades again when compared to other states. But there is some potential good news.

But big changes are coming, including a precinct-by-precinct election reporting system that the state hopes to test in April and use publicly in August to gather unofficial results during the expected primaries for U.S. Senate and state legislative races.

This sounds like the kind of change we called for during the 2010 election for Secretary of the State: What could a Secretary of the State Do? <read>

Provide detailed, accurate, downloadable election information and notices on the Secretary of the State’s web site

In a Pew study, the Connecticut site ranked 48th out of 50 states. We could debate whether we should be higher in the rankings, or instead work to emulate and surpass the top-ranked states.

The process of accumulating voting results in Connecticut is an error-prone three-step process of addition and transcription: from polling place to town hall, to the Secretary of the State’s Office, and to the web. Citizens have identified errors large and moderate, errors of a magnitude which could change election results, the initiation of recanvasses, or ballot access. See <here> <here>

Without reliable, publicly posted results, post-election audits that inspire confidence and provide integrity cannot be accomplished. A trusted audit requires selecting districts for audit against previously posted results. Since we audit against optical scanner tapes, and the tape results are not posted, we fail to meet that requirement.

What can be done?

  • Post copies of the original documents: All district and central-count absentee ballot Moderators’ Reports and copies of scanner tapes should be faxed to the Secretary of the State’s Office and posted on the SOTS web site. (We know this is easily possible since the SOTS web site has recently included images of all local ballots, and is capable of the quick addition of press releases)
  • Post detailed and summary data: The SOTS could use temporary employees or outsourcing to input and double-check the input of all that data, then post it to the web site in human-readable and downloadable formats.
  • Side benefit: A free public audit: As a byproduct, the public, candidates, and parties could check and audit the data at no cost to the state, as the sketch below illustrates. To do that today would involve visiting town halls across the state and performing all the calculations done today by hand; efficient auditing of selected districts is not possible because detailed data is not currently posted.
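
A minimal sketch of what that free public audit could look like, assuming detailed per-district results were posted in downloadable form (the district, candidate names, and counts here are hypothetical):

```python
# Compare officially posted per-district results against an audit hand count.
# All data is hypothetical, for illustration only.
posted_results = {"District 3": {"Smith": 321, "Jones": 290}}
audit_hand_count = {"District 3": {"Smith": 321, "Jones": 289}}

for district, counts in audit_hand_count.items():
    for candidate, hand_votes in counts.items():
        posted = posted_results[district][candidate]
        if hand_votes != posted:
            print(f"{district}, {candidate}: posted {posted}, hand count {hand_votes}")
```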

This does not go quite as far as we would like, and the details in the article are a bit sketchy. Hopefully the data will include details for each district that can be used to compare with the audit. Since the audit exempts all hand-counted ballots and all centrally counted absentee ballots, to be useful the results must provide separate totals for hand-counted and machine-counted ballots in each district polling place, and separate centrally counted absentee totals by district. It would also be important to have the data in a downloadable format.

We would also hope that all registrars and moderators will support this initiative. We are not sure this can actually be required and enforced without a change in the law.