The University of Connecticut (UConn) Center for Voting Technology Research posted its memory card report for the November 2011 election: Technological Audit of Memory Cards for the November 8, 2011 Connecticut Elections <read>
We applaud Dr. Alexander Shvartsman and his team for developing the technology to perform these innovative tests, the diligence to perform the tedious tests, and the fortitude to report the facts.
We do not applaud officials’ lack of cooperation with the audit, or their lack of compliance with memory card procedures. We are left wondering: if this is the level of compliance and cooperation when officials know their efforts will be disclosed, what is their compliance when their actions are unlikely or impossible to scrutinize? Where is the outrage?
Let’s start with some good news.
We have had problems for years with bad memory cards, which UConn describes as cards containing “junk data”. Based on the questionable sample of bad cards sent to UConn, an estimated 7.4% to 17.4% of cards were bad in the Nov 2011 election. This is similar to statistics generated in the Coalition post-election audit survey of officials. The survey showed a huge increase in the number of municipalities reporting bad cards in Nov 2011: 90%, compared with the previous high of 56% reported a year earlier <Coalition report page 26-27>. Anecdotally, many towns are hit with an overwhelming percentage of bad cards. We speculate that the programming vendor, LHS Associates, receives batches of returned bad cards, installs new batteries, and the cards tend to stay together in batches, so that in the next election many or all of the cards programmed for an unlucky municipality come from the same returned batch.
The good news is that our memory card nightmare may have a cure in some future election, perhaps 2012 or 2013:
New non-volatile (battery-less) memory card was recently developed by the vendor. Our preliminary analysis of this card confirmed that it is completely compatible with AV-OS systems deployed in Connecticut. It is expected that a pilot deployment of the new cards by the SOTS Office will occur in the near future. The use of the new card should eliminate the major cause of memory card failures.
No word on State Certification, which would presumably be relatively easy, yet is required before such cards could be used in an actual election.
At most 30.5% official compliance with pre-election audit requests
For the pre-election audit, the Center received 453 memory cards from 331 districts. Cards were submitted for two reasons per instructions from the SOTS Office: (a) one of the four cards per district was to be selected randomly and submitted directly for the purpose of the audit, and (b) any card was to be submitted if it appeared to be unusable. Given that cards in category (a) were to be randomly selected, while all cards in category (b) were supposed to be submitted, and that the cards were submitted without consistent categorization of the reason, this report considers all unusable cards to fall into category (b).
Among these 453 cards, 223 (49.2%) fall into category (a). 100% of these cards were correct. These cards contained valid ballot data and the executable code on these cards was the expected code, with no extraneous data or code on the cards. We note that the adherence to the election procedures by the districts is improving; however, the analysis indicates that the established procedures are not always followed. It would be helpful if reasons for these extra-procedural actions were documented and communicated to the SOTS Office in future elections.
According to the report 331 districts sent 453 cards, but at most only 223 of those cards were not bad cards. Thus for at most 223 of the 730 districts in the election did registrars send in a card as requested by “instructions from the SOTS [Secretary of the State] Office”. How many of these cards were in fact “randomly selected”? There is no way for the public to be sure. So we start with a maximum compliance rate of 223/730, or 30.5%.
Without a full sample, without some assurance of random selection, the statistical significance of the report is questionable and there is clearly a formula for a fraudster to avoid the memory card audit.
Considering pre-election testing we are down to at most 18.4% official (registrar) compliance:
UConn reported that 89 of those 223 cards were not set to election mode, yielding 134/223, or 60.1%, correctly set in election mode. Thus for only 134/730, or 18.4%, of districts did registrars comply with both of the simple procedures: sending in one card per district, and testing all cards and leaving them in election mode.
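The cascading compliance arithmetic above can be checked in a few lines. The district and card counts are taken from the report; the script itself is just an illustrative check, not part of the audit:

```python
# Compliance arithmetic from the UConn report figures (illustrative only).
TOTAL_DISTRICTS = 730   # districts in the Nov 2011 election
CORRECT_CARDS = 223     # correct category (a) cards received
SET_FOR_ELECTION = 134  # of those, cards left in "Set for Election" mode

# At most one correct card was requested per district, so district
# compliance is bounded above by the card counts:
max_submission_compliance = CORRECT_CARDS / TOTAL_DISTRICTS
tested_and_set = SET_FOR_ELECTION / CORRECT_CARDS
full_compliance = SET_FOR_ELECTION / TOTAL_DISTRICTS

print(f"submitted a correct card: {max_submission_compliance:.1%}")  # 30.5%
print(f"of those, set for election: {tested_and_set:.1%}")           # 60.1%
print(f"fully compliant districts: {full_compliance:.1%}")           # 18.4%
```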
This is only the most predominant of several problems uncovered:
(b) Card Status Summary:
Here status refers to the current state of the memory card, for example, loaded with an election, set for election, running an election, closed election, and others.
134 cards (60.1%) were in Set For Election state. This is the appropriate status for cards intended to be used in the elections. This percentage is an improvement over the 2010 November pre-election audit, where 41.6% of the cards were set for elections.
89 cards (39.9%) were in Not Set for Election state. This status would be appropriate for the cards that either did not undergo pre-election testing or were not prepared for elections, but not for the cards that are fully prepared for an election. This suggests that the corresponding districts sent these cards for the audit without first finalizing the preparation for the election. This is not a security concern, but an indication that not all districts submit cards at the right time (that is, after the completion of pre-election testing and preparation of the cards for the elections).
(c) Card & Counter Status:
Here additional details are provided on the status of the counters on the usable cards. The expected state of the cards following the pre-election testing is Set for Elections with Zero Counters.
All of the 134 cards (60.1%) that were found in Set For Election state had Zero Counters. This is the appropriate status for cards intended to be used in the elections.
85 cards (38.1%) were in Not Set for Election state and had Non-Zero Counters. This is not an expected state prior to an election. This suggests that the cards were subjected to pre-election testing, but were not set for elections prior to their selection for the audit. This situation would have been detected and remedied if such cards were to be used on Election Day as the election cannot be conducted without putting the cards into election mode.
4 cards (1.8%) were found to be in Not Set for Elections state with Zero Counters. This is similar to the 85 cards above. This situation would have been similarly detected and remedied if such cards were to be used on the election day.
Taking the above percentages together, it appears that almost all districts (60.1% + 38.1% = 98.2%) performed pre-election testing before submitting the cards for the audit.
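The counter-status breakdown above can be reproduced directly from the card counts in the report. A small sketch (the counts are from the report; the code is ours):

```python
# Counter-status breakdown of the 223 correct pre-election cards
# (counts from the UConn report; code is an illustrative check).
cards = {
    "Set for Election, zero counters": 134,
    "Not Set for Election, non-zero counters": 85,
    "Not Set for Election, zero counters": 4,
}
total = sum(cards.values())  # 223 correct cards in all
for label, n in cards.items():
    print(f"{label}: {n} ({n / total:.1%})")

# Cards in the first two groups show evidence of pre-election testing:
tested = (134 + 85) / total
print(f"apparently tested before submission: {tested:.1%}")  # 98.2%
```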
(d) Card Duplication:
The only authorized source of the card programming in Connecticut is the external contractor, LHS Associates. The cards are programmed using the GEMS system. Card duplication is performed using the AV-OS voting tabulator; one can make a copy (duplicate) of a card on any other card by using the tabulator’s duplication function. SOTS policies do not allow the districts to produce their own cards by means of card duplication.
Card duplication is a concern, as there is no guarantee that duplication faithfully reproduces cards, and it masks the problem with card reliability. Additionally, it is impossible to determine with certainty who resorted to card duplication, and why.
There were 18 cards involved in duplication. 12 of these cards (66.7%) were master cards used for duplication. 6 cards (33.3%) were copy cards produced by duplication.
We manually examined the audit logs of all duplicated cards and compared the initialization date of the card against the date of the duplication. We established that most of the cards (16 out of 18) were most likely involved in duplication at LHS. 12 out of 16 were involved in duplication either on the day of initialization, or the day after. The remaining 4 cards were involved in duplication within 4 days of initialization, however they were tested and prepared for election at a later date (4 to 7 days after the duplication occurred).
Only two cards out of 18 were most likely involved in duplication at the district, as they were prepared for election within a few minutes after the duplication event was recorded. This is an improvement from prior audits.
Given the SOTS policies, the districts must not be producing their cards locally. If a district finds it necessary to duplicate cards, they need to make records of this activity and bring it to the attention of the SOTS Office.
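The report’s manual classification of duplication events rests on timing: duplication within about a day of initialization points to the vendor, while election preparation within minutes of duplication points to the district. A rough sketch of that reasoning (the thresholds and field names are our assumptions, not the AV-OS audit-log format):

```python
from datetime import datetime, timedelta

def classify_duplication(initialized: datetime,
                         duplicated: datetime,
                         set_for_election: datetime) -> str:
    """Guess where a card duplication occurred from audit-log timestamps.

    Mirrors the report's manual analysis; thresholds are illustrative.
    """
    # Prepared for election within minutes of duplication -> district.
    if set_for_election - duplicated <= timedelta(minutes=30):
        return "likely duplicated at the district"
    # Duplicated on the day of initialization or the day after -> vendor.
    if duplicated - initialized <= timedelta(days=1):
        return "likely duplicated at the vendor (LHS)"
    return "needs manual review"

# Example: a card prepared for election ten minutes after duplication.
print(classify_duplication(datetime(2011, 10, 1, 9, 0),
                           datetime(2011, 10, 20, 14, 0),
                           datetime(2011, 10, 20, 14, 10)))
```

The remaining cases in the report (duplicated within 4 days of initialization but prepared much later) fall through to manual review here, which matches how the Center handled ambiguous logs.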
Post-election, audited districts complied at 27.4%
The registrars for districts selected for the post-election audit were “asked to submit cards that were used in the election for the post-election technological audit”; 20/73, or 27.4%, complied.
For the post-election audit, the Center received 157 cards. Out of these cards only 20 cards were used on Election Day. Given that the small sample of such cards does not allow for a meaningful statistical analysis, we report our finding in abbreviated form. To enable more comprehensive future post-election audits it is important to significantly increase the submission of cards that are actually used in the elections.
Cards were submitted to the Center for two reasons per instructions from the SOTS Office: (a) the districts that were involved in the post-election 10% hand-count audit were asked to submit the cards for the post-election technological audit, and (b) the districts were encouraged to submit any cards that appeared to be unusable in the election. Given that cards in category (a) were to be sent from the 10% of randomly selected districts, while all cards in category (b) were supposed to be submitted, and that the cards were submitted without consistent categorization of the reason, unusable cards are disproportionately represented.
Can you imagine such numbers from any other technology or Government function? Where is the outrage?
We are all used to thumb drives, a functionally similar technology at much lower cost. What is your experience? Do they suddenly fail 18% of the time after working correctly for months or years? How about your cell phone or GPS, each far more complicated than a memory card?
Recently Connecticut was outraged when 42 of the roughly 800 state employees obtaining food stamps were charged with obtaining them illegally. That is about a 95% compliance rate, quite a bit higher than the 18.4% election official compliance here.
Even the UConn basketball team does better, with a quarter of its players graduating. Milner School, the subject of our Governor’s concern, had 23.5% of 3rd graders passing the reading test. But this is not like students failing tests; it is more like Boards of Education ensuring that the curriculum is followed less than 19% of the time.
Let us not forget that the most complex memory cards are not tested:
In addition to the four cards for each district, in mid-size to large towns absentee ballots are counted centrally by optical scanners with memory cards that are programmed to count ballots for all districts in such towns. These cards are not included in the post-election audits required by law, and apparently were not included in requests for memory card audits.
Sadly, most of this is entirely legal
In Connecticut, election procedures are not enforceable, so there is no penalty for officials who do not follow them. The entire memory card audit is based on procedures, not law.
Also check out some of the audit log analysis in the report
UConn inspected audit (event) logs on the memory cards, discovering several instances of where procedures were not followed and other questionable events.
The rules implemented in the audit log checker do not cover all possible sequences, and the Center continues refining the rules as we are enriching the set of rules based on our experience with the election audits. For any sequence in the audit log that is not covered by the rules a notification is issued, and such audit logs are additionally examined manually. For the cases when the audit log is found to be consistent with a proper usage pattern we add rules to the audit log checker so that such audit logs are not flagged in the future.
Out of the 223 correct cards, 54 (24.2%) were flagged because their audit logs did not match our sequence rules.
The audit log analysis produced 106 notifications. Note that a single card may yield multiple notifications. Also recall that not all notifications necessarily mean that something went wrong: a notification simply means that the sequence of events in the audit log did not match our (not-all-inclusive) rules.
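The checker the report describes is essentially rule-based sequence matching: known-good event sequences are encoded as rules, and any log that matches no rule raises a notification for manual review. A toy sketch of that idea; the event names and rules here are illustrative, not the actual AV-OS log vocabulary:

```python
# Known-good audit-log sequences (illustrative, not the real rule set).
KNOWN_GOOD = [
    ["initialized", "ballot loaded", "pre-election test", "set for election"],
    ["initialized", "ballot loaded", "pre-election test", "set for election",
     "election opened", "election closed"],
]

def check_log(events: list[str]) -> list[str]:
    """Return notifications; an empty list means the log matched a rule."""
    if events in KNOWN_GOOD:
        return []
    # A notification does not necessarily mean something went wrong --
    # only that the sequence falls outside the (not-all-inclusive) rules,
    # so the log gets examined manually.
    return [f"unmatched event sequence: {events}"]

print(check_log(["initialized", "ballot loaded",
                 "pre-election test", "set for election"]))  # []
print(check_log(["initialized", "set for election"]))        # one notification
```

When a manually examined log turns out to be a proper usage pattern, its sequence would simply be appended to the rule list, which is the refinement loop the report describes.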