Audit Report: Flawed by lack of transparency, incomplete data, and assumed accuracy

Update 8/12: We have received a clarification of the official report from the Deputy Secretary of the State, which modifies our opinion <read>


Last week, the University of Connecticut released its official post-election audit report on the November 2010 election, just short of seven months after the election: Statistical Analysis of the Post-Election Audit Data, 2010 November Election <read>

Like previous reports, this official report fails to provide confidence in the post-election audit process and in the accuracy of the election itself.

The audit data received by the VoTeR Center contains 867 records. Among the 867 records received by the Center, 20 records (2.3%) were incomplete. This report deals with 847 records (97.7%) among which 799 records (94.3%) are from the original data and 48 records (5.7%) were revised based on the follow up conducted by the SOTS [Secretary of the State’s] office.

As demonstrated by the Coalition audit reports, there are major shortcomings in the post-election audit process: official hand counts do not match optical scanner counts, and incomplete reports are submitted. Based on the UConn report, these differences are addressed in three inadequate ways:

1) Some results are recounted by state officials outside of public view:

The VoTeR Center’s initial review of audit reports prepared by the towns revealed a number of returns with unexplained differences between hand and machine counts. The vast majority of records with high discrepancies were concentrated in the following three districts: East Haven (Deer Run School) with the highest reported discrepancy of 180, Hartford (Burns School) with the highest reported discrepancy of 170, and Preston (Town Hall) with the highest reported discrepancy of 55. Additionally, one or more discrepancies were reported in all but one district for the town of Orange; here the highest reported discrepancy was 14, however this could not be explained as no questionable ballots were reported. Following this initial review the SOTS Office performed additional information gathering and investigation and, in some cases, conducted independent hand-counting of ballots in the four districts mentioned above. The final information was conveyed to the VoTeR Center on June 17th of 2011 for the 48 records pertaining to those districts. The rest of the records (799 out of 847) discussed in this audit report are the original records reported by the towns. [Emphasis ours in all quotes]

We interpret this to mean that some of the counts with differences were recounted by state officials behind closed doors, and that, based on those recounts, the Secretary of the State’s Office assumed that all the differing counts in those districts were hand-counting errors. In other words, because election officials made a counting error in one case in a district, and the machine was accurate in that case, it is assumed that the machine made no errors in any of the other cases in that district where differences occurred.

For the last couple of years we have repeatedly, to no avail, requested that recounts of ballots for the audit be announced and open to public observation, or at least that the Coalition be notified and given the opportunity to observe. Lack of transparency in the process provides no basis for public confidence in the process, in the audit, and ultimately in our election system.

As a service to the public, the Coalition provides transparent access to the official audit count reports. You can see scanned copies of the original reports from the towns mentioned <here>

2) Other differences were not recounted but “affirmed” to be hand count errors:

187 records (22.1%) showing discrepancy of 2 to 5 votes, 42 records (4.9%) showing discrepancy of 6 to 13 votes (for this group, although no manual review of the discrepancies was conducted, the SOTS Office affirmed that the discrepancies were due to hand counting errors),

We interpret this to mean that the Secretary of the State’s Office has such faith in the accuracy of our optical scanners and the election process that it assumes any differences must be caused by hand-counting errors. The purpose of the audit is to determine the accuracy of the optical scanners; that purpose is negated when the accuracy is assumed.

3) Some audit reports contained incomplete data and such data was not included in the report:

The audit data received by the VoTeR Center contains 867 records, where each record represents information about a given candidate: date, district, machine seal number, office, candidate, machine counted total, hand counted total of the votes considered unquestionable by the auditors, hand counted total of the votes considered questionable by the auditors, and the hand counted total, that is, the sum of undisputed and questionable ballots. This report contains several statistical analyses of the audit returns and recommendations. The statistical analysis in this report deals with the 847 records that are sufficiently complete to perform the analysis.

We interpret this to mean an assumption that the legally mandated audit of three races in a randomly selected 10% of districts is valid even if some of the results are not reported. A statistical calculation based on a random sample is invalid when some of that sample is omitted for reasons that are not themselves random.
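The distortion caused by non-random omission can be illustrated with a small, purely hypothetical calculation. The numbers below are invented for illustration and are not taken from the UConn report or any actual audit:

```python
# Hypothetical illustration only -- these figures are invented,
# not taken from the UConn report or any actual audit.

# A random 10% sample of 100 audit records, 5 of which show a
# discrepancy between the hand count and the machine count.
sample = [1] * 5 + [0] * 95          # 1 = discrepancy, 0 = match
full_rate = sum(sample) / len(sample)
print(f"discrepancy rate, full sample: {full_rate:.1%}")        # 5.0%

# Now suppose 4 of the 5 discrepant records are set aside as
# "incomplete" while every clean record is kept. The omission is
# correlated with the very thing being measured, so the estimate
# collapses even though no machine got any more accurate.
kept = [1] * 1 + [0] * 95
omitted_rate = sum(kept) / len(kept)
print(f"discrepancy rate, after omission: {omitted_rate:.1%}")  # 1.0%
```

Because the dropped records are exactly the ones most likely to contain discrepancies, the remaining data understate the error rate. For the audit statistics to be meaningful, any omission must itself be random, or the omitted records must be separately accounted for.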

Spelling out our concerns, as we said last year, after the November 2009 official UConn report:

  1. All counting and review of ballots should be transparent and open to public observation.  Both this year and last year we have asked that such counting be open and publicly announced in advance. [And again in 2011 to the new administration]
  2. Simply accepting the word of election officials that they counted inaccurately is hardly reliable, scientific, or likely to instill trust in the integrity of elections.  How do we know how accurate the machines are without a complete audit? Any error or fraud would likely result in a count difference, and would very likely be [or have been] dismissed.
  3. Even if, in every case, officials are correct that they did not count accurately, it cannot be assumed that the associated machines counted accurately.
  4. Simply ignoring the initial results in the analysis of the data provides a simple formula to cover up, or fail to recognize, error and fraud in the future.

As we have said before, we do not question the integrity of any individual, yet closed counting of ballots leaves an opening for fraud and error to go undetected and defeats the purpose and integrity of the audit.

We also note that in several cases officials continued to fail to perform the audit as required by law or submitted incomplete reports.

There are other flaws in the audit law.

  • For instance, there is no legally mandated deadline for the towns to submit audit reports to the Secretary of the State’s Office or for UConn to provide the analysis. We believe seven months after an election is a long time for the public and candidates to wait.
  • As the Coalition covered in our August 2010 Post-Election Audit Report and the November 2010 Post-Election Audit Report, the list of districts used in the random district drawing is inaccurate and challenging to verify. This also negates reliability, accuracy, and confidence in the random audit.
  • The November 2010 election most glaringly pointed out the need for the audit to select from all ballots in the election, not just those counted by optical scanners in the polling place. Among the omitted are centrally counted optically scanned ballots and all originally hand-counted ballots. The Coalition Bridgeport Recount Report demonstrated to the public that hand-counted ballots can be counted inaccurately on election night. The official election system was not able to audit or recanvass those ballots and has never officially, as far as we know, recognized even the possibility that the original hand-counted, hand-transcribed, and hand-totaled numbers may be inaccurate.

As we have pointed out, over and over:

We have no reason to question the integrity of any official. We have no evidence that our optical scanners have failed to count accurately. However, if every difference between a hand count and a scanner count is dismissed as a human counting error, then if a machine ever did, by error or fraud, count inaccurately, it would be unlikely to be recognized by the current system.

People and scanners have made and will make counting errors. The solution is transparent counting, performed multiple times to ensure accuracy, along with credible ballot security.
