Official Audit Report – provides no confidence in officials or machines

Last week the University of Connecticut (UConn) released its official post-election audit report on the November 2011 election, seven months after the election and one month after the shredding of all ballots. Once again, as we said last time, the report is “flawed by a lack of transparency, incomplete data, and assumed accuracy”. In our opinion, the report falls short of the rigor of the fine peer-reviewed papers <e.g.> and valuable memory card reports <e.g.> that UConn provides.

The report is available at the UConn site: Statistical Analysis of the Post-Election Audit Data 2011 November Election <read>

Our strongest concern with the report is its two underlying assumptions, which defy common sense and logic:

  • That officials are always correct when they claim or agree that they counted inaccurately, when hand counts and optical scanner tapes do not match.
  • That when officials count inaccurately, it implies that the optical scanners did in fact count accurately.

These assumptions leave us wondering:

  • How do officials know that they counted inaccurately?
  • Should we put blind trust in the judgment of officials who claim they cannot count accurately?
  • How accurate are the unaudited official hand counts, compiled late on election night, that provide a portion of the totals in each election? We have only one, perhaps extreme, example to go on, coupled with some significant errors in the comparatively ideal counting conditions of the audits.
  • If every difference between scanners and officials is attributed to human error, then under what circumstances would we ever recognize an error or fraud, should one occur?

According to the report:

Audit returns included 45 records with discrepancies higher than 5, with the highest reported discrepancy of 40. It is worth noting that 75% (30 out of 45) of the records that were subject to the follow up investigation already contained information indicating that the discrepancies were due to the human error. Following this initial review the SOTS Office [Secretary of the State’s Office] performed additional information gathering and investigation of those 45 records. The final information was conveyed to the Center on May 18th of 2012 [after expiration of the six month ballot retention period]…

For the revised records SOTS Office confirmed with the districts that the discrepancies were due to human counting errors.

So, apparently if any official included text in the local audit report indicating human error, the report was accepted as indicating inaccurate hand counting and implying accurate scanner counting. For example, <a 26% difference in counting 50 votes. Or was it actually 64 votes?>
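As we read it, the screening rule described in the report amounts to something like the following. This is a minimal sketch for illustration only; the record structure, field names, and sample figures are our assumptions, not UConn’s actual data or code:

```python
# Sketch of the screening rule described in the UConn report:
# records whose hand-count/scanner discrepancy exceeds a threshold
# are flagged for follow-up investigation.
# Field names and sample figures are hypothetical, for illustration only.

def flag_for_followup(records, threshold=5):
    """Return the records whose discrepancy exceeds the threshold."""
    return [r for r in records
            if abs(r["hand_count"] - r["scanner_count"]) > threshold]

# Hypothetical audit records (not actual 2011 data):
records = [
    {"district": "A", "hand_count": 100, "scanner_count": 100},  # exact match
    {"district": "B", "hand_count": 107, "scanner_count": 100},  # discrepancy of 7
    {"district": "C", "hand_count": 140, "scanner_count": 100},  # discrepancy of 40
]

flagged = flag_for_followup(records)
# Districts B and C would be subject to follow-up investigation.
```

Note that under the practice described above, a flagged record is then resolved simply by an official’s statement of human error, with no recount of the ballots.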

Last time, for the Nov 2010 audit report, we incorrectly assumed that the Secretary of the State’s Office conducted non-public ballot counting to investigate some of the differences. To avoid making that mistake again we asked for a description of the investigations. Peggy Reeves, Assistant to the Secretary of the State for Election, Legislative and Intergovernmental Affairs, provided a prompt description to us:

In response to your inquiry, our office performed the additional investigations referenced in the UCONN report by phone call only and we did not visit any municipalities and did not count any additional ballots. Our office did not create a list of subject towns and as such, have no such list to provide you pursuant to your request. Our office identified subject municipalities by simply reviewing the audit returns submitted to our office and calling the municipalities in question to inquire as to the reason for the discrepancy. In our experience, we do concur with the statement that hand counting errors do create the reported discrepancies.

So, the investigations apparently consisted of calling some or perhaps all local officials and having them agree that they did not count accurately. No list of such towns was created, so we are left to speculate whether some or all of the towns identified by UConn were contacted.

Unlike the authors of the official report, the Coalition actually observes the conduct of the majority of counting sessions of post-election audits and provides comprehensive observation reports on how the local audits are conducted. The Coalition also provides ever more extensive detailed data, copies of official local reports, and statistics derived from those reports, giving the public and officials the opportunity to verify the details in our analysis of discrepancies.

We do agree with the UConn report and the SOTS Office that most differences can be attributed to human counting errors. Coalition reports show that counting sessions are frequently not well organized, that proven counting methods are frequently not used, that official procedures are frequently not followed, that in many cases officials do not double-check ballots and counts, and that recounting is often not performed when differences are found. Yet as we have said over and over:

We have no reason to question the integrity of any official. We have no evidence that our optical scanners have failed to count accurately. However, if every difference between a hand count and a scanner count is dismissed as a human counting error, then if a machine ever has counted, or were ever to count, inaccurately, whether by error or by fraud, the inaccuracy would be unlikely to be recognized under the current system.

Given the above, we see no reason to comment on the official statistical analysis of inaccurate data, adjusted without counting or credible investigation.

We will comment that Coalition observations indicate that officials do not understand the intended meaning of “questionable votes” and frequently tend to classify far too many votes as questionable: votes that should be expected to be, and normally are, read correctly by the optical scanners.

We do disagree with the Secretary of the State when she and her press release state:

“Connecticut has the toughest elections audit law in the country and I
am confident at the end of this year’s audit the numbers will once again match”…

The provisions in the law, developed in close cooperation with the computer science department at the University of Connecticut, give Connecticut one of the strictest audit statutes in the country…

The 10% audit does entail counting a relatively large percentage of ballots, as is necessary in a fixed-percentage audit in a relatively small state, yet the law is full of loopholes, and we would not characterize the statute or its operation in practice as “strict”.

Update 07/07/2012: Audit not Independent

We are reminded by a Courant correction today that this audit does not meet any reasonable definition of independent because:

  1. The local counting is supervised by the individuals responsible for the local conduct of the election.
  2. The University of Connecticut is contracted and dependent financially on the Secretary of the State, the Chief Elections Official.
  3. The Secretary of the State also revises and dictates the data used in the report.
