Brennan Center: The Machinery Of Democracy

On August 1st the Brennan Center released a report, Post-Election Audits: Restoring Trust in Elections, which has been covered elsewhere. Today we will look at the security portion of an earlier report, The Machinery Of Democracy: Accessibility, Usability, and Cost, and its implications for Connecticut.

The tone of the report is serious. The conclusions are serious. Like all computerized voting systems, optical scan voting machines are vulnerable: they are most vulnerable to malicious software, they “pose a real danger to election integrity”, and most jurisdictions have implemented none of the recommended countermeasures.

A key finding:

The Brennan Center’s Task Force on Voting System Security reviewed more than 120 potential threats to voting systems…attacks involving the insertion of software attack programs or other corrupt software are the least difficult attacks against all electronic systems currently purchased when the goal is to change the outcome of a close statewide election.

Note that the report emphasizes changing the outcome of close statewide elections. Vulnerabilities were assessed on several factors, including the number of people needed to steal a reasonably close statewide race and the difficulty of countermeasures. This does not mean that the vulnerabilities for nationwide, house district, or local races are insignificant. In fact, the fewer districts in a race, the harder an attack may be to detect and the fewer people required to carry it out.

Three fundamental points listed in the report:

(1) [All current voting systems, including optical scan,] “have significant security and reliability vulnerabilities, which pose a real danger to the integrity of national, state, and local elections.” (2) “The most troubling vulnerabilities…can be substantially remedied…at the state and local level.” (3) “Few jurisdictions have implemented any of the key countermeasures that could make the least difficult attacks against voting systems much more difficult to execute successfully.”

Two pieces of bad news and one piece of good news: we can remedy the problems. First we must be aware of the problems and recognize that software is the most vulnerable part of the system. We must be concerned about all attacks and listen to vendors when they advise strong security procedures for memory cards, but recognize that this is only one measure, one that protects us from some of the attacks some of the time. We must recognize the weakest part of the system: the programming of the system and the memory cards. Those who program the cards and the election system control the election results, and those with detailed knowledge of the system can collude with unscrupulous individuals to attack it after it is programmed.

The Task Force recommended the implementation of six security measures.

  1. Solid, routine, paper to electronic audits after every election
  2. Realistic parallel testing of election machines on Election Day
  3. Ban wireless components
  4. Transparent random selection for all auditing procedures
  5. Decentralized programming and administration
  6. Clear and effective procedures for acting on errors detected

While I don’t completely agree with all of the recommendations, most are valuable and effective, and several are absolutely necessary. Let’s start with a summary in which I grade Connecticut on a 0 to 100 scale, along with a letter grade that takes the cost/benefit into account:

1. Solid Audits (Necessary: Yes; CT Grade: F, 20%). 10% of districts, 3 or 20% of races, no audit of questions or referendums, many municipal elections and races not audited.
2. Parallel Testing (CT Grade: C, 50%). No parallel testing, but Connecticut does have independent testing and reporting of software and equipment by UConn.
3. Ban Wireless (CT Grade: A, 100%). If you think this is a minor item, read the report.
4. Random Selection (Necessary: Yes; CT Grade: C, 67%). Random selection of districts and races; however, no random selection of election staff to districts for the audits.
5. Decentralized Programming/Administration (Necessary: No; CT Grade: D, 25%). Management decentralized to Registrars and centralized to the Secretary of the State; programming centralized and outsourced.
6. Clear Error Actions (Necessary: Yes; CT Grade: D, 50%). PA 07-194 provides some direction, but lacks clear objective and subjective criteria.


Solid Audits – Since this report, there have been several reports dealing with the statistical level of auditing necessary to detect errors or fraud. Most of these define the audit level needed to reach a level of confidence (a highly technical term in statistics) that the candidate chosen by the intent of the voters was also the winner of the audited contest. The most cost-effective audits base the audit percentage on the initially reported margin and the number of districts involved in the race. With a 10% audit of districts, Connecticut audits a number that is normally more than sufficient for statewide races, often sufficient for U.S. House races, but quite insufficient for municipal, state senate, and state house races. Also, Connecticut audits only 3 or 20% of the races in the 10% of districts selected, leaving most races in most elections unaudited, and referendums and questions are never audited. Attempted fraud in a statewide or U.S. House race may risk detection; however, the odds of detection in more local races are unlikely to be a deterrent.
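As a rough illustration of why a flat 10% district audit performs so differently across races, here is a short sketch (my own, not from either report) of the chance that a uniform random sample of districts catches at least one corrupted district. The district counts and attack sizes below are hypothetical.

```python
from math import comb

def detection_probability(total_districts, corrupted, audited):
    """Chance that a uniform random sample of `audited` districts
    includes at least one of the `corrupted` districts."""
    if audited >= total_districts:
        return 1.0
    # P(miss every corrupted district), from the hypergeometric distribution
    p_miss = comb(total_districts - corrupted, audited) / comb(total_districts, audited)
    return 1.0 - p_miss

# Hypothetical statewide race: 700 districts, 30 corrupted, 10% (70) audited
print(detection_probability(700, 30, 70))   # well above 0.9
# Hypothetical local race: 10 districts, 1 corrupted, 10% (1) audited
print(detection_probability(10, 1, 1))      # only 0.1
```

The same 10% sample that makes statewide tampering very likely to be caught leaves a one-district local attack with a 90% chance of escaping the audit entirely.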

Parallel Testing – I understand the potential deterrent, but I am skeptical that it is worth the cost. The parallel testing envisioned by the report is the random selection of machines to test on Election Day under conditions as realistic as possible. To do this on any reasonable scale would be expensive, especially considering the need for election staff to be conducting an election. To do it effectively would require testing as many polling places as a random audit, and would require staff or volunteers to fill out a reasonably accurate set of ballots with all the randomness of the public on Election Day and feed them in over time, simulating the real election. It would cost much less to do a better job with random audits. However, testing perhaps a small or single sample district as part of the certification of equipment and software versions would be a good idea, at less expense, though it would require a very robust operation with new races and ballots for each test that cannot be distinguished from an actual Connecticut election.

Random Selection – Connecticut specifies random selection of districts and random selection of races. However, there is nothing to preclude, and every reason to assume, that the staff assigned to manual counts will most likely be from the same town and district being audited. Manual audits and recounts gain the most integrity and confidence when they are conducted by randomly assigned individuals from different districts and towns, and perhaps individuals not involved in Election Day itself.
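Transparent random selection is straightforward to make verifiable: derive the draw from a seed generated in public (dice rolls at an open meeting, for example), so any observer can reproduce it independently. A minimal sketch, with a hypothetical seed and district list:

```python
import hashlib
import random

def transparent_draw(districts, sample_size, public_seed):
    """Reproducible selection: anyone holding the public seed and the
    district list can re-run the draw and verify the result."""
    digest = hashlib.sha256(public_seed.encode("utf-8")).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    return sorted(rng.sample(sorted(districts), sample_size))

districts = ["district-%03d" % i for i in range(1, 101)]
# The seed is hypothetical; in practice it would come from public dice rolls.
seed = "2007-11-07 dice: 4 1 6 2 3 5 2 6 1 4"
print(transparent_draw(districts, 10, seed))
```

Because the seed is fixed in public before the draw, officials cannot quietly re-roll until a favorable sample appears, and observers need no trust in the software operator, only in the dice.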

Decentralized Programming and Administration – Decentralized programming is not always the answer, or better. Connecticut is relatively small and has no real county government; other states have county election management. Decentralized programming and administration may help isolate problems with statewide elections, yet may offer more opportunities for problems. However, Connecticut outsources the programming of its elections to one firm, Premier Election Solutions, Inc., formerly Diebold: our elections are programmed in secret by people over whom we have little control, who have the highest levels of knowledge and the most extensive access to the most vulnerable part of the system at the most vulnerable point in the process. There is no foolproof (or criminal-proof) way to prevent intentional or inadvertent corruption of the election cards, yet this arrangement seems to be among the least likely ways to generate confidence in the process.

Clear Error Actions – SB 1311 / PA 07-194 contains provisions for recounts based on audits, but clarity specifying exact criteria for recounts is needed. The criteria should be based on discrepancies between the total vote reported on election night and the total hand count of all ballots. Excuses for the machine not reading the voters’ intent accurately may be acceptable for voting machine certification, but they must be included in determining whether there is a potential for the election-night winner to be inaccurate. In addition to clear objective criteria, subjective criteria should be included, especially criteria based on statistical analysis of district-by-district results that points to anomalies raising doubts and suspicions. The audit process and the objective and subjective criteria should be subject to review and oversight by a body independent of the Secretary of the State and others responsible for selecting equipment, certifying software, and conducting the elections.
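The kind of objective criteria argued for above can be stated precisely. A minimal sketch (my own; the 0.5% threshold is a hypothetical number, not one drawn from PA 07-194) that flags an audited contest for escalation when the machine/hand-count discrepancy is either large or outcome-relevant:

```python
def escalation_flags(machine_totals, hand_counts, threshold_rate=0.005):
    """Return reasons to escalate an audited contest to a recount,
    based on discrepancies between machine totals and hand counts."""
    total_ballots = sum(hand_counts.values())
    discrepancy = sum(abs(machine_totals.get(c, 0) - v)
                      for c, v in hand_counts.items())
    ranked = sorted(hand_counts.values(), reverse=True)
    margin = ranked[0] - ranked[1] if len(ranked) > 1 else ranked[0]
    flags = []
    if total_ballots and discrepancy / total_ballots > threshold_rate:
        flags.append("discrepancy rate above threshold")
    if discrepancy >= margin:
        flags.append("discrepancy large enough to change the outcome")
    return flags

# Hypothetical contest: the count drift is small, but the race is close
machine = {"A": 5020, "B": 4995}
hand = {"A": 5005, "B": 5010}
print(escalation_flags(machine, hand))
```

Note that in this hypothetical example the discrepancy rate is under the threshold, yet the contest still escalates because the discrepancy exceeds the margin, which is exactly the distinction between a fixed-rate criterion and an outcome-based one.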

The later, August report is, for the most part, a detailing of some of the criteria for post-election audits. There is one contradiction between the two reports. The October report recommends that districts be chosen for audit by 9AM the day after the election and that the audit commence at that time. Fast audits leave little time for ballots to be changed to match the machine totals, to be “lost”, or to be subject to custody problems. The August report strongly recommends that the choice not be made until after all districts have reported results, so that late results cannot be adjusted once a district is known to be selected or not selected for audit. These conflicts could be reconciled by ordering full hand recounts of any districts not reporting before the random drawing commences.

