Audit Report: Flawed by lack of transparency, incomplete data, and assumed accuracy

Lack of transparency in the process provides no basis for public confidence in the process, in the audit, and ultimately in our election system. The purpose of the audit is to determine the accuracy of the optical scanners; that purpose is negated when the accuracy is assumed. A statistical calculation based on a random sample is invalid when some of the sampled data is omitted for reasons that are not random.

Update 8/12: We have received a clarification of the official report from the Deputy Secretary of the State, which modifies our opinion <read>

******************

Last week, the University of Connecticut released its official post-election audit report on the November 2010 election, just short of seven months after the election: Statistical Analysis of the Post-Election Audit Data, 2010 November Election <read>

Like previous reports, this official report fails to provide confidence in the post-election audit process and in the accuracy of the election itself.

The audit data received by the VoTeR Center contains 867 records. Among the 867 records received by the Center, 20 records (2.3%) were incomplete. This report deals with 847 records (97.7%) among which 799 records (94.3%) are from the original data and 48 records (5.7%) were revised based on the follow up conducted by the SOTS [Secretary of the State’s] office.

As demonstrated by the Coalition audit reports, there are major shortcomings in the post-election audit process: official hand counts do not match optical scanner counts, and incomplete reports are submitted. Based on the UConn report, these differences are addressed in three inadequate ways:

1) Some results are recounted by state officials outside of public view:

The VoTeR Center’s initial review of audit reports prepared by the towns revealed a number of returns with unexplained differences between hand and machine counts. The vast majority of records with high discrepancies were concentrated in the following three districts: East Haven (Deer Run School) with the highest reported discrepancy of 180, Hartford (Burns School) with the highest reported discrepancy of 170, and Preston (Town Hall) with the highest reported discrepancy of 55. Additionally, one or more discrepancies were reported in all but one district for the town of Orange; here the highest reported discrepancy was 14, however this could not be explained as no questionable ballots were reported. Following this initial review the SOTS Office performed additional information gathering and investigation and, in some cases, conducted independent hand-counting of ballots in the four districts mentioned above. The final information was conveyed to the VoTeR Center on June 17th of 2011 for the 48 records pertaining to those districts. The rest of the records (799 out of 847) discussed in this audit report are the original records reported by the towns. [Emphasis ours in all quotes]

We interpret this to mean that some of the counts with differences were recounted by state officials behind closed doors, and that based on those recounts the Secretary of the State’s Office assumed that all the differing counts in those districts were hand-counting errors. In other words, officials assumed that because election officials miscounted in one case in a district, and the machine was accurate in that case, the machines made no errors in any of the cases in those districts where differences were found.

For the last couple of years we have repeatedly, to no avail, requested that recounts of ballots for the audit be announced and open to public observation, or that at least the Coalition be notified and given the opportunity to observe. Lack of transparency in the process provides no basis for public confidence in the process, in the audit, and ultimately in our election system.

As a service to the public, the Coalition provides transparent access to the official audit count reports. You can see scanned copies of the original reports from the towns mentioned <here>

2) Other differences were not recounted but “affirmed” to be hand count errors:

187 records (22.1%) showing discrepancy of 2 to 5 votes, 42 records (4.9%) showing discrepancy of 6 to 13 votes (for this group, although no manual review of the discrepancies was conducted, the SOTS Office affirmed that the discrepancies were due to hand counting errors),

We interpret this to mean that the Secretary of the State’s Office has such faith in the accuracy of our optical scanners and the election process that they assume any differences must be caused by hand-counting errors. The purpose of the audit is to determine the accuracy of the optical scanners; that purpose is negated when the accuracy is assumed.

3) Some audit reports contained incomplete data and such data was not included in the report:

The audit data received by the VoTeR Center contains 867 records, where each record represents information about a given candidate: date, district, machine seal number, office, candidate, machine counted total, hand counted total of the votes considered unquestionable by the auditors, hand counted total of the votes considered questionable by the auditors, and the hand counted total, that is, the sum of undisputed and questionable ballots. This report contains several statistical analyses of the audit returns and recommendations. The statistical analysis in this report deals with the 847 records that are sufficiently complete to perform the analysis.

We interpret this to mean an assumption that the legally mandated audit of three races in a randomly selected 10% of districts is valid even if some of the results are not reported. A statistical calculation based on a random sample is invalid when some of the sampled data is omitted for reasons that are not random.
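
As a rough illustration of why that omission matters statistically, here is a minimal simulation in Python with entirely hypothetical numbers (the "true" scanner error rate below is invented; nothing in it comes from the UConn report). If the records that show discrepancies are exactly the ones recounted privately, affirmed to be hand-count errors, or dropped as incomplete, then the machine error rate computed from what remains is biased toward zero no matter what the true rate is.

```python
import random

# Hypothetical illustration only: the error rate below is invented
# and does not come from the UConn report. Suppose some fraction of
# records reflect genuine scanner miscounts, and those are exactly
# the records later dropped or "affirmed" to be hand-count errors.
random.seed(1)

TRUE_SCANNER_ERROR_RATE = 0.02   # invented, for illustration
N_RECORDS = 867                  # records in the audit sample

records = [random.random() < TRUE_SCANNER_ERROR_RATE for _ in range(N_RECORDS)]

# Unbiased approach: estimate from every randomly selected record.
full_estimate = sum(records) / len(records)

# Biased procedure: records showing discrepancies (here, the genuine
# scanner-error records) are explained away, so only "clean" records
# remain in the calculation.
kept = [r for r in records if not r]
biased_estimate = sum(kept) / len(kept)   # zero by construction

print(f"estimate from all records:          {full_estimate:.3f}")
print(f"estimate after non-random omission: {biased_estimate:.3f}")
```

The particular numbers are beside the point; the structure is. Once the records capable of showing machine error are removed for non-random reasons, the remaining sample can no longer estimate the machine error rate at all.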

Spelling out our concerns, as we said last year, after the November 2009 official UConn report:

  1. All counting and review of ballots should be transparent and open to public observation.  Both this year and last year we have asked that such counting be open and publicly announced in advance. [And again in 2011 to the new administration]
  2. Simply accepting the word of election officials that they counted inaccurately is hardly reliable, scientific, or likely to instill trust in the integrity of elections. How do we know how accurate the machines are without a complete audit? Any error or fraud would likely show up as a count difference, and that difference would very likely be [or could have been] dismissed.
  3. Even if, in every case, officials are correct that they did not count accurately, it cannot be assumed that the associated machines counted accurately.
  4. Simply ignoring the initial results in the analysis of the data provides a simple formula for covering up, or failing to recognize, error and fraud in the future.

As we have said before, we do not question the integrity of any individual; yet closed counting of ballots leaves an opening for fraud and error to go undetected and defeats the purpose and integrity of the audit.

We also note that in several cases officials again failed to perform the audit as required by law or submitted incomplete reports.

There are other flaws in the audit law.

  • For instance, there is no legally mandated deadline for the towns to submit audit reports to the Secretary of the State’s Office or for UConn to provide the analysis. We believe seven months after an election is a long time for the public and candidates to wait.
  • As the Coalition covered in our August 2010 Post-Election Audit Report and the November 2010 Post-Election Audit Report, the list of districts used in the random district drawing is inaccurate and challenging to verify. This also negates reliability, accuracy, and confidence in the random audit.
  • The November 2010 election most glaringly pointed out the need for the audit to select from all ballots in the election, not just those counted by optical scanners in the polling place. Among those omitted are centrally counted optically scanned ballots and all originally hand-counted ballots. The Coalition Bridgeport Recount Report demonstrated to the public that hand-counted ballots can be counted inaccurately on election night. The official election system was not able to audit or recanvass those ballots and has never officially, as far as we know, recognized even the possibility that the original hand-counted, hand-transcribed, and hand-totaled numbers may be inaccurate.

As we have pointed out, over and over:

We have no reason to question the integrity of any official. We have no evidence that our optical scanners have failed to count accurately. However, if every difference between a hand count and a scanner count is dismissed as a human counting error, then if a machine ever did, by error or fraud, count inaccurately, it would be unlikely to be recognized by the current system.

People and scanners have made, and will make, counting errors. The solution is transparent counting, performed multiple times to ensure accuracy, along with credible ballot security.

4th of July Suggestion

This weekend is a great time to [re-]read the Declaration of Independence.
The Declaration of Independence asserts our rights to determine and change our form of government – without voting integrity we lose that most fundamental of rights.

This weekend is a great time to [re-]read the Declaration of Independence. We find it very inspiring to read it sometime around the 4th of July each year.  As we have discussed before, some believe that the right to vote is more fundamental than the Constitution. Here is a link to a copy for your reading <Declaration of Independence>

The Declaration of Independence asserts our rights to determine and change our form of government – without voting integrity we lose that most fundamental of rights.

“The right to vote… is the primary right by which other rights are protected” – Thomas Paine


David Jefferson: Email Voting — A National Security Threat in Government Elections

While all Internet voting systems are vulnerable to such attacks and thus should be unacceptable to anyone, email voting is by far the worst Internet voting choice from a national security point of view since it is the easiest to attack in the largest number of different ways.

Security expert David Jefferson articulates the vulnerabilities of email voting, perhaps the most vulnerable form of Internet voting (and that is saying a lot, since all forms of Internet voting are very risky). <read>

David Jefferson is a computer scientist and researcher at Lawrence Livermore National Laboratory in California where he studies cyber security and ways to protect the nation’s military, civilian, and government networks from cyber attack.  He is also the Chairman of the Board of Verified Voting, and has been studying electronic and Internet voting for over a decade, advising five successive California Secretaries of State on voting technology issues.

Excerpts:

Neither the Internet itself, nor voters’ computers, nor the email vote collection servers are secure against any of a hundred different cyber attacks that might be launched by anyone in the world from a self-aggrandizing loner to a foreign intelligence agency. Such an attack might allow automated and undetectable modification or loss of any or all of the votes transmitted.

While all Internet voting systems are vulnerable to such attacks and thus should be unacceptable to anyone, email voting is by far the worst Internet voting choice from a national security point of view since it is the easiest to attack in the largest number of different ways.

The technical points I am about to state are not my opinions alone. The computer security research community in the U.S. is essentially unanimous in its condemnation of any currently feasible form of Internet voting, but most especially of email voting. I strongly urge legislators in states considering e-mail voting to request testimony from other independent computer network security experts who are not affiliated with or paid by any voting system vendor. Email voting is extremely dangerous in ways that people without strong technical background are not likely to anticipate.

Here are the problems with email voting:

1. Lack of privacy:

2. Vote manipulation while in transit:

3. Server penetration attacks:

4. Ballot files can carry malware into the election network:

5. Voters’ computers infected with malware:

6. Denial of service attacks:

7. Email ballots are unauditable; attacks are undetectable and irreparable:

8. Multiple simultaneous attacks:

9. These facts will not change:

10. Similar problems with FAX voting:

11. Move toward Internet distribution of blank ballots.

For these reasons I strongly urge states that do not currently provide for email voting not to start down that path. In my professional opinion this path leads only to a major risk to U.S. national security, exposing our elections to easy manipulation by anyone in the world.

Voter fraud? Or thinly disguised agenda?

Where there is smoke we expect fire. Where there are extraordinary claims we need at least reasonable evidence.

From New Mexico, officials claim voter fraud, but withhold evidence: In voter fraud case, officials err on the side of secrecy <read>

My efforts to obtain the evidence behind Secretary of State Dianna Duran’s claim that she has found instances of foreign nationals illegally voting have been shot down again, this time by the Taxation and Revenue Department

Two months ago I asserted that Secretary of State Dianna Duran failed the open government test because she put a number of hurdles – some of them illegal – in front of my efforts to obtain the “evidence” she claims to have found of foreign nationals illegally voting in elections.

Where there is smoke we expect fire. Where there are extraordinary claims we need at least reasonable evidence.

Will Internet voting cost small Canadian town $10,000 to $30,000?

No. That is the estimated cost of the “business case”. It sounds like they are asking the right questions, but may be getting in over their head in doing the “business case”.

As CTVotersCount readers know, our Secretary of the State has been charged by the Legislature to, “within available appropriations, recommend a method to allow for on-line voting by military personnel stationed out of state”. It’s quite a task to do what the Defense Department, scientists, and security experts say cannot be done with today’s technology at any cost, while taking resources from operations and other initiatives to produce the report.

Grande Prairie, Alberta, Canada is considering the same thing for its elections, but wisely is considering funding a detailed business case, including security and recountability before proceeding: <read>

Municipal Affairs Minister Hector Goudreau requested the business case in order to formalize a city request to pilot online voting.

“The business case would need to address the pertinent issues, such as the need for Internet voting in the city, who is the licensed provider, how is security guaranteed, how is voter validation dealt with, what are the costs, and how are results verified and recounts conducted,” Goudreau wrote.

The estimated costs of the business case?

Audrey Cerny, City Hall’s legislative services manager, told the committee it would take at least four to five weeks of staff time to develop a business case. But she said it is possible to develop one that is less costly than the estimated $30,000.

“It is depending on how much external consultant time is needed,” she said. “If the consultant is utilized for a fewer number of days, the costs obviously could be lower. So essentially it could be $10,000.”

In order for the province to study the concept and make a decision in time for the 2013 municipal election, a business case would have to be finished by September or October, she said. That means an outside consultant would be necessary.

“There’s no guarantee (our) internal resources may be able to fully complete this without using an external consultant,” she said.

It sounds like they are asking the right questions, but may be getting in over their head in doing the “business case”. We are a bit skeptical that it can be done well for $30,000 or $10,000. Yet perhaps, with effective research into what others have tried, a general cost estimate can be obtained and a review of the security risks developed. They should also be wary of a vendor being selected as part of the business case, or of relying on vendors to “help” with the security and recount portions of the evaluation.

Let us consider doing for Elections what we have done for Probate

The legislature should be considering doing for our elections what we have done for probate. I am not the first to suggest this; let us hope that our legislature is not the last to consider it.

CTMirror: A turnaround for the fiscally troubled probate courts <read>

We particularly note from the article:

On the heels of a major consolidation, Connecticut’s probate court system will end a year in the black for the first time in six fiscal years later this month, reducing its reliance on the General Fund and returning more than $5 million to the state’s coffers…

One of the oldest probate courts systems in the nation with roots dating back over 300 years, the Connecticut courts underwent a dramatic restructuring in January to reverse growing financial woes…

“The first point of the whole reorganization was to stem the hemorrhaging once we abandoned the idea that the courts could pay entirely for themselves,” said Rep. Robert Godfrey, D-Danbury, who spearheaded the reform effort that took two years to move through the legislature. “The fact that it happened was a miracle.”

As we said in the second comment on the CTNewsJunkie article:

The legislature should be considering doing for our elections what we have done for probate. We have 169 towns, each with at least two elected registrars of voters. Consolidation, if done appropriately, could yield decreased costs, increased professionalism, increased convenience, increased integrity, and confidence.

I am not the first to suggest this; let us hope that our legislature is not the last to consider it.

There are several ways this could be accomplished. Regionalization of dual elected registrars is one. Another, which I would favor, would be regional, professional, civil-service directors responsible for voting, perhaps with local registrars elected and paid a small stipend to watch out for the integrity interests of municipalities, voters, and parties. No matter how it is structured, it can be done well or poorly. The current system is inefficient and has proven problematic both in small towns and in large cities.

Some Question Integrity Of Union Vote

There seem to be several issues. First, does the overall process and accounting for approval correspond to predetermined bylaws and rules? Second, will the election itself be free from manipulation? And third, is it sufficiently transparent to give the losing side confidence that they actually lost?

CTNewsJunkie: Union Voting Begins, Integrity of Vote Called Into Question By Some <read>

Union leaders are optimistic the labor agreement reached with Gov. Dannel P. Malloy’s administration will be ratified despite vocal opponents questioning the integrity of the voting process…

But there is some distrust among members regarding the process and a strong opposition movement encouraging members to vote against the agreement. Several union members have also emailed CTNewsJunkie with questions regarding how the vote will be counted, how that 80 percent will be calculated and what sort of oversight the process will have…

One member said he could not locate language in the union’s bylaws requiring the 80 percent vote or specifying how it is calculated. However, section 10 of SEBAC’s bylaws state the agreement requires a “four-fifths majority of representatives in good standing and not more than one bargaining agent voting in opposition.”

[State Employee Bargaining Agent Coalition spokesman Matt] O’Connor likened the process to the presidential Electoral College. Unions vary in the size of their membership, and their votes are weighted depending on that size, he said…

But one member said that system lends itself to rigging and dropping a piece of paper into a cardboard box gives few members confidence.

“This is outrageous and is not logical, unless you’re attempting to fix the vote to pass the most difficult aspect. The correct way is to let all votes stand as they have been cast, report those for and against numbers to SEBAC and combine all the yays and nays and arrive at a true membership percentage,” he wrote…

Members have also expressed concern over how the voting process will be monitored and whether or not an impartial third party would have oversight of it. O’Connor said the process is as transparent as possible but pointed out that the specifics of how votes are cast are really up to each individual union. Much like how each state in the nation decides its own voting process, each element of the coalition sets its own rules…

Members looking for specifics about the process can find them spelled out in their union’s individual constitution, he said.

Will there be a third party overseeing the votes of each union and their subsequent bargaining units?

“We’re not inviting the Carter Center in if that’s what you’re asking,” O’Connor joked.

There seem to be several issues. First, does the overall process and accounting for approval correspond to predetermined bylaws and rules? Second, will the election itself be free from manipulation? And third, is it sufficiently transparent to give the losing side confidence that they actually lost?
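
On the first issue, the accounting, a toy calculation shows why the weighting details matter (every union and vote count below is invented; none are actual SEBAC figures). It contrasts an Electoral-College-style bloc weighting, where each union’s full weight goes to the side its own majority favors, with the raw “combine all the yays and nays” membership percentage the quoted member prefers:

```python
# Hypothetical unions: (name, members, yes_votes, no_votes).
# All numbers are invented for illustration; they are not SEBAC data.
unions = [
    ("Union A", 10_000, 5_200, 4_800),
    ("Union B",  8_000, 4_100, 3_900),
    ("Union C",  2_000,   300, 1_700),
    ("Union D",  1_000,   200,   800),
]

# Raw membership percentage: combine every yay and nay directly.
total_yes = sum(y for _, _, y, _ in unions)
total_votes = sum(y + n for _, _, y, n in unions)
print(f"raw member approval:      {total_yes / total_votes:.1%}")    # 46.7%

# Electoral-College-style weighting: a union's entire membership
# weight goes to whichever side its internal majority favors.
weighted_yes = sum(m for _, m, y, n in unions if y > n)
total_weight = sum(m for _, m, _, _ in unions)
print(f"weighted (bloc) approval: {weighted_yes / total_weight:.1%}")  # 85.7%
```

Under these invented numbers the bloc-weighted tally reports 85.7 percent approval even though fewer than half of the individual members who voted said yes. Whether SEBAC’s actual weighting behaves this way is unclear from the article, which says the specifics are left to each union.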

If the sketchy details reported are accurate, then we suspect that the voting process may not be transparent enough to generate such confidence. Voting with slips of paper in a box is fine if the vote takes place all at once and the ballots are publicly counted immediately thereafter. There seem to be two requirements the system described may not meet: the ballots need to be counted immediately and publicly after the box is opened, and the box must be on public display and actually observed by opposing interests the whole time.

Connecticut’s alternative for our elections is ballots publicly inserted into optical scanners, with a result tape immediately produced and displayed publicly, followed by a post-election audit. This could work for the unions, except that multi-day voting raises the issue of securing the ballots and scanners between voting sessions. (Unfortunately, in the case of Connecticut elections, ballot security and chain of custody between election day and the audit is insufficient to provide confidence.)

How Anonymous Are Paper Ballots?

A new research report brings into question the degree of anonymity in paper ballots. The finding raises potential concerns for states and election jurisdictions considering the merits of either making ballots available for public review or releasing them under freedom of information requests. We find reasons for concern with ballot anonymity and reasons for skepticism that the result will hold under additional research.

A new research report brings into question the degree of anonymity in paper ballots:  New Research Result: Bubble Forms Not So Anonymous <read overview> <report>

From the overview:

Today, Joe Calandrino, Ed Felten and I are releasing a new result regarding the anonymity of fill-in-the-bubble forms. These forms, popular for their use with standardized tests, require respondents to select answer choices by filling in a corresponding bubble. Contradicting a widespread implicit assumption, we show that individuals create distinctive marks on these forms, allowing use of the marks as a biometric. Using a sample of 92 surveys, we show that an individual’s markings enable unique re-identification within the sample set more than half of the time. The potential impact of this work is as diverse as use of the forms themselves, ranging from cheating detection on standardized tests to identifying the individuals behind “anonymous” surveys or election ballots.

The data is based on a sample of 92 surveys filled out at the same time, on the same form, using the same writing instrument:

To test the limits of our analysis approach, we obtained a set of 92 surveys and extracted 20 bubbles from each of those surveys. We set aside 8 bubbles per survey to test our identification accuracy and trained our model on the remaining 12 bubbles per survey…

Additional testing—particularly using forms completed at different times—is necessary to assess the real-world impact of this work. Nevertheless, the strength of these preliminary results suggests both positive and negative implications depending on the application. For standardized tests, the potential impact is largely positive. Imagine that a student takes a standardized test, performs poorly, and pays someone to repeat the test on his behalf. Comparing the bubble marks on both answer sheets could provide evidence of such cheating. A similar approach could detect third-party modification of certain answers on a single test.

The possible impact on elections using optical scan ballots is more mixed. One positive use is to detect ballot box stuffing—our methods could help identify whether someone replaced a subset of the legitimate ballots with a set of fraudulent ballots completed by herself. On the other hand, our approach could help an adversary with access to the physical ballots or scans of them to undermine ballot secrecy. Suppose an unscrupulous employer uses a bubble form employment application. That employer could test the markings against ballots from an employee’s jurisdiction to locate the employee’s ballot. This threat is more realistic in jurisdictions that release scans of ballots.
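
To make the shape of the experiment concrete, here is a minimal sketch of a nearest-centroid re-identification pipeline in the spirit of the study. This is a sketch only: the paper extracts richer image features and uses a more sophisticated classifier, the features below are simulated rather than taken from scanned forms, and the invented noise level is not calibrated to reproduce the reported ~51% rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the paper's setup: 92 respondents, 12 training bubbles
# and 8 test bubbles each. Each bubble is a feature vector; here we
# simulate a per-person "marking style" plus per-bubble noise instead
# of extracting features from scanned forms.
N_PEOPLE, N_TRAIN, N_TEST, N_FEATURES = 92, 12, 8, 16
NOISE = 3.0                                   # invented bubble-to-bubble variation

style = rng.normal(size=(N_PEOPLE, N_FEATURES))   # each person's habit

def bubbles(n):
    """Simulate n bubble-feature vectors per person around their style."""
    return style[:, None, :] + NOISE * rng.normal(size=(N_PEOPLE, n, N_FEATURES))

train, test = bubbles(N_TRAIN), bubbles(N_TEST)

# Nearest-centroid matching: average each person's training bubbles,
# then assign each set of test bubbles to the closest training centroid.
centroids = train.mean(axis=1)                    # (people, features)
queries = test.mean(axis=1)                       # (people, features)
dists = np.linalg.norm(queries[:, None, :] - centroids[None, :, :], axis=2)
predicted = dists.argmin(axis=1)

accuracy = (predicted == np.arange(N_PEOPLE)).mean()
print(f"top-match re-identification rate: {accuracy:.0%}")
```

The printed rate reflects the invented noise level, not the paper’s measurement; the point is only how little data per person such a matching procedure needs.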

The finding raises potential concerns for states and election jurisdictions considering the merits of either making ballots available for public review or releasing them under freedom of information requests. We find reasons for concern with ballot anonymity and reasons for skepticism that the result will hold under additional research. Before drawing serious conclusions, it is critical to do longitudinal studies, as recommended in the report, and to examine several other challenging dimensions. Considerations and directions include:

  • On a small sample, a 51% chance of correctly identifying the most likely individual may not be all that useful without knowing which identifications are the correct ones.
  • How does the probability of correctly identifying the corresponding voter vary with the number of voters: 100, 200, 400, 800, etc.? (A toy simulation after this list suggests the expected direction.)
  • Are there classes of voters that clump together and are hard to distinguish, and others that are fairly unique? Is this similar to blood-type classification, with more types but much less distinct classes? Or is it similar to DNA, with many variations, but again nowhere near as distinct?
  • From looking at a lot of ballots in audits and recanvasses it is clear to me that people do make consistent marks in bubbles on a single ballot, with a single instrument, on a single day, however:
    • Do voters make the same marks over time and in different contexts?
    • To what extent do single voters or collective groups of voters fill in bubbles the same way from election to election? I suspect it varies from person to person as well. For myself, I suspect I am very inconsistent from election to election, except that I do tend to fill in complete bubbles – which would place me in a large class of voters difficult to distinguish individually.
    • Filling out an SAT or survey can be quite different than voting. In an SAT we think more and in different ways, under much more stress. In a survey we may hardly think or care at all.
  • In Connecticut we use felt tip pens in polling places. To what extent does such a thicker instrument make the classification more or less accurate? I would suspect the thicker the instrument the more difficult the classification in general.
  • In longitudinal studies (using forms filled out on different occasions, days, weeks, months, or years apart): How much more difficult is identification when the instrument varies? E.g., felt-tip pens can be drier or wetter and vary in thickness with use; pencils can vary by sharpness and by manufacturer. Pen and pencil marks may also vary with how the instrument can be gripped, or is gripped, on a particular occasion.
  • How well do examples transfer from one type of test or ballot to another? I suspect difficulties based on bubble size and shape, rectangles, or connecting lines – even the shape of the ballot or test form, layout, lighting, sitting vs. standing, etc.
  • For example, let us say an employer, union, government entity, criminal enterprise, or church wanted to use this method to test votes of individual employees/members, without their knowledge. What accuracy/confidence could they expect with samples from presumably a small subset of voters in a precinct when attempting to identify their ballots in a sea of ballots filled out by other voters?
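
On the pool-size question raised above, a toy extension of the same simulated-feature model (the feature count and noise level are invented; only the qualitative trend is the point) suggests how top-match accuracy could fall as the pool of voters grows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same invented model as the sketch above: a per-person style vector
# plus per-mark noise, with nearest-centroid matching. Feature count
# and noise level are arbitrary choices for illustration.
N_FEATURES, N_TRAIN, N_TEST, NOISE = 4, 12, 8, 2.5

for n_people in (100, 200, 400, 800):
    style = rng.normal(size=(n_people, N_FEATURES))
    train = style[:, None, :] + NOISE * rng.normal(size=(n_people, N_TRAIN, N_FEATURES))
    test = style[:, None, :] + NOISE * rng.normal(size=(n_people, N_TEST, N_FEATURES))
    centroids, queries = train.mean(axis=1), test.mean(axis=1)
    dists = np.linalg.norm(queries[:, None, :] - centroids[None, :, :], axis=2)
    accuracy = (dists.argmin(axis=1) == np.arange(n_people)).mean()
    print(f"pool of {n_people:4d} voters: {accuracy:.0%} correctly matched")
```

With more voters there are simply more near-neighbors to confuse, so the top match is right less often; how fast accuracy falls on real ballots is exactly the kind of question that needs study.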

More research is necessary before we can conclude the degree to which bubble analysis can be used to identify voters. Even so, there would be trade-offs between the positive value and the risks of making ballots publicly available for review. There are mechanisms of election transparency short of public disclosure of complete paper ballots – methods which could reduce these risks, but at some cost to credibility and transparency. Of course, we could eliminate paper ballots altogether and take the greater risks of errors, skulduggery, and lack of confidence in electronic voting, as we have seen recently in New Jersey, last year in Kentucky, and several years ago in Sarasota.

Flip-Flopping has its place, but not in voting

Reading and listening to the media we are led to believe that flip-flopping is the worst possible political sin. Wrong. Much of the time we spend writing, voice-mailing, or speaking with legislators is working to convince them to understand a more complete picture; to change their positions on issues.

Brad Friedman covers the case of a voting machine flip-flopping in Las Vegas, and the history of flip-flopping and completely missing votes: Las Vegas Mayoral Candidate Sees Own Vote Flipped to Opponent on Touch-screen Voting Machine <read>

It took two tries, but Carolyn Goodman, candidate for Mayor of Las Vegas, and wife of current Mayor Oscar Goodman, was finally able to vote for herself today on Nevada’s illegally-certified, 100% unverifiable Sequoia AVC Edge touch-screen voting machines. At least she thinks she did. Whether her vote will actually be counted for her is something that nobody can ever know…

The failure that Goodman had, and noticed, is just the latest in a string of celebs and candidates who have had similar problems with 100% unverifiable voting machines — as still used by some 20 to 30% of voters in the U.S. — either flipping their vote, or not allowing them to vote at all…

[Randy] Wooten was running for mayor in the rural Poinsett County town with a population of just 80 people that year, when he learned, after the close of polls on Election Night, that he had received a grand total of ZERO votes, as reported by the county’s ES&S iVotronic touch-screen voting systems.

As AP noted at the time Wooten said, “I had at least eight or nine people who said they voted for me, so something is wrong with this picture.” Among those people who Wooten believed had voted for him: himself and his wife.

In Praise Of Flip-Flopping

Reading and listening to the media we are led to believe that flip-flopping is the worst possible political sin. Wrong. Much of the time we spend writing, voice-mailing, or speaking with legislators is working to convince them to understand a more complete picture; to change their positions on issues.

In 2005, Secretary of the State Susan Bysiewicz publicly tested and was about to choose unverifiable touch-screen (DRE) voting machines similar to those in Las Vegas – then she looked at the evidence, considered additional information from vocal advocates, and flip-flopped. That is why we have voter-verifiable paper ballots and optical scanners in Connecticut, with a side benefit of saving about half the cost of DREs.

Senate passes risky, expensive online voting bill – Now on consent calendar

Despite opposition by the Secretary of the State and promises to the contrary, the Senate passed S.B.939 with online voting, placing it on the Senate consent calendar.

Despite opposition by the Secretary of the State and promises to the contrary, the Senate passed S.B.939 with online voting, placing it on the Senate consent calendar.  Now Section 59 rather than Section 60:

Sec. 59. (Effective from passage) The Secretary of the State shall, within available appropriations, recommend a method to allow for on-line voting by military personnel stationed out of state. The Secretary shall look at what other states have done to reduce any potential for fraud in on-line voting and determine whether any such state’s on-line voting system could be appropriate for adapted use by this state. Not later than January 1, 2012, the secretary shall, in accordance with the provisions of section 11-4a of the general statutes, report any progress made toward recommending such a method to the joint standing committee of the General Assembly having cognizance of matters relating to elections.

For more information see < post on bill status> and <Op-Ed on online voting>.

Update: It has been pointed out to me that the word “recommend” in the amended bill replaces “establish” in the previous version.

Update: It is the law now. Passed by the House with debate, but none of it about the online voting provision. Only Representative Tim O’Brien voted against.