Wednesday, May 21, 2008

Parliamentary Elections in Georgia | ODIHR Observation

With today's elections in Georgia, various themes come to mind. Certainly, elections have come a long way: by now, the Georgian government employs a series of highly qualified consultants, including Greenberg Quinlan Rosner of Clinton fame and a Brussels-based PR firm, and works with experienced teams from the Baltics. This, then, is no longer the game of the 1990s, or of 2003. Election observers know that they in turn will be observed, and maybe that's how it should be.

Note, also, the use of the Internet: the United National Movement today is employing 150 minibuses to ferry voters around, and they decided to put the number plates of these buses online. That doesn't make the move any more popular with the opposition, but it's no longer the early-morning hush-hush affair of the past.

We're also currently working on a short paper arguing that OSCE's classical method of election observation needs to be overhauled. ODIHR, as OSCE's election observation arm is called, has an approach with the feel of an undergraduate research project, and there is fairly little systematic thinking on how to do an observation well. While observers are briefed (often in tedious detail), there is no applied training on the minutiae of election observation. While there are legal, media, gender, and minority analysts, a CEC liaison, and security people, there is no training officer.

In a good mission, the Long Term Observers will compensate for these institutional shortcomings. With bad LTOs (and having been on a fair number of missions myself, it's noticeable how some dunderheads get recycled from mission to mission) it can be a farcical exercise.

Ultimately, research methods really matter: ODIHR makes assertions about empirically verifiable facts, and this is precisely where social science methodology has come a long way.

Take this example from OSCE's Final Report on the Georgian Presidential Elections: the count was assessed as bad or very bad in 23% of the polling stations observed.
Now, given that there were so many election observers out there (495 observers, which means almost 250 teams), a casual reader may assume that this figure is broadly representative: the count will have been bad in roughly 23% of stations throughout the country, right? Even if you do not draw this conclusion yourself, test it out on friends or colleagues; this is the assumption many will walk away with.
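To see why that reading is so tempting, here is a back-of-envelope sketch in Python: IF the roughly 250 observed counts had been a simple random sample of the country's precincts, the 23% figure would come with a fairly tight margin of error. The numbers are taken from this post, not from the report itself, and the normal-approximation interval is just the standard textbook shortcut.

```python
import math

# Hypothetical back-of-envelope calculation -- numbers from the post, not the report.
n = 250   # roughly the number of two-person observer teams
p = 0.23  # share of observed counts assessed as bad or very bad

# 95% confidence interval, normal approximation for a sample proportion.
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"If a random sample: 23% +/- {margin:.1%}")  # about +/- 5 percentage points
```

A margin of about five percentage points would make the national estimate look quite precise; the problem, as the next paragraphs argue, is that the sample was nothing like random, so this calculation does not apply.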

Now, as it turns out, that assumption may very well be mistaken. A team of election observers typically visits up to 10 polling stations on observation day. They are normally instructed to pick, for the count, a polling station in which they think "things will be bad" (a politicized or incompetent chairperson, irregularities such as irreconcilable numbers during the day). As a result, there will be tremendous selection bias.

In other words, 23% of observer teams, untrained but looking hard, managed to find precincts in which counts were bad or very bad. Unfortunately, that number says little about the percentage of precincts in the country in which the count really was bad. It could be half that number, or even less (or more, given the lack of observer training!). An easy mistake to make, and just one example of what would need to be fixed in the reporting.
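The selection-bias argument can be made concrete with a small simulation. All parameters here are invented for illustration (a true bad-count rate of 10%, 3,000 precincts, 250 teams, and a weighting factor for how strongly teams gravitate toward suspicious precincts); the point is only that instructed, non-random station picking inflates the observed rate well above the true one.

```python
import random

random.seed(42)

# Hypothetical parameters -- illustrative only, not from the OSCE report.
TRUE_BAD_RATE = 0.10  # assume 10% of precincts genuinely have bad counts
N_PRECINCTS = 3000
N_TEAMS = 250         # roughly the number of teams mentioned in the post
BIAS = 8.0            # how much likelier a team is to pick a "suspicious" precinct

# Mark each precinct as bad (True) or fine (False).
precincts = [random.random() < TRUE_BAD_RATE for _ in range(N_PRECINCTS)]

# Each team observes one count, weighted toward precincts where they
# suspect "things will be bad" -- exactly the instruction described above.
weights = [BIAS if bad else 1.0 for bad in precincts]
sampled = random.choices(precincts, weights=weights, k=N_TEAMS)

true_rate = sum(precincts) / N_PRECINCTS
observed_rate = sum(sampled) / N_TEAMS

print(f"true share of bad precincts:  {true_rate:.1%}")
print(f"share of bad counts observed: {observed_rate:.1%}")
```

With these made-up numbers, the observed rate comes out several times higher than the true rate: an observer report of "bad counts in X% of stations observed" tells you almost nothing about the national picture unless the stations were sampled at random.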

Time for ODIHR to undergo a rigorous external evaluation.


Eistein G. said...

Hello, Hans.
For me the exit polls are of particular interest. Ballot rigging and other funny stuff also happen in Norway and the USA... and... and... But exit polls, given the right methodology, tend to be relatively close to the final result. Maybe you could comment on how this was conducted in this case?

HansG said...


Will comment on exit polls -- although we are still trying to get the full story there. They can work, but they are also susceptible to manipulation (essentially, you need to bribe just a handful of people: those working with the dataset).

Anyway, more soon -- and, belatedly, Happy Constitution Day.