Showing posts with label Political efficacy.

Monday, May 13, 2019

Pessimism about Georgia’s direction hides room for optimism

[Note: This article was co-published with OC-Media. The article was written by Koba Turmanidze, the Director of CRRC-Georgia. The views expressed in this article represent the author’s alone and do not necessarily represent the views of the National Democratic Institute or any other entity.]


While a large number of Georgians think the country is going in the wrong direction, the fact that they are judging the country’s performance based on issues rather than political partisanship alone is a good sign.

A quick look at where people think the country is headed is not encouraging: NDI/CRRC survey data show that at least one in three adults has believed the country was going in the wrong direction throughout the past five years, and for most of that time, more people reported the country was going in the wrong direction than the right one.

While a first look suggests a less than rosy picture, the data do hide some positive news. People, at least in part, appear to judge direction based on policy performance rather than only whether their preferred party is in power, something that has inhibited the development of a stable political and party system in Georgia.

Demographic factors that might influence assessments of the country’s direction, including age, gender, education, employment, household economic status, and household size, explain relatively little about attitudes towards the direction of the country. Of these variables, only tertiary education is associated with a more negative attitude towards the country’s direction.

Yet, a statistical analysis that includes people’s assessments of specific policies and party preferences shows a strong link with how people perceive the direction of the country. People who negatively assess a specific issue are two to three times more likely to assess the country’s general direction negatively. Of 16 issues asked about on the survey, the only exception was inflation, where a negative assessment influences perceptions of the country’s general direction relatively little, all else equal.


Certainly, some issues matter more to people than others, with jobs at the top of the list in Georgia. In contrast, freedom of speech was close to the bottom at the time of the survey, with only 2% naming it as an issue of national importance in the same survey wave.

Yet, no matter the relative importance of the issue, the relationship described above still holds. The chart below illustrates the point. A person with a negative assessment of the country’s direction is 25 percentage points more likely to say that the situation regarding jobs is going in the wrong direction. Likewise, people who say that the situation regarding freedom of speech is going in the wrong direction are 32 percentage points more likely to assess the country’s direction negatively.


Attitudes towards political parties are also associated with assessments of the country’s general direction. On the survey, people were asked whether there was a party they would never vote for, a question used to measure negative partisanship.

As one would expect, a negative attitude towards Georgian Dream, the ruling party, is positively associated with a negative assessment of Georgia’s general direction, while a negative attitude towards the United National Movement is associated with more positive assessments. This holds for both the direction of the country as well as individual policies in most cases.


While people’s partisanship matters, so do their assessments of particular issues. Both predict whether someone thinks the country is headed in the right or wrong direction, controlling for the other factors.

The chart showing assessments regarding each of the 16 issues by negative attitudes towards the two largest parties illustrates the point. Whether people dislike Georgian Dream or the United National Movement, a negative assessment of a specific issue is associated with a negative assessment of the country’s direction. The same observation holds for people who do not hold a negative predisposition towards any political party.

This matters. Citizens are not looking at specific and general issues through narrow partisan lenses alone. Instead, the data suggest assessments are at least partly independent from party labels, which provides parties with the opportunity to campaign on issues instead of merely blaming each other for their failures and attempting to cultivate followings around charismatic leaders.   

Note: The above analysis is based on a series of logistic regressions in which the dependent variable is a negative assessment of Georgia’s general direction, and the key independent variables are a negative assessment of each of 16 specific policy issues as well as attitudes towards political parties. In addition, all models include demographic control variables: gender, age, settlement type, education, employment status, household size, and household economic status. Replication code for the full analysis is available here
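The replication code linked above contains the actual models. As a rough illustration of the general setup, the following Python sketch fits a logistic regression of this kind to synthetic data; all variable names and values here are invented for illustration and do not come from the survey.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Synthetic, invented data: a binary indicator standing in for "negatively
# assesses the jobs situation" plus one standardized demographic control.
neg_issue = rng.binomial(1, 0.4, n)
age_std = rng.normal(0, 1, n)

# True data-generating model: a negative issue assessment raises the
# log-odds of a negative direction assessment by 1.0 (odds ratio ~2.7).
logit = -0.5 + 1.0 * neg_issue + 0.1 * age_std
neg_direction = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a logistic regression by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), neg_issue, age_std])
beta = np.zeros(3)
for _ in range(5000):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (neg_direction - p_hat) / n

odds_ratio = np.exp(beta[1])
print(f"Estimated odds ratio for a negative issue assessment: {odds_ratio:.2f}")
```

The recovered odds ratio lands near the "two to three times more likely" magnitude described in the text, which is what an exponentiated logistic coefficient expresses.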

Monday, January 14, 2019

Institutions need to replace personality

[This article was first published on OC-Media.]

A fair amount of scholarship indicates that (dis)trust in political institutions provides an indication of how well those institutions work. Hence, trust in political institutions is an important indicator of the functioning of a democratic government.

Following this line of logic, one would expect that trust in institutions reflects the public’s trust in who runs them. Caucasus Barometer (CB) data from 2011 to 2017 support this argument.

Overall, the data indicate that trust in political institutions has declined since 2011. None of the political institutions asked about on CB (the president, local government, executive government, parliament, and political parties) received as high a level of trust on the 2017 Caucasus Barometer as on the 2011 or 2012 waves of the survey.

While trust has declined overall, the relative levels of trust have largely been in sync with the changes of power in the country.

After Georgian Dream came to power in 2012, there was an increase in trust towards the executive government (from 39% to 48%) and parliament (from 37% to 44%), the two institutions that changed political leadership.

Trust in the president continued to decline from 58% in 2011 to 28% in 2012 and 23% in 2013. All of these surveys were done while Mikheil Saakashvili was still president.

Trust in the president grew in 2015, the first wave of CB after the 2013 presidential elections, which ended Mikheil Saakashvili’s presidency and brought Giorgi Margvelashvili to office.

Public trust in local government did not follow the same logic as executive government, parliament, and the presidency. Even though Georgian Dream won the 2014 local elections, trust in local government did not change between 2013 and 2015.

This could be due to relatively weak public expectations of local government. Indeed, in 2013, only 4% of the public reported on a CRRC/TI survey that they had attended a local government meeting in the last year. Beyond low expectations, many local government officials had defected from the United National Movement (UNM) to Georgian Dream in the years since the change of power. Hence, it is not clear that the elections truly marked a change of power.

At the same time that trust in local government stagnated, trust towards the executive authorities and parliament declined as the popular glow surrounding Georgian Dream wore away. Trust towards the executive fell from 48% in 2012 to 26% in 2017, while trust in parliament fell from 44% to 22% over the same period. Meanwhile, trust in President Margvelashvili continued to grow, which might be attributable to his de facto opposition to the ruling party without defecting to the UNM.

Trust in political parties has remained low, declining modestly between 2011 and 2015. Unlike trust in institutions controlled by specific parties, however, it does not appear to follow the electoral cycle.

Growing public mistrust toward political institutions in Georgia is a sign of weak state institutions in the country. Renewed optimism and trust in institutions appear to follow changes in political leadership, but without strong institution-building processes, optimism turns into disappointment.

While Georgian democracy has made consistent progress over the last three decades, transitioning from personality-driven to policy-driven politics remains a challenge for Georgia’s democratic consolidation.

This article was written by Kristina Vacharadze, Programs Director at CRRC-Georgia. The opinions expressed in the article do not represent the views of CRRC-Georgia or any related entity.

To explore the data further, visit our Online Data Analysis tool.


Monday, May 01, 2017

Rising expectations: People report more positive expectations about MPs immediately after elections

Research suggests that voters not only become more knowledgeable about political issues, but also more politically engaged during electoral campaigns. CRRC/NDI survey data also suggest that during the periods immediately following elections, there are more positive expectations about elected officials. These, however, do not last long after elections.

A citizen’s knowledge of which Member of Parliament (MP) represents him or her ebbs and flows with election cycles in Georgia. In March 2016, three and a half years after the October 2012 parliamentary elections, only 31% of the population of Georgia could correctly identify their majoritarian member of parliament. A month after the October 2016 parliamentary elections, in November 2016, this share nearly doubled (57%). The findings before and after the 2012 parliamentary elections are similar. In the period between elections, knowledge of which MP represented a constituent declined.


People’s expectations of their MPs also oscillate with the electoral cycle, with higher expectations immediately after elections. After both the October 2012 and October 2016 elections, the share reporting that MPs will serve the people’s interests nearly doubled compared to the early spring of the election year. Expectations that MPs will serve only their own interests tend to decrease immediately after elections, then gradually increase. Expectations that MPs will do what their political party tells them to also decrease immediately after elections, although the gaps are smaller.


Note: In the survey waves from February 2012 through April 2014, the question was asked about majoritarian members of parliament. Since April 2015, the question was asked about members of parliament in general.

Positive expectations also increased after the October 2016 elections in respect to whether the newly elected MPs will take into account the opinions of regular people – “people like you,” as it was worded in the questionnaire. On the November 2016 survey, 63% either completely or somewhat agreed with this opinion, while the respective share was only 28% in March 2016, when the same question was asked about the MPs that were in office at the time.

This evidence suggests that people in Georgia become more optimistic about members of parliament in the months immediately following elections, believing that elected politicians will serve the people’s interests. As time passes, however, they become disillusioned and their expectations grow more skeptical.

To explore the CRRC/NDI survey findings, visit CRRC’s Online Data Analysis portal.

Thursday, August 25, 2016

Making Votes Count: Statistical Anomalies in Election Statistics

[Note: In order to help monitor the fidelity of the October 2016 parliamentary election results, CRRC-Georgia will carry out quantitative analysis of election-related statistics using methods from the field of election forensics under the auspices of the Detecting Election Fraud through Data Analysis (DEFDA) project. The project is funded by the Embassy of the United States of America in Georgia; however, none of the views expressed in the following blog post represent the views of the US Embassy in Georgia or any related US Government entity.]

On Friday, August 19th, CRRC-Georgia presented and published a pre-analysis report for the Detecting Election Fraud through Data Analysis project, which contained analysis of the new electoral boundaries set up following the 2015 constitutional court ruling that the previous boundaries were unconstitutional.

The report also demonstrated how the methods of statistical analysis that CRRC-Georgia will use to monitor the 2016 elections work in practice. To do so, we used precinct level data from the 2012 party list elections. Specifically, CRRC-Georgia carried out two types of statistical analyses:

  • Logical checks of official election returns, which test whether there were data entry errors when the vote was being recorded and collated; 
  • Tests for statistical anomalies in the official electoral returns, which may suggest electoral malfeasance. 

While Monday’s blog shows the logical checks that CRRC will apply to the final CEC vote records, today we discuss the tests used to identify statistical anomalies in vote counts.

Election Forensics: Detecting statistical anomalies in voting data

Direct observation of polling stations is the best method available to ensure the accuracy of the vote; however, election observers cannot be everywhere all the time. Given this, the field of election forensics, a subfield of political science, has developed statistical tests that look for anomalies in election returns which may suggest suspicious election-related activity. Although more complicated statistics exist, we focus on several simpler tests: tests based on the distribution of the second digit of the number of votes cast, the final digit of the number of votes cast, and the distribution of turnout within an electoral district.

Second digit tests are based on Benford’s law, which gives the expected probability that the first digit of a multi-digit number is any digit from one through nine. Although one might expect each digit to be equally likely, in fact 1 is more likely than 2, 2 more likely than 3, and so on. Accountants use Benford’s law to test documents for anomalies that may suggest irregularities. The law also extends to the second digit of a number, which researchers have found to be more suitable for testing election results. Applying the same logic to elections, we test whether the skew, kurtosis, and average of the second digit, as well as its overall distribution, follow the expected distribution. Non-conformity to Benford’s law may suggest electoral malfeasance.
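For readers unfamiliar with the second-digit distribution, the expected probabilities can be computed directly from Benford’s law. The following Python sketch (not part of the project’s own code) derives them:

```python
import math

# Expected second-digit probabilities under Benford's law:
# P(d2 = d) = sum over first digits d1 = 1..9 of log10(1 + 1/(10*d1 + d))
second_digit_probs = {
    d: sum(math.log10(1 + 1 / (10 * d1 + d)) for d1 in range(1, 10))
    for d in range(10)
}

# Unlike a uniform distribution (mean 4.5), the Benford second digit is
# mildly skewed toward smaller digits, with an expected mean of about 4.19.
expected_mean = sum(d * p for d, p in second_digit_probs.items())
print(f"P(second digit = 0) = {second_digit_probs[0]:.4f}")
print(f"Expected mean of the second digit: {expected_mean:.3f}")
```

The tests described above compare the observed second digits of precinct vote counts against these expected values.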

Besides second digit tests, a number of tests have been proposed for the last digit in vote counts. Here, the expected distribution is more intuitive: each digit, zero through nine, should make up approximately 10% of the distribution. Based on this, we test the mean of the last digit as well as the frequency of zeros and fives among the final digits of vote counts.

To test whether the digit tests described above indicate potential issues, or whether the difference between observed and expected values is chance variation, we use a statistical method called bootstrapping. Bootstrapping lets us estimate 99% confidence intervals, which provide a range within which a result could have fallen by chance. If this range does not include the expected value for a given test statistic, we conclude with 99% confidence that the difference is not due to chance alone.
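As a rough sketch of the procedure, with simulated vote counts rather than real returns, a bootstrap 99% confidence interval for the mean last digit might look like this in Python:

```python
import random

random.seed(1)

# Simulated vote counts for 500 precincts (invented data, for illustration).
votes = [random.randint(100, 2000) for _ in range(500)]
last_digits = [v % 10 for v in votes]

def mean(xs):
    return sum(xs) / len(xs)

# Bootstrap: resample the observed last digits with replacement, and take
# the 0.5th and 99.5th percentiles of the resampled means as a 99% CI.
boot_means = sorted(
    mean(random.choices(last_digits, k=len(last_digits)))
    for _ in range(2000)
)
ci_low, ci_high = boot_means[int(0.005 * 2000)], boot_means[int(0.995 * 2000)]

# Under a fair, uniform last digit the expected mean is 4.5; a CI that
# excludes 4.5 would flag the counts as anomalous at the 1% level.
print(f"99% CI for the mean last digit: [{ci_low:.2f}, {ci_high:.2f}]")
```

The same resampling logic applies to the second-digit statistics; only the expected value being compared against changes.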

Finally, voter turnout is expected to have a relatively normal distribution with a single mode. Based on this expectation, we test whether voter turnout in each electoral district has a single mode or multiple modes using what statisticians refer to as a dip test.

Before reporting the test results, it is worth noting several important caveats when interpreting these tests:

  • Test results are probabilistic: they indicate that a distribution is highly unlikely to occur in the absence of issues (in the present case, it would occur 1% of the time), not that it is impossible. For the tests, we calculated 99% confidence intervals. With 99% confidence intervals and 444 tests conducted on the 2012 proportional election results, we would expect between four and five tests to be set off by chance alone, even in the absence of issues. 
  • A test being set off does not necessarily mean a problem occurred, but it does suggest the need for further examination.

In total, 11 districts show statistical anomalies in the test results, and a total of 15 tests report suspicious results. Results are presented in Table 5. In the rows with district names and numbers, the actual test values are reported. In the row below the district name, 99% confidence intervals are reported. Red cells in the table indicate the presence of a statistical anomaly.



Rustavi’s electoral returns set off three statistical tests. Given that we have no reason to expect voting patterns in Rustavi to differ from those in areas of the country that did not set off tests, this suggests that there may have been electoral malfeasance in Rustavi in 2012. Reviews of election monitoring reports, however, did not suggest malfeasance; the tests may be picking up undetected malfeasance from 2012 in Rustavi. Although unlikely, these three tests could also have been set off by chance.

Two tests were also set off in Kobuleti, where we would not expect a particularly distinctive voting pattern. Hence, there is relatively strong reason to believe that electoral malfeasance may have occurred in Kobuleti in the 2012 elections. This contention is supported by election monitoring reports, which noted issues in Kobuleti.

In Bolnisi, two tests were set off. Complaints were filed in Bolnisi on election day, and the test may have been set off by these issues. However, given Bolnisi’s relatively high ethnic minority population and distinctive voting pattern, the tests could have been set off by this rather than malfeasance.

Eight other districts had a single positive test for electoral malfeasance: Vake, Saburtalo, Kareli, Akhaltsikhe, Adigeni, Vani, Senaki, and Martvili. A review of the OSCE and GYLA election monitoring reports suggests that issues may have occurred in at least half of these districts. Although these positive tests could have occurred by chance alone, the four districts in which a test was set off but observers did not report malfeasance may also point to unreported problems in the 2012 elections.

This blog post has described the methods CRRC-Georgia will use to detect statistical anomalies in election returns. For more on the methods CRRC-Georgia will use to monitor the elections, see our pre-analysis report, here, and take a look at Monday’s blog post on logical inconsistencies in election records.

Monday, August 22, 2016

Making Votes Count: Logical Inconsistencies in Voting Records

In order to help monitor the fidelity of the October 2016 parliamentary election results, CRRC-Georgia will carry out quantitative analysis of election-related statistics using methods from the field of election forensics under the auspices of the Detecting Election Fraud through Data Analysis (DEFDA) project. The project is funded by the Embassy of the United States of America in Georgia; however, none of the views expressed in the following blog posts represent the views of the US Embassy in Georgia or any related US Government entity.

On Friday, August 19th, CRRC-Georgia presented and published a pre-analysis report for the project, which contained analysis of the new electoral boundaries set up following the 2015 constitutional court ruling that the previous boundaries were unconstitutional. The report also demonstrated how the methods of statistical analysis that CRRC-Georgia will use to monitor the 2016 elections work in practice. To do so, we used precinct level data from the 2012 party list elections. Specifically, CRRC-Georgia carried out two types of statistical analyses:

  • Logical checks of official election returns, which test whether there were data entry errors when the vote was being recorded and collated; 
  • Tests for statistical anomalies in the official electoral returns, which may suggest electoral malfeasance. 
While today’s blog shows the logical checks that CRRC will apply to the final CEC vote records, on Thursday we will discuss the tests used to identify statistical anomalies in vote counts.

Logical inconsistencies in voting records
For the 2016 elections we will carry out two types of checks of the logical consistency of votes. Specifically, we will check:
  • Whether there are more or fewer votes and invalid ballots than signatures recorded on voter rolls;
  • Whether turnout increases over the course of the day.
Voter signatures - Votes recorded - invalid ballots ≠ 0
Taken together, the number of signatures recorded for ballots minus the number of votes recorded minus the number of invalid ballots should equal zero. However, in the 2012 parliamentary proportional list elections, this was not the case in approximately 25% of precincts. Of the 3,680 precincts with ten votes or more:
  • 936 precincts had more or fewer signatures than votes and invalid ballots combined (25% of all precincts); 
  • Of these, 918 had more signatures registered than votes recorded for a party and ballots registered as invalid combined; 
  • 18 precincts had fewer signatures than votes registered for a party and invalid ballots combined.
These phenomena likely have numerous causes. While some are problematic, others are benign.

Starting with the 918 cases in which fewer votes and invalid ballots were registered than signatures recorded: the severity of the issue varies widely. To provide some sense of its gravity, we have grouped precincts by the number of extra signatures into three categories: unlikely to be problematic (1-9 extra signatures), potentially problematic (10-49 extra signatures), and suspicious (50 or more extra signatures). Table 1 presents the number of precincts that fall into each category:


                     Unlikely to be problematic   Potentially problematic   Suspicious
# of precincts       816 (89%)                    56 (6%)                   46 (5%)
Of which, foreign    0                            4                         42

Notably, of the 46 suspicious cases, 42 are in foreign precincts. With foreign precincts, we strongly suspect that there was a data entry error as discussed in more depth in our report. Among domestic precincts, there are four suspicious precincts with more than 50 extra signatures. In Marneuli’s 22nd precinct, there were 51 extra signatures. In Khashuri’s 32nd precinct, there were 63 extra signatures. In Gori’s 63rd precinct, there were 71 extra signatures, and in Bolnisi’s 62nd precinct, there were 87 extra signatures.
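The signature check and the three severity categories can be sketched as a small function. Field names and vote totals below are invented for illustration, though the example call reproduces a 63-signature surplus like the one observed in Khashuri’s 32nd precinct:

```python
# A sketch of the signature check and the report's three severity
# categories. Field names and vote totals are invented for illustration.
def check_precinct(signatures, party_votes, invalid_ballots):
    """Return the signature surplus and a severity category for one precinct."""
    surplus = signatures - sum(party_votes) - invalid_ballots
    if surplus == 0:
        category = "consistent"
    elif surplus < 0:
        category = "more votes than signatures"  # typically the less benign case
    elif surplus < 10:
        category = "unlikely to be problematic"
    elif surplus < 50:
        category = "potentially problematic"
    else:
        category = "suspicious"
    return surplus, category

# Invented totals yielding a 63-signature surplus: 900 - (500 + 300) - 37 = 63.
print(check_precinct(signatures=900, party_votes=[500, 300], invalid_ballots=37))
```

Run against a full table of precinct returns, a check like this produces exactly the category counts reported in Table 1.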

Potential causes include voters coming to polling stations and:
  • Signing the voter list but leaving without voting;
  • Voting only in the majoritarian race rather than in both the proportional and majoritarian races.
In addition, Precinct Electoral Commissions may have inaccurately recorded votes, invalid ballots, and/or signature counts.
In 18 cases, there were fewer signatures on voter rolls than ballots declared invalid and votes recorded. In 17 of the 18 cases, 10 or fewer votes lacked a corresponding signature. In Gori, however, there were 196. This may stem from a recording error, since there was a very high number of invalid ballots (221), or from another issue. Generally, however, the causes of there being more votes and invalid ballots than signatures recorded are less benign. They include:

  • Precinct electoral commissions may have incorrectly counted or reported vote statistics;
  • Voters were allowed to vote without signing the voter list;
  • Ballot box stuffing occurred.

Declining turnout
Another clear logical inconsistency in the official statistics on the 2012 elections is that the number of votes in several precincts declined between 12PM and 5PM, as well as in one district between 5PM and 8PM. That is to say, according to the official record, fewer people had voted at 5PM, in total, compared to five hours earlier at 12PM in these districts.

District       Precinct   Votes between 12PM and 5PM
Saburtalo      63         -1
Nadzaladevi    44         -159
Dmanisi        23         -19
Dmanisi        30         -58
Akhalkalaki    48         -40
Mestia         25         -43
Kobuleti       14         -210

This is likely to be caused by a reporting error, with precinct officials recording the number of votes between these hours rather than the total number of votes at 5PM.
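Checking for this kind of inconsistency is straightforward: cumulative vote counts reported over the course of the day should never decline. A minimal sketch follows; the counts are invented, but they reproduce a 210-vote drop like the one recorded in Kobuleti’s 14th precinct:

```python
# Cumulative vote counts reported at successive times should never decline.
def turnout_inconsistencies(cumulative_counts):
    """Return (interval index, change) pairs where the cumulative count fell."""
    return [
        (i, later - earlier)
        for i, (earlier, later) in enumerate(zip(cumulative_counts, cumulative_counts[1:]))
        if later < earlier
    ]

# Invented counts at 8AM, 12PM, 5PM, and 8PM with a 12PM -> 5PM decline.
print(turnout_inconsistencies([120, 640, 430, 780]))
```

A precinct that recorded interval totals rather than cumulative totals would be flagged by exactly this kind of check.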

Conclusions
While each of the above logical inconsistencies in recording the vote is clearly an issue that could imply malfeasance, we strongly suspect that the vast majority of cases described above stem from recording and data entry errors. While we do not suspect malfeasance in any particular case, and do not believe that recording issues affected the outcome of the 2012 elections, the illogical recording of the vote is a serious issue.

In Georgia, elections and the outcomes of elections are regularly contested, with accusations of all sorts following the results. If Georgian voters see that the voting records have logical inconsistencies in them, this could undermine citizens’ confidence in the accuracy of the vote, and thus the legitimacy of election results.

Based on this, we recommend that the Central Election Commission, District Election Commissions, and Precinct Election Commissions check for logical inconsistencies in election protocols on election day and explain logical inconsistencies in a public and transparent manner if they do occur. Particular emphasis in trainings should be placed on how to fill out voter protocols.

In Thursday’s blog, we show how we will carry out tests for electoral malfeasance in the 2016 elections using tests from the field of election forensics. In the meantime, check out our full report or this visualization of the issues which Jumpstart Georgia created.

Monday, June 06, 2016

Attitudes towards public opinion polls in Georgia (Part 2)

CRRC/NDI’s public opinion polls become the subject of intense discussions after the results of every wave of the survey are released, with politicians from various political parties criticizing the polls. Such a situation, though, is not unique to Georgia. As Professor Arthur Lupia recently put it, pollsters are a “popular whipping boy in politics”, yet they also “can give people a stronger voice”. In a previous blog post, we showed that attitudes toward public opinion poll results are mixed in Georgia, with nearly equal shares of the population trusting, distrusting, and neither trusting nor distrusting the results. This blog post shows that even though public opinion polls are regularly criticized in Georgia, there is still a public demand for them. 

CRRC’s 2015 Caucasus Barometer survey asked respondents to rate the level of their agreement or disagreement with the following statements:

“Public opinion polls help all of us get better knowledge about the society we live in”;
“Ordinary people trust public opinion poll results only when they like the results”; 
“Public opinion polls can only work well in developed democratic countries, but not in countries like Georgia”;
“The government should consider the results of public opinion polls while making political decisions”;
“Politicians trust public opinion poll results only when these are favorable for them or for their party”;
“I think I understand quite well how public opinion polls are conducted”.

Those who, while answering the previous question about trust in polling results, reported they did not know anything about public opinion polls were not asked these questions.

Two-thirds of the population agrees with the statement that the government should consider the results of public opinion polls while making decisions, and nearly half agrees that polls help everyone to better understand the society they live in.

Note: A 10-point scale was used to record answers to these questions. On the original scale, code ‘1’ corresponded to the option “Completely disagree” and code ‘10’ corresponded to the option “Completely agree”. For the charts in this blog post, the answers were grouped as follows: codes ‘1’ through ‘4’ were labeled “Disagree”; codes ‘5’ and ‘6’ were labeled “Neutral”; codes ‘7’ through ‘10’ were labeled “Agree”. Options “Don’t know” and “Refuse to answer” aren’t shown on the charts.
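The recoding described in the note can be expressed as a small function (a sketch, not the actual analysis code):

```python
# The 10-point agreement scale recoded as described in the note above.
def recode(answer):
    """Map a 1-10 agreement score to Disagree / Neutral / Agree."""
    if 1 <= answer <= 4:
        return "Disagree"
    if answer in (5, 6):
        return "Neutral"
    if 7 <= answer <= 10:
        return "Agree"
    return None  # "Don't know" / "Refuse to answer" codes are excluded

print([recode(a) for a in (1, 5, 7, 10)])
```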

The share of the population who disagree with the statement that “polls can only work in developed democratic countries, but not in countries like Georgia,” is almost twice as large as the share of those who agree with this statement.


At the same time, people don’t feel they have a good knowledge of how public opinion polls are conducted. Only 36% report believing they have a good understanding of it. 45% also report that ordinary people trust the results of public opinion polls only when they like them, and 62% report the same in the case of politicians. Increasing knowledge of and trust in polls are clear challenges for pollsters in Georgia.

Whether people trust them or not, polls are important for society, and the results presented in this blog post show that people do acknowledge this importance. Polls help everyone grasp what society thinks, and the majority of the population thinks the government should consider poll results when making decisions.

To learn more about public opinion polls, take a look at earlier blog posts, including Attitudes toward public opinion polls in Georgia, Ask CRRC | Survey vs Census, and Pre-Election Polls | what would be needed. To learn more about how CRRC collects data, take a look at this video or read CRRC-Georgia’s Research Guidelines.