Friday, January 27, 2017

The state procurement system in Georgia: Companies’ views (Part 1)

The Unified Electronic System for State Procurement was introduced in Georgia in 2010. The system aimed to simplify the state procurement process and make it transparent. According to the State Procurement Agency, “Every year, the state spends hundreds of millions of lari on procurement of different kinds of goods, services and construction. … Accordingly, private companies ought to be interested in state procurement as an important potential source of increasing their incomes.” However, according to the findings of a survey of companies on the state procurement system, conducted by CRRC-Georgia for Deloitte Consulting LLP and USAID in August 2016, a majority of companies do not actively participate in the state procurement process. Based on CRRC-Georgia’s report on the subject, this blog post discusses problems with the system from the companies’ point of view.

According to the State Procurement Agency’s 2015 annual report, 15.6% of active companies have bid on state procurement tenders (p. 17). In CRRC-Georgia’s survey, 17% of companies report taking part in the state procurement process, and approximately half of these companies report doing so only sometimes or rarely. Seventy-three percent of companies report not being registered in the Unified Electronic System for State Procurement (UES), which is a requirement for bidding on state procurement tenders.


Note: The 2% of companies whose representatives answered “Don’t know” to the question “Is your company registered in the Unified Electronic System for State Procurement?” were excluded from the analysis.

The results of the survey provide some insight into why companies do not participate in state procurement. The most frequently mentioned main reason for not participating (56%) was a lack of interest in the state procurement process. We do not, however, have any information about why there is a lack of interest. The second most common reason company representatives mentioned was that the tenders announced are not applicable to the company’s field of activity (27%).


Note: Only the answers of representatives of companies that are not registered in the Unified Electronic System for State Procurement (73%) are presented in the chart above.

A majority of companies (64%) report having no information about the announcement of state procurement tenders. Given this general lack of information, it is not surprising that their representatives found it difficult to assess how fairly different types of tenders are conducted. Notably, representatives of 76% of the companies report that they have not heard about the seminars the State Procurement Agency conducts to increase business people’s knowledge of the state procurement system.


Note: The chart shows the distribution of answers of the 83% of companies that have not participated in the state procurement system. Companies whose representatives answered “Refuse to answer” are excluded from the analysis. There are five types of state procurement tenders in Georgia: simplified procurement, simplified electronic tender, electronic tender, consolidated tender and contest. Definitions of each type of tender are available here.

It is possible that the lack of information is an obstacle to greater participation in state procurement processes. Thus, the State Procurement Agency should better inform companies about its activities.

The second part of this blog post, which will be published next Monday, shows how representatives of companies assess the state procurement system based on whether or not they have participated in it.

Tuesday, January 24, 2017

Developing the “culture of polling” in Georgia (Part 2): The misinterpretation and misuse of survey data

[Note: This is the second part of a guest blog post from Natia Mestvirishvili, a Researcher at International Centre for Migration Policy Development (ICMPD) and a former Senior Researcher at CRRC-Georgia. The first part of this blog post is available here. This post was co-published with the Clarion.]

The misinterpretation of survey findings is a rather widespread problem in Georgia. Unfortunately, it often leads to the misuse of data, which not only diminishes the importance of survey research, but also leads to more serious consequences for the country.

To illustrate how one might misinterpret survey data, the following example from CRRC’s 2015 Caucasus Barometer survey can be used. When asked, “What do you think is the most important issue facing Georgia at the moment?”, only 3% of the population mentioned low pensions, 2% the unaffordability of healthcare, and 2% the low quality of education. A number of issues including the violation of human rights, unfairness of courts, corruption, unfairness of elections, unaffordability of professional or higher education, the violation of property rights, gender inequality, religious intolerance and emigration were grouped into the category “Other”, because, in total, only 7% of the population mentioned these issues.

Based on these findings, one might think that these issues are unimportant in Georgia. However, this would be a misinterpretation, which happens for a number of reasons. Here, I focus on two. The first is:

1. Not paying attention to the exact formulation (wording) of the survey question, answer options, and instructions 

One reason a large share of the population did not mention the violation of human rights, gender inequality and religious intolerance as important issues is because each respondent could name only one issue. The options they chose (unemployment and poverty were named most often) were more important to them than human rights, gender inequality, and religious intolerance.

If a different question – “How important is the issue of human rights [or gender inequality, or religious intolerance] for Georgia?” – had been asked, the share of people who would answer that these issues are important would very likely be much higher than one or two percent. This wording would make people judge the issue not in relative, but absolute terms.

When working with survey findings, the exact wording of the question(s) should always be taken into account. When a question is paraphrased or reworded, this will almost inevitably lead to some degree of misinterpretation. Fieldwork instructions should also be taken into account. For example, was a show card used for the question? Was the number of answer options a respondent could choose limited or not?

Thus, it is crucial that survey results are understood and reported, keeping in mind the exact wording of the question(s), answer options provided, and any instruction(s) that had to be followed during the interviews. This will help minimize the risk of misinterpretation.

A second common cause of misinterpretation of public opinion polls in Georgia is:

2. Interpreting public opinion survey results as ‘reality’ rather than perceptions 

Even if the question discussed above had been asked so as to measure the absolute rather than relative importance of the issues, and the findings still suggested that people did not consider the violation of human rights, gender inequality and religious intolerance important issues for the country, the findings should not be interpreted as a direct reflection of ‘reality.’ As discussed in the first part of this blog post, public perceptions are not ‘reality’.

Interpreting public perceptions as objective ‘reality’ is incorrect, because both perceptions and misperceptions, information and misinformation shape public opinion. It is equally important to remember that, sometimes, ‘reality’ simply does not exist. Moreover, as a number of studies have shown, it is often the case that people are simply wrong about a wide variety of things.

None of the above, however, diminishes the role and importance of public opinion polls. In fact, the misperceptions that survey findings can uncover are often among the most important outcomes for policymakers. Instead of putting an equal sign between public perceptions and ‘reality,’ data analysts and policymakers should critically analyze and address gaps between the two.

Going back to the above example, an accurate interpretation would consider the findings in the context of other studies that focus specifically on human rights (or gender equality, or religious tolerance). Indeed, numerous studies indicate that Georgia has serious problems with all three issues, i.e., the population does not have much respect for human rights, gender equality, or people of other religions. A look at the latest Human Rights Watch report on Georgia alone makes this quite clear.

Looking at inconsistencies between people’s answers to different questions, or between survey findings and other types of data when available and relevant, is a good way to uncover misperceptions. For example, a 2014 CRRC/NDI survey found that roughly every fourth person reports there is gender equality in Georgia. However, about half of those who think so also think that taking care of the home and family makes women as satisfied as having a paid job, and that in order to preserve the family, the wife should endure a lot from her spouse.

The answers to these three questions should be presented and discussed not separately, as independent findings, but rather as interrelated findings that, taken together, give a better understanding of the assessments of and attitudes towards gender equality in Georgia. In this context, the question that needs to be raised and answered is why and how this inconsistency between answers occurs.
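
In practice, such inconsistencies are easy to surface with a simple cross-tabulation of the relevant questions. Below is a minimal sketch in Python using pandas; the column names and toy data are hypothetical illustrations, not the actual CRRC/NDI variables:

    import pandas as pd

    # Hypothetical toy data: one row per respondent. The column names and
    # values are illustrative; they are not the actual survey variables.
    df = pd.DataFrame({
        "gender_equality_in_georgia": ["Yes", "Yes", "No", "Yes", "No", "Yes"],
        "home_satisfies_women": ["Agree", "Agree", "Disagree",
                                 "Disagree", "Agree", "Agree"],
    })

    # Cross-tabulating the two questions shows what share of those who say
    # there is gender equality also hold traditional views on gender roles.
    print(pd.crosstab(df["gender_equality_in_georgia"],
                      df["home_satisfies_women"],
                      normalize="index"))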

The misuse of survey findings happens when findings are presented and used in a way that reinforces people’s misperceptions and prejudices. The misinterpretation of findings often leads to their misuse, and eventually, can lead to serious issues.

Again, going back to the most important issue example, it would be a misuse of survey findings to conclude that since the violation of human rights, religious intolerance or gender inequality seem to not be perceived as important issues in Georgia, no policy is needed to address them. As demonstrated above, alternative sources show that these issues need to be addressed, and, at the very least, awareness of them needs to increase. Thus, policy intervention is needed.

What the survey findings tell us in this case is that people underestimate the importance of these issues. In turn, this contributes to the worsening of the problems. If you believe gender inequality or religious intolerance are not important, you probably would not care about these issues either. Thus, the larger the gap between public perceptions and reality, the more important it is for policymakers to address the issue.

Public opinion should not be used as a directive for policy making without careful analysis of misperceptions and alternative sources of information.

Unfortunately, in Georgia it is sometimes exactly the misperceptions that drive policy. To take recent developments: misperceptions about homosexuality have led politicians to talk more about the prohibition of same-sex marriage, something that has never been allowed in Georgia in the first place, than about human rights issues. Misperceptions about gender roles led politicians to reject a proposal that would define femicide as the premeditated murder of a woman based on her gender. Looking forward, the country cannot allow the misperception that the EU threatens Georgia’s traditions to drive its foreign policy.

Now more than ever, when Georgia is still attempting to transition into a stable, democratic country, the country needs policymakers and researchers who have the knowledge and skills to critically analyze survey findings and use their potential for the development of the country.


Monday, January 16, 2017

Developing the “culture of polling” in Georgia (Part 1): Survey criticism in Georgia

[Note: This is a guest post from Natia Mestvirishvili, a Researcher at International Centre for Migration Policy Development (ICMPD) and former Senior Researcher at CRRC-Georgia. This post was co-published with the Clarion.]

Intense public debate usually accompanies the publication of survey findings in Georgia, especially when the findings are about politics. The discussions are often extremely critical or even call for the rejection of the results.

Normally, criticism of surveys would focus on the shortcomings of the research process and help guide researchers towards better practices, making surveys a better tool for understanding society. In Georgia, most of the current criticism of surveys is, unfortunately, counterproductive and driven mainly by an unwillingness to accept findings that the critics simply do not like. This blog post outlines some features of survey criticism in Georgia and highlights the need for constructive criticism aimed at improving research practice, which is extremely important and useful for the development of the “culture of polling” in Georgia.

Criticism of surveys in Georgia is often triggered by discrepancies between the findings and the critics’ own opinions about public opinion. Hence, survey critics claim that the findings do not correspond to ‘reality’. Or rather, to their reality.

But are surveys meant to measure ‘reality’? For the most part, no. Rather, public opinion polls measure and report public opinion, i.e., the views and opinions that people have, which are shaped not only by accurate perceptions but also by misperceptions. There is no ‘right’ or ‘wrong’ opinion. It is equally important that these are the opinions people feel comfortable sharing during interviews, while talking to complete strangers. Consequently, and leaving aside deeply philosophical discussions about what reality is and whether it exists at all, public opinion surveys measure perceptions, not reality.

Among the many assumptions that may underlie criticism of surveys in Georgia, critics often suggest that:

  1. They know best what people around them think;
  2. What people around them think represents the opinions of the country’s entire population. 

However, both of these assumptions are wrong, because, in fact:

  1. Although people in general believe that they know others well, they don’t. Extensive psychological research shows that there are common illusions which make us think we know and understand other people better than we actually do – even when it comes to our partners and close friends;
  2. Not only does everyone have a limited choice of opinions and points of view in their immediate surroundings compared to the ‘entire’ society, but it has also been shown that people are attracted to similarity. As a result, primary social groups are composed of people who are alike. Thus, people tend to be exposed to the opinions of their peers, people who think alike. There are many points of view in other social groups that a person may never come across, not to mention understand or hold; 
  3. Even if a person has contact with a wide diversity of people, this will never be enough to be representative of the entire society. Even if it were, individuals lack the ability to judge how opinions are distributed within a society.


To make an analogy, assuming the opinions we hear around us can be generalized to the entire society is very similar to zooming in on a particularly large country, like Canada, on a map of a global freedom index, and assuming that since Canada is green, i.e. rated as “Free”, the same is true for the rest of the world. In fact, if we zoom out, we will see that the world is far from all green. Rather, it is very colorful, with most countries shown in colors other than green, and “big” Canada is no indication of the state of the rest of the world.



Source: www.freedomhouse.org

People who think that what people around them think (or, to be even more precise, who think that what they think people around them think) can be generalized to the whole country make a similar mistake.

Instead of objective and constructive criticism based on unbiased and informed opinions and professional knowledge, public opinion polls in Georgia are mostly discussed based on emotions and personal preferences. Professional expertise is almost entirely lacking in those discussions.

Politicians citing questions from the same survey in either a negative or positive context, depending on whether they like the results, is a good illustration of this claim. For example, positive public evaluations of a policy or development are often proudly cited by political actors without any doubts about the quality of the survey. At the same time, low and/or decreasing public support for a particular party in the findings of the same survey is “explained away” by the same actors as poor data quality. Subsequently, politicians may express their distrust of the research institution that conducted the survey.

In Georgia and elsewhere, survey criticism should focus on the research process and aim at its improvement, rather than at rejecting the role and importance of polling. It is the duty of journalists, researchers and policymakers to foster healthy public debate on survey research. Instead of emotional messages aimed at demolishing trust in public opinion polls and pollsters in general, what is needed is rational and careful discussion of the research process and its limitations, of research findings and their meaning and significance, and, where possible, of ways to improve survey practice.

Criticism focused on “unclear” or “incorrect” methodology should be elaborated by specifying, in professional terms, the aspects that are unclear or problematic. Research organizations in Georgia would highly appreciate criticism that asks specific questions aimed at improving the survey process. For example: Does the sample design allow for the generalization of the survey results to the entire population? How were misleading questions avoided? How were the interviewers trained and monitored to minimize bias and maximize the quality of the interviews?

This blog post has argued that survey criticism in Georgia is often based on inaccurate assumptions and conveys messages that do not help research organizations improve their practice. These messages are also often dangerous, as they encourage uninformed skepticism towards survey research in general. Instead, I call on commentators to engage in constructive criticism, which will contribute to improving the quality of surveys in Georgia and, in turn, allow people’s voices to be brought to policymakers and their decisions to be informed by objective data.

The second part of this blog post, to be published on January 23, continues the topic, focusing on examples of misinterpretation and misuse of survey data in Georgia.

Tuesday, January 10, 2017

Sex selective abortion is likely less common in Georgia than previously thought

[This blog post was co-published with Eurasianet. The views presented in this article do not necessarily reflect the views of CRRC-Georgia.]

Sex-selective abortion in Georgia is a topic that has caught international attention. From an Economist article published in September 2013 to a 2015 UN report, Georgia tends to be portrayed as having one of the worst sex-selective abortion problems in the world. Closer inspection of the data, however, suggests the issue may be blown out of proportion.

The first study to draw attention to the sex-selective abortion issue in Georgia was published in 2013 in the journal International Perspectives on Sexual and Reproductive Health, and relied on statistics compiled by the World Health Organization. The authors found a sex-at-birth ratio of 121 boys for every 100 girls born in Georgia from 2005-2009. That number suggested there was a problem: one of the most common estimates of the natural sex-at-birth ratio is 105 boys for every 100 girls, or 95.2 girls for every 100 boys. Any difference between the natural and observed ratios in favor of boys is generally thought to be a proxy for sex-selective abortion.
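
As a quick check of the conversion between the two conventions, an illustrative calculation using only the figures cited above:

    # Converting the natural sex-at-birth ratio between the two conventions:
    # 105 boys per 100 girls, expressed as girls per 100 boys.
    boys_per_100_girls = 105
    girls_per_100_boys = 100 / boys_per_100_girls * 100
    print(round(girls_per_100_boys, 1))  # 95.2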

The study suggested that Georgia had one of the largest sex-selective abortion problems in the world.

However, a missing data issue, a rounding error, and an anomalous sex-at-birth ratio in 2008 drove up the sex-at-birth ratio reported in the original study. In the article, the sex-at-birth ratio for 2005-2009 is actually the average of the ratios in 2005 and 2008 alone. Martin McKee, one of the co-authors of the study, stated, "The figure of 121 boys to 100 girls in 2005-2009 was calculated on the basis of the data submitted to the WHO at the time, from which several years were missing."

The missing data had a very large effect on the results of the study. In 2008, the ratio of boys to girls born in Georgia was exceptionally high, at 128 boys born for every 100 girls. In 2005, 113 boys were born for every 100 girls, another high year for Georgia. Using these two years alone leads to an average of 120 boys born for every 100 girls between 2005 and 2009.
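
A back-of-the-envelope check of that two-year average, using the rounded yearly ratios cited above (the unrounded data give the figure of 120 discussed below):

    # Average of the only two years present in the data submitted to the
    # WHO at the time. The yearly ratios here are rounded, so the result
    # differs slightly from the figure computed from the unrounded data.
    ratio_2005 = 113  # boys born per 100 girls
    ratio_2008 = 128
    print((ratio_2005 + ratio_2008) / 2)  # 120.5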

Notably, when asked about the discrepancy between the 121 boys to 100 girls ratio reported in the article and the 120 to 100 ratio in the data, McKee acknowledged, “A very small rounding error crept in.”

With the full data between 2005 and 2009, however, the average sex-at-birth ratio drops to 113 boys for every 100 girls, rather than 120 – about half the reported deviation from the natural rate.


On top of the missing data, the fact that 2008’s sex-at-birth ratio is an outlier further exaggerates the reported magnitude of sex-selective abortion in Georgia. While the average ratio between 2005 and 2009 was 113 boys for every 100 girls, the average for the same period excluding 2008 is 110 boys for every 100 girls. That is to say, excluding 2008, there were 5 excess boys born for every 100 girls rather than 8.

To flip the statistic around and look at the ratio of girls born for every 100 boys, the average between 2005 and 2009 was 88 including 2008 and 91 excluding it. Translating this into the number of missing girls, by subtracting the number of girls born according to official data from the number expected, suggests 6.74 missing girls for every 100 boys born when including the 2008 data. Without 2008, this drops to 4.20.
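
A minimal sketch of how such figures can be approximately reconstructed from the rounded ratios above (the article’s exact values, 6.74 and 4.20, were derived from the official birth counts linked at the end of this post, so this approximation differs slightly):

    # Approximate reconstruction of the 'missing girls' figures from the
    # rounded sex-at-birth ratios cited above.
    NATURAL_GIRLS_PER_100_BOYS = 95.2

    def missing_girls_per_100_boys(boys_per_100_girls):
        # Girls expected per 100 boys at the natural ratio, minus observed.
        observed = 100 / boys_per_100_girls * 100
        return NATURAL_GIRLS_PER_100_BOYS - observed

    print(round(missing_girls_per_100_boys(113), 1))  # incl. 2008: ~6.7
    print(round(missing_girls_per_100_boys(110), 1))  # excl. 2008: ~4.3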


The exact causes of the situation recorded in 2008 are unknown. Although a higher than natural sex-at-birth ratio favoring boys is often explained by sex-selective abortion and infanticide, comparing an estimate of the number of missing girls to the number of abortions over time suggests that some other factor may be at work.

Dividing the number of missing girls by the number of abortions in a year provides an estimate of the share of abortions that would need to be sex selective in order to explain the sex-at-birth imbalance. These calculations would suggest that sex-selective abortion increased from 6% of all registered abortions in 2007 to 24% in 2008.
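
The calculation behind these shares is a simple division (a sketch; the yearly counts come from the official data linked at the end of this post and are not reproduced here, and the variable names below are hypothetical):

    # Share of registered abortions in a given year that would have to be
    # sex selective to account for the observed sex-at-birth imbalance.
    def implied_sex_selective_share(missing_girls, registered_abortions):
        return missing_girls / registered_abortions

    # e.g., with the (hypothetical) yearly counts filled in:
    # implied_sex_selective_share(missing_girls_2008, abortions_2008) -> ~0.24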

The calculations suggest one of three things: there was a dramatic increase in sex-selective abortions in 2008, the number of unregistered abortions dramatically increased and they were also predominantly sex selective, or something else was driving the anomalous sex-at-birth ratio.

Many possible explanations fall into the “something else” category. Notably, given the often poor state of data collection at the municipal level in Georgia, where births are recorded, recording error could explain the discrepancy.

The data alone cannot tell us whether 2008 saw a dramatic increase in the number of sex-selective abortions or whether something else drove the anomalous sex-at-birth ratio. What is clear is that Georgia’s problem with sex-selective abortion is smaller than often portrayed.

That isn’t to say it is not a problem. In 2015, there were still about 4 missing girls for every 100 boys born.

Understanding the magnitude of the problem, though, is a first step towards addressing it.

Dustin Gilbreath is a Policy Analyst at CRRC-Georgia. He co-edits the organization’s blog, Social Science in the Caucasus.

To view the data used to calculate the figures used in this article, click here.

Monday, January 02, 2017

Three months before the 2016 Parliamentary elections: Trust in the Central Election Commission and election observers in Georgia


The June 2016 CRRC/NDI Public attitudes in Georgia survey, conducted three months before the Parliamentary elections, provides interesting information about trust in the Central Election Commission (CEC) and election observers, both local and international.

The CEC’s role in conducting elections in Georgia has been subject to contentious political debate about the organization’s impartiality. The survey data demonstrate the public’s lack of trust in the institution. In June, only 29% of the population of Georgia believed that the CEC would conduct the parliamentary elections “well” or “very well”. In contrast to this general opinion, a majority (60%) of likely voters for the incumbent Georgian Dream party believed the same, while fewer than a third of likely voters for the two other parties that won seats in parliament (the United National Movement and the Alliance of Patriots of Georgia) believed that the CEC would conduct the elections “well” or “very well”.


Note: The shares of those reporting they would vote for either the Movement State for People or the Alliance of Patriots of Georgia were very small (4% and 3%, respectively), and the results for the supporters of these two parties are only indicative.

Unsurprisingly, trust in Georgian and international observers also differs. Overall, the population of Georgia tends to trust international observers more than Georgian observers. Forty-eight percent report either “fully trusting” or “trusting” international observers, compared to 34% who report trusting Georgian observers. The gaps in trust in these two groups of observers are even wider depending on party support: while 63% of United National Movement supporters report either “fully trusting” or “trusting” international observers, only 29% “fully trust” or “trust” Georgian observers.


Note: The shares of those reporting they would vote for either the Movement State for People or the Alliance of Patriots of Georgia were very small (4% and 3%, respectively), and the results for the supporters of these two parties are only indicative.

To explore the CRRC/NDI June 2016 survey findings, visit CRRC’s Online Data Analysis portal. On the topic of anomalies in the voting process, CRRC-Georgia recently conducted the Detecting Election Fraud through Data Analysis (DEFDA) project regarding the 2016 parliamentary elections. Preliminary findings can be found here. CRRC-Georgia has also previously published blog posts on the electoral process in Georgia, including on government spending before elections and public opinion shifts before and after elections.