Survey evidence indicates a majority of Russian citizens support their country’s military action in Ukraine. But does this give an accurate picture of public opinion? Using an innovative list experiment, Philipp Chapkovski and Max Schaub demonstrate that a significant percentage of Russian citizens are likely to be hiding their true views about the conflict.
In war, the saying goes, truth is often the first casualty. So when several polls showed that up to 80 percent of Russians supported the war against Ukraine, many doubted these figures, including on this blog. Is it possible that support for the brutal invasion is really so high – or is it just that individuals are afraid to share their true opinions? While it is well known that fear of repression can lead to preference falsification, i.e. people publicly supporting positions they privately don’t share, showing that this mechanism is at work is not easy. After all, people are unlikely to say whether they are hiding their true preferences or not if they are hesitant to reveal these preferences in the first place.
Why is it important to know the true level of public support for the invasion? One aim of the Western sanctions is to weaken Vladimir Putin’s regime, and at least some observers hope that this will ultimately end Putin’s reign. And while authoritarian regimes are usually undermined by actors from within the elite, popular support for regime change also matters. Without popular support, a coup is doomed, often with fatal consequences for its leaders. The level of popular support for, or disapproval of, the war is therefore an important determinant of how realistic the hope for regime change is.
A list experiment
As a way to find out whether people report their attitudes towards the war truthfully, we conducted an online survey containing a list experiment. In the list experiment, respondents were asked whether they personally supported none, one, two, three, or four of the following things (shown in random order): 1) monthly monetary transfers for poor Russian families; 2) legalisation of same-sex marriage in Russia; 3) state measures to prevent abortion; and 4) the actions of the Russian armed forces in Ukraine.
Respondents were explicitly asked to only indicate how many of the items they supported, not which one(s). This means that no one will ever know if a given respondent supports the war (item 4). The list experiment therefore solves the problem of preference falsification – it allows respondents to tell the truth. This method has been used to reveal true levels of racism in the United States or to quantify the occurrence of sexual violence during war, to name but two examples.
However, if we cannot interpret an individual respondent’s answers, how do we measure support for the invasion? The answer is simple yet elegant. The four-item list above was only presented to half of the respondents. The other half received a three-item list, in which the fourth item (support for the invasion) was left out (see Figure 1 below). Who was shown the three-item list and who was shown the four-item list was determined randomly. The difference in the average number of items supported between the two groups therefore answers our question of how many Russians support the invasion.
Figure 1: Screenshots of list experiment in treatment (left) and control condition (right)
Note: Respondents were either shown the question on the left (with the ‘Actions of the Russian armed forces in Ukraine’ option included) or the question on the right. Data collected on 4 April 2022.
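To make the estimator concrete, here is a minimal sketch in Python of the difference-in-means logic. The column names and the simulated responses are placeholders, not the authors’ actual data or analysis code:

```python
import numpy as np
import pandas as pd

# Hypothetical data layout: one row per respondent, 'treated' = 1 if the
# respondent saw the four-item list, 'count' = number of items supported.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "treated": np.repeat([1, 0], 1500),
    "count": np.concatenate([
        rng.binomial(4, 0.4, 1500),  # placeholder treatment-group answers
        rng.binomial(3, 0.4, 1500),  # placeholder control-group answers
    ]),
})

treat = df.loc[df.treated == 1, "count"]
ctrl = df.loc[df.treated == 0, "count"]

# Because assignment is random, the only systematic difference between
# the groups is the fourth item, so the difference in mean counts
# estimates the share of respondents supporting the sensitive item.
support = treat.mean() - ctrl.mean()
se = np.sqrt(treat.var(ddof=1) / len(treat) + ctrl.var(ddof=1) / len(ctrl))
print(f"Estimated support: {support:.2f} (95% CI ±{1.96 * se:.2f})")
```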
We conducted the list experiment among a sample of 3,000 Russians whom we recruited on the online platform Toloka. Toloka is the Russian equivalent of the US-based Amazon MTurk platform, which has frequently been used by political scientists for conducting experiments. The respondents on the platform are not a perfect mirror image of Russian society, of course. They tend to be younger, more urban, and better educated (Table 1). As such, our respondents are likely more liberal than the population average, meaning that our estimates may represent a lower bound of support for the war.
Table 1: Summary statistics of respondent sample vs Russian population
Note: Compiled by the authors.
The goals of our analysis are twofold: first, we want to find out the true level of support for the invasion among our respondents. This answer is provided by our list experiment. Second, we want to determine whether people falsify their preferences. This answer is provided by comparing the rate of support as determined by the list experiment with the responses to a direct question. After replying to the three-item list, respondents in the control condition were asked: “Do you personally support the actions of the Russian armed forces in Ukraine?” That is, we used the wording from the list experiment to formulate a direct question assessing support for the invasion. This allows us to compare the two different ways of asking and, hence, to quantify the amount of preference falsification among our respondents.
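Continuing the same hypothetical sketch, the falsification gap is simply the direct-question rate among control respondents minus the list-experiment estimate:

```python
# Control-group respondents also answered the direct question,
# coded 1 = 'yes, I support' (placeholder values again).
df["direct_support"] = np.nan
df.loc[df.treated == 0, "direct_support"] = rng.binomial(1, 0.68, 1500)

direct_rate = df.loc[df.treated == 0, "direct_support"].mean()
list_rate = (df.loc[df.treated == 1, "count"].mean()
             - df.loc[df.treated == 0, "count"].mean())

# The gap between the two measures is the estimated extent of
# preference falsification among respondents.
print(f"Direct: {direct_rate:.0%}, list: {list_rate:.0%}, "
      f"gap: {direct_rate - list_rate:.0%}")
```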
Results
So how much do Russians support the invasion, and do people tell the truth when asked? The bar plots in Figure 2 below show the level of support as assessed by both the direct question and the list experiment. When asked directly, 68% of our respondents stated that they personally supported the war – a two-thirds majority. However, when using the list experiment to assess the support for the invasion, this share drops significantly, to 53%. In other words, when allowed to reveal their true private preferences, only a simple majority of respondents supports the war. This difference of 15 percentage points is highly statistically significant, meaning that it is extremely unlikely to be the result of chance. Russians, at least those in our sample, clearly hide their true attitudes towards the war.
Figure 2: Support for the Russian invasion of Ukraine
Note: Bars show averages, vertical lines show 95% confidence intervals.
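One way to gauge whether a gap of this size could arise by chance is a bootstrap over respondents, resampling control respondents jointly since their item count and direct answer are paired. This again continues the hypothetical sketch; the authors’ actual significance test may differ:

```python
# Arrays standing in for the real survey responses (from the sketch above).
treat_counts = df.loc[df.treated == 1, "count"].to_numpy()
ctrl_arr = df.loc[df.treated == 0, ["count", "direct_support"]].to_numpy()

boots = []
for _ in range(2000):
    # Resample respondents with replacement within each group.
    t = rng.choice(treat_counts, size=len(treat_counts), replace=True)
    c = ctrl_arr[rng.integers(0, len(ctrl_arr), len(ctrl_arr))]
    # Gap = direct-question rate minus list-experiment estimate.
    boots.append(c[:, 1].mean() - (t.mean() - c[:, 0].mean()))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"Bootstrap 95% CI for the falsification gap: [{lo:.2f}, {hi:.2f}]")
```

If the interval excludes zero, the gap is unlikely to be a chance artefact of sampling.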
In order to illustrate the extent to which the falsification of preferences is driven by fear, we can split the sample into those generally willing to take risks and those typically hesitant to do so. The risk question is regularly used by social scientists and is an important predictor of economic and other behaviours.
As shown in Figure 3, risk attitudes also play a role in whether individuals reveal their true attitudes towards the invasion. Among the half of respondents who are more willing to take risks, it makes little difference whether the question about the invasion is asked directly or by means of the list experiment. In contrast, among those who are more risk-averse, preference falsification is rampant: the share of respondents ostensibly supporting the invasion is roughly 1.5 times higher when asked directly (66%) than when asked by means of the list experiment (46%).
Figure 3: Support for the Russian invasion of Ukraine, by level of risk aversion
Note: Bars show averages, vertical lines show 95% confidence intervals.
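The subgroup analysis amounts to running the same comparison within each half of the sample, for example after a median split on the risk question. A sketch, continuing the hypothetical data above with a placeholder 'risk' column:

```python
# Hypothetical answers to the general risk question on a 0-10 scale.
df["risk"] = rng.integers(0, 11, len(df))
df["risk_averse"] = df["risk"] < df["risk"].median()

# Re-run the direct-vs-list comparison within each half of the sample.
for averse, grp in df.groupby("risk_averse"):
    list_est = (grp.loc[grp.treated == 1, "count"].mean()
                - grp.loc[grp.treated == 0, "count"].mean())
    direct = grp.loc[grp.treated == 0, "direct_support"].mean()
    label = "risk-averse" if averse else "risk-tolerant"
    print(f"{label}: direct {direct:.0%} vs list {list_est:.0%}")
```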
Do Russians tell the full truth when asked about their support for the war? Based on our experiment, we can safely conclude that they do not. On the one hand, this is good news. It means that the prospect for regime change may not be completely implausible. Given that private preferences for the war are significantly lower than those shown in public opinion polls, if and when the regime starts to unravel, new leaders could pursue peace without fearing a popular backlash.
However, being against the war is not the same as being against Putin, whose high levels of support might well be real, as shown by another list experiment. What is more, the fact that a large number of Russians support the invasion even when given the option to reveal their true private preferences is extremely concerning. After all, our estimates come from a population sample that presumably is more liberal than average. Yet even among this group of people – and despite the fact that the brutality of the war is becoming more evident by the day – the Russian leadership ostensibly can count on the genuine support of a substantial part of the population.
The data for this study was collected on 4 April 2022. The data, oTree script, and code used to produce the figures can be found on Harvard’s Dataverse.
Note: This article gives the views of the authors, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Max Titov on Unsplash
The list experiment may not completely eliminate preference falsification, because respondents who suspect a trap could still be caught out.
An industrious security service could simply investigate an individual’s opinions on the other three questions.
The assumption that the “list method” unveils true preferences is not valid: a rational respondent in a vindictive autocracy could easily reply “all of the above” even if they don’t support the war, fearing that even the *possibility* or appearance of not supporting the invasion could be held against them.
Conceptually, we took care to avoid ceiling effects by including issues that enjoy little support in Russia (same-sex marriage) or are highly contested (abortion). This, in theory, should make it less likely that people would be tempted to simply agree with all items, or would expect others to act this way. And indeed, empirically, we see very few people who say they support all items – just 7% of respondents in the control (three-item) condition and 3% in the treatment (four-item) condition. That said, the genuine level of support remains subject to speculation. What we can mainly show is that Russians do seem to falsify their preferences when asked about the war.
Good attempt to detect authentic attitudes.
But the conclusion is critical: “…a large number of Russians support the invasion even when given the option to reveal their true private preferences”. On these figures, that is often a majority of Russians.
This is more important politically than the headline synopsis: “a significant percentage of Russian citizens are likely to be hiding their true views about the conflict.” To which the response is: of course.
I now have list experiments in a dozen autocracies confirming similar response biases.
Can you correct for the bias of the sample and try to estimate the true preferences of all the citizens, not just the types who use Toloka? Kind of useless without that correction.
Is the 15% difference more significant than the respondents’ levels of education?
Has anyone tried to apply post-stratification to this study’s data to obtain an estimate of the discrepancy between the direct question and the list experiment for the general population?
We did indeed try to use weights that reflect Russian population averages in terms of gender (m/w), age (under 40/40+), and education (no university/some). Applying these weights shows higher average support for the war both when using the direct question (71%) and when using the list experiment (61%). The reason is that older and less educated individuals are underrepresented in our sample – but it is among these groups that (genuine) support is higher, meaning there is also less ‘need’ to falsify preferences.
However, our respondents – online workers – may still differ in many ways from the general population. For instance, they are more tech-savvy and hence potentially better informed than the general population. We are therefore not confident that applying weights solves the representativeness problem (which is why we decided against it, and instead provided summary statistics in Table 1).
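For readers wanting to see the mechanics, here is a minimal sketch of the kind of cell weighting described above. The column names and cell labels are placeholders, not the authors’ code:

```python
import pandas as pd

def poststratify(df: pd.DataFrame, cols: list, pop_shares: dict) -> pd.DataFrame:
    """Attach weight = population share / sample share for each cell.

    pop_shares maps a tuple of cell values, e.g. ('w', '40+', 'no_uni'),
    to that cell's share of the target population (placeholder labels).
    """
    sample_shares = df.groupby(cols).size() / len(df)
    keys = list(zip(*(df[c] for c in cols)))
    out = df.copy()
    out["weight"] = [pop_shares[k] / sample_shares[k] for k in keys]
    return out

# Weighted estimates are then weighted means of the same quantities,
# e.g. np.average(ctrl["direct_support"], weights=ctrl["weight"]).
```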
Is Figure 3 mislabeled? It says that the ‘Likes Risk’ group (left two bars) shows higher ‘support for war’ (the label on the y-axis) than the ‘Risk Averse’ group on the right. Are the labels on the x-axis interchanged? (If so, please correct and erase this comment.)
People who like risk do seem more likely to support the war, but the point is really about the difference within each group between the two ways of asking about the war. The risk-averse group has a much bigger gap between the list experiment and the direct question. That is evidence that people who are less willing to take risks are more likely to hide their true feelings.
If you call a war ‘an action’, you can’t count on public condemnation of it. Many Russians have no idea of what is really going on in Ukraine, yet to most of them a war is an absolute crime to be avoided.
Wouldn’t you want to have five options in the list that includes the war, to ensure that people who don’t support any of the options can answer “0” without fear of being persecuted?
That would be better but still not perfect.
This is an intrinsic problem of the list approach, I think.
Depending on the probability distribution of people’s answers, there is always partial information, in a Bayesian sense, that can be inferred from the answers.
If you answer 0, then it’s definitely clear that you don’t support it. The same with ‘all’: you must support everything.
A solution could be to increase the number of options to 10 or even more.
But that increases noise and requires more participants.
Was there any correlation between risk aversion, age, and education? In our surveys, risk aversion grows with age, and in your material it was strongly connected with preference falsification. I wonder whether preference falsification could be more rampant in older groups?
In our sample, risk aversion is almost entirely uncorrelated with age and education (r < 0.5), so the split by risk aversion doesn’t just capture age effects. As for age: in fact, preference falsification is _lower_ among the old and higher among the young – not least because genuine support is higher among older people, in particular older men, so there is less ‘need’ to falsify preferences.
I am looking at the data downloaded from an article in the Washington Post:
https://www.washingtonpost.com/world/2022/03/08/russia-public-opinion-ukraine-invasion/
That poll (the link in the article is no longer active, but I downloaded the data before it was removed) showed 58% support for the war.
On March 31, the New York Times reported a Levada poll showing 83% support.
That is a jump of 25 percentage points in a matter of weeks.
In the polling literature, social desirability bias (giving the interviewer the answer the respondent thinks they want to hear) is well documented. (Over-reporting in response to “Did you vote in the last election?” is a frequently cited example.) However, this big a change in such a short period is unprecedented.
You mention respondents’ fear as a cause. But you did not measure fear as a psycho-physiological state.
I am thinking that a poll in Russia right now should not be compared to polling as practiced in a liberal democracy.
I tend to think that in an authoritarian state, responses may make more sense as part of a programmed litany – a thoughtless response. In that case the autocrat, Putin, broadcasts the state position on a topic and the public, driven by television coverage, thoughtlessly replies “yes, I support it”. Hence opinion is being manufactured, not modified.
A lot like going to Mass.
It would have been interesting to weight the data to better represent the actual population. Did you have a chance to look at that?
We did – I outlined the results from the weighting exercise in a comment on April 8 above. Basically, average support goes up somewhat, and there is a little less preference falsification – but we’re not too confident that weighting fully addresses the representativeness issues.
This statement is too strong as it stands:
> The list experiment therefore solves the problem of preference falsification – it allows respondents to tell the truth.
There is always at least partial information about one’s choice that leaks out. This is easy to show using a simple Bayesian-type calculation.
The partial information can turn into full information if respondents truly want to answer 0 or ‘all’.
Even partial information can be considered risky. The amount of information leaked depends on the probability distribution of the true preferences. Increasing the number of options helps, but the leakage never reaches zero.
Smart, risk-averse people will still not answer faithfully. So, if anything, you may still be overestimating the support.
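To illustrate the kind of Bayesian calculation the commenter refers to: with assumed, purely illustrative support probabilities for the three control items, the posterior probability of supporting the war given each reported count can be computed directly. Counts of 0 and 4 are fully revealing; intermediate counts leak only partial information:

```python
from itertools import product

# Illustrative, assumed support probabilities for the three control items
# (transfers, same-sex marriage, anti-abortion measures) and the war item.
p_items = [0.7, 0.2, 0.3]
p_war = 0.5

# Joint probability P(count = k, supports war = w), assuming independence.
joint = {}
for answers in product([0, 1], repeat=3):
    p = 1.0
    for a, pi in zip(answers, p_items):
        p *= pi if a else 1 - pi
    for war in (0, 1):
        k = sum(answers) + war
        joint[(k, war)] = joint.get((k, war), 0.0) + p * (p_war if war else 1 - p_war)

# Posterior probability of supporting the war given the reported count.
for k in range(5):
    denom = joint.get((k, 0), 0.0) + joint.get((k, 1), 0.0)
    print(f"P(supports war | count = {k}) = {joint.get((k, 1), 0.0) / denom:.2f}")
```

Running this prints 0.00 for a count of 0 and 1.00 for a count of 4, confirming the point that the extreme answers are fully identifying.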
I agree with Y White that respondents are only somewhat likely to feel safe omitting the “war option” when counting. The “allowance increase option” is, to almost the same degree, something respondents would feel they are expected to tick if they are “patriotic” (what kind of person would support an expensive war on foreign territory but not an increase in allowances for the domestic poor?), and the remaining two options concern issues that are at once divisive (respondents may already have voiced opinions about them in other contexts) and relatively easy to correlate with other information about people’s demographic characteristics and online behaviour.
> [W]hat kind of person would support an expensive war on foreign territory but not an increase of allowances to the domestic poor?
Many conservatives, apparently. This is a very common attitude in the U.S.
I understand that it is a hard time for research on Russia, but I do believe that we should follow established rules of scientific inquiry. There are several major questions about the methodology of your research that, in my humble opinion, delegitimize your findings.
1. I believe you greatly simplify the concept of preference falsification. You talk exclusively about fear of repression and make a direct connection to authoritarianism. Kuran, on the other hand, argued that there are various reasons to say or not say certain things. Kuran’s view relies more on psychological research, which makes it more “social”. Because of this, he also noted that preference falsification is present in democracies as well. In other words, if preference falsification is a sufficient problem to reject the result of a particular opinion poll, the same rule should be applied to any poll.
2. The above issue directly connects with the statements about “truth” throughout the article (such as “it allows respondents to tell the truth”). In my opinion, you should clearly state that there are limitations which make it impossible to establish “the truth”. Since many public figures and newspapers have already used your article uncritically, there is a serious risk of your findings being misrepresented in the public sphere.
3. What is “Toloka”? Do you mean Толока-Яндекс? How did you ensure that the responses are from Russians? There are plenty of Russian-speaking people there, including from Ukraine. The question is significant, since you claim “we want to find out the true level of support for the invasion among our respondents” and “Do Russians tell the full truth when asked about their support for the war? Based on our experiment, we can safely conclude that they do not.”
4. Your screenshot shows the questions in English. I hope you did not survey Russians in English. Could you clarify?
5. You are using Toloka, which is a website where you have to pay for responses, am I correct? A do-the-task-receive-the-money website? How do you account for the well-documented tendency to select a middle answer when someone just wants to receive money for a quickly completed task? If you have 5 response options, the answer “2” is easily picked; with 4 options, “1” or “2” could easily be picked. In other words, someone who supports the Russian military’s actions (say, selecting “2” in the treatment condition) might respond “1” or “2” in the control condition. Could you provide a “response time” sheet?
6. There is a big problem with your formulations. First you ask about “actions of the Russian armed forces in Ukraine”, then you write “support for the Russian invasion”. Those are two very different things. You have to maintain the same wording throughout the article; I believe that is a rule of scientific investigation. You cannot report “support for the Russian invasion” if the respondents expressed support for “actions”.
To conclude, I am not in any way trying to attack your integrity, but there are significant problems with your methodology. The concern is that your findings are currently being recited without any mention of these problems/limitations, as though they were established facts and evidence.
This is in response to the last three comments by Cornelius Roemer, O.G. and Ignat Vershinin. We appreciate your comments and agree that we should have formulated some passages more carefully. Since publishing this post, we have been working on a working paper in which we address some of the issues you raise. First, it is correct that the list experiment can reduce bias but will not eliminate it entirely, because careful respondents might still avoid revealing their true preferences. So, yes, it may be that we are still overestimating true support.

Second, on preference falsification: it is true that this may be driven not only by fear, but also by an urge to comply with what is perceived as the majority opinion. To illustrate the point, in the additional experiment we ran (for the working paper version), we asked people about their media consumption habits. Those who receive most of their news from TV, and are thus heavily exposed to state propaganda, show extraordinarily high rates of support for the war when asked directly (87%), but show levels of support similar to those not using TV as their primary news source when asked indirectly via the list experiment. We take this as evidence that some of the difference we observe between the direct question and the list experiment is driven by people trying to conform with what they perceive as the socially appropriate opinion.
On some of the smaller points: yes, we collected the data on Яндекс Толока. We had a screener question that allowed us to screen out people who failed to pay a minimum level of attention (around 100 out of 3,100), the item order was fully randomised, and, of course, all items were formulated in Russian. The specific formulation ‘actions of the Russian military in Ukraine’ followed a question used by the Levada Center (see https://www.levada.ru/2022/03/31/konflikt-s-ukrainoj/).
Thank you for the clarifications!
What bugs me is that, as in way too many online articles, the publisher has offered up diagrams without legends. What do those decimals on the left actually represent? Are they fractions per head – i.e., if I multiply by 100, do I get the percentage of all respondents? It’s such a simple thing to add a legend to a graph, and LSE ought to know better than to publish data without full qualification somewhere on the same page.
Small quibble from an illiterate person who knows nothing of polling but has always been irritated by the probable variability created by the way questions are formulated in polls:
It seems to me “……actions of the Russian military…..” would cause some respondents to factor aspects of patriotism into their answer, i.e. “Hmmmm….well, I do support my troops.”
If the question had somehow been directed more at the “invasion” aspect OR the concept of “military decisions” – and not the possible concept/interpretation of “supporting our boys” (my words) – then there would probably have been more negativity. (Personally, when I first saw the question, “supporting our boys” was one of my first reactions. Go figure.)
There is now a published version of this analysis, using different data but confirming our core results: publicly stated levels of support for the war are high (>60%), but there is also significant preference falsification (~10 percentage points), meaning that true, private support is likely lower. Here is the link to the study: https://journals.sagepub.com/doi/full/10.1177/20531680221108328