We live in a world in which citizens must daily process statistics on GDP, unemployment, COVID-19 cases and deaths, and myriad other figures and forecasts. Are these statistics, and the interpretative analyses built on them, accurate enough to support credible inferences?
Whoever receives numerical signals must also discern the quality of the data they consume. This discernment, however, is arduous. Statistics can be methodologically unsound in ways a novice reader might overlook. Worse still, statistics can be manipulated.
Institutional data are not intrinsically unreliable. On the contrary, in an era pervaded by conspiracy theories and fake news, some institutions offer our best available estimates of the causes of things and serve as a safeguard against new social dangers in the consumption of statistics. Nonetheless, it is worth setting out a few instances in which statistics are vulnerable to inaccuracy or manipulation.
Take the Covid-19 pandemic, for example. A prominent global database administered at Johns Hopkins University is updated daily with national estimates of reported coronavirus cases and deaths. Yet even though international comparisons between countries of similar population size, such as Belgium and Greece, might appear to make sense, they are not always informative. Officials all too confidently compare countries as though contextual differences could plausibly be held equal. Little attention is paid to each country’s capacity to perform mass and targeted testing, to identify Covid-19-related deaths, or to process health data. Standard rules of inference are given short shrift in these comparisons, and media outlets do little to help.
The quantification of important macroeconomic indicators complicates matters further. Accurately measuring the unemployment rate or GDP is hard in the presence of a shadow economy and deliberate misreporting by citizens. As a result, true global GDP rankings remain obscure.
Statistics can also be manipulated to serve political interests. In a recent discussion with representatives from across the Hellenic political spectrum, Yanis Varoufakis called on the incumbent Greek government to correct the ‘statistical dishonesty’ in its production of the most recent quarterly growth rate. In addition, his party, acting as a watchdog for sound national statistics, identified discrepancies in the number of unemployed people in Greece across different government reports.
What are the implications for democracy of government-driven data manipulation, and for whom?
In a representative democracy, manipulated statistics exacerbate informational asymmetries. As the theory of contractual incompleteness notes, information imbalances invite exploitation and undermine democracy. In this way, the public is unknowingly served political justification for policies and measures that would otherwise be questionable, anti-popular, or even socially toxic.
In countries characterized by crony capitalism and ‘corporatocracy’, information imbalances conceal governmental favouritism towards elites whose disclosure would trigger political distrust. Those same imbalances, in turn, camouflage information that could be used to advance justice. Simply put, a government that distorts information capitalizes on the public’s ignorance.
Numerical descriptions can be both a blessing and a curse. Statistics are susceptible to misconception; in policy terms, even institutional data sometimes mislead. And the democratic bargaining power of the people deteriorates when governments engage in conscious, deliberate statistical manipulation. Citizens need to be aware of this phenomenon and approach statistical data with appropriate caution.
This article gives the views of the authors, and not the position of the Social Policy Blog, nor of the London School of Economics.