LSE BPP

June 19th, 2020

Behavioral public performance: Making effective use of metrics about government activity

Oliver James, Donald Moynihan, Asmus Olsen and Gregg van Ryzin discuss how insights from behavioural science show the way managers, politicians and citizens make sense of, and act on, information about government performance.

Over the past four decades, two revolutions arose separately but in parallel. The first is a revolution in governance that has made the quantification of performance a central strategy in managing public goods and services. Public sector reforms have embedded these metrics into internal and external processes of accountability and contracting. Many national and international policy programmes report progress on a range of metrics (such as the UN Sustainable Development Goals). Similarly, most local governments, cities and states have some form of metric-based reporting of performance to citizens.

The second is a revolution in social science research that employs new behavioral theories and experimental tools of inquiry to better understand how we think and behave. In our new book, we set out a general framework to connect these two revolutions with a model of behavioral public performance. By doing so, the focus on real-world performance metrics reveals the usefulness of behavioral theories for understanding often messy and complicated questions in politics and administration, such as those we are seeing with the COVID-19 crisis.

Debates about government performance are inherently debates about numbers. One extreme government response to poor performance is to fudge the numbers, or even make them disappear, as Brazil has done with its COVID-19 data. We report evidence that, when asked to make difficult decisions, people express a desire to see performance metrics but then neglect to use the data, or interpret it in a biased manner. People’s use of performance metrics is affected by superficial characteristics of numbers, framing of information, choice of benchmarks or comparisons, motivated reasoning and differing trust in information sources. These factors combine with individual characteristics and the political context to shape perceptions, judgment and decisions.

Stories vs metrics

Yet stories about performance are more compelling than metrics for many people. Citizens perceive numerical performance information as more objective and authoritative, but whilst this is their stated preference, their revealed preference is for other sorts of information. For example, when presented with both health data and stories of individual patients, they find the anecdotes more compelling and memorable. In the context of COVID-19, this suggests that the reporting of tragic cases will tend to be more influential than the raw numbers. Stories of an errant government advisor (or two) violating lockdown rules, for example, may also have a greater impact on people’s judgment about safety than evidence-based representations of the data.

Framing matters

People are not natural statisticians: in our book we show experimental evidence that the framing of data matters. People suffer from denominator neglect, meaning they pay less attention to the denominator in ratios. For example, people feel less safe when presented with crime data showing 4,000 violent crimes per one million inhabitants than when shown 0.004 violent crimes per inhabitant, although the two ratios are mathematically equivalent. This is one reason why discussion of mortality rates (saying 1.5% of those infected might die, for example) seems to carry less of an impact than communicating the total number of deaths.
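To see that the two framings in this example describe the same underlying rate, the arithmetic is simply:

\[ \frac{4{,}000 \text{ violent crimes}}{1{,}000{,}000 \text{ inhabitants}} = 0.004 \text{ violent crimes per inhabitant} \]

Denominator neglect means people react to the salient numerator (4,000 versus 0.004) rather than to this common rate.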

The politics of performance data

In our book, we also devote a good deal of attention to the politics of performance data. When asked to examine the same performance data, both members of the public and politicians engage in motivated reasoning, interpreting the data to fit their prior political beliefs. For example, conservatives tend to judge data about the Obama-era Affordable Care Act as showing poorer performance than liberals do. The reluctance of particular communities to confront the public health implications of COVID-19 is consistent with the role of identity protection in the interpretation of metrics about the disease. In this way, our research suggests that decoupling COVID-19 from political disputes could help build consensus about facts and promote more effective policy responses.

Across a variety of contexts, citizens tend to be more doubtful about the good performance of public policies and programmes when it is reported by government itself rather than by independent sources. This “incredibly good performance effect” is likely to be compounded in contexts where society is skeptical about the public sector and its competence. In the COVID-19 crisis, issues of trust in information from national and international sources have similarly been evident. Establishing credible information is central to persuading people to take notice, and especially to take action that incurs a personal cost, in efforts to mitigate COVID-19.

Using data for good

Our research also offers some practical advice for how to use behavioral public performance information for good. Returning to the power of framing, we illustrate that if politicians can credibly frame metrics in terms of collective challenges, people will interpret the information more rationally. In the case of the Affordable Care Act in the US, priming people to think about their healthcare needs (in contrast to a political prime) increased consensus about which data should be used and how it should be interpreted. Some leaders seem to have managed to use data as part of a narrative about the collective during the COVID-19 emergency. Even beyond the current moment, such insights are critical if governments and societies are to take advantage of the huge amount of performance data available to improve policy and outcomes.

_____________________

Note: the above draws on the authors’ new book, Behavioral Public Performance, which is free to download (until 24 June 2020).

About the Authors

Oliver James is Professor in the Department of Politics at the University of Exeter.

Donald P. Moynihan is the inaugural McCourt Chair at the McCourt School of Public Policy, Georgetown University.

Asmus Leth Olsen is Professor in the Department of Political Science at the University of Copenhagen.

Gregg G. Van Ryzin is Professor in the School of Public Affairs and Administration at Rutgers University.

 

All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science. Featured image credit: Franki Chamaki on Unsplash.


This work by British Politics and Policy at LSE is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported.