June 27th, 2016

How to make better mistakes in public policy: Learn from the negative results just as much as the positive ones.

We all make mistakes, a tendency which also extends to those who work in public policy. But we often only hear about the successes. Bucking this trend, Kevin Arceneaux and Daniel Butler describe a recent pilot program aimed at boosting civic engagement. Rather than increasing the number of people who volunteered for town committees as intended, the tactics they tested either had no effect or actually reduced the chances that people would volunteer.

This piece originally appeared on LSE USAPP Blog.

It is a truism that we learn just as much from things that don’t work as from things that do. It’s the ill-fated decisions of life — the easily avoided sunburns and haste-induced accidents — that teach what not to do in the future.

The same is true for public policy, and perhaps even more so. Unlike that easily avoided sunburn, public policy tackles complex issues with no fix as simple as putting on sunscreen. All we can do is do our best with the knowledge we have, recognizing that many of the tactics we choose will not work out as planned. The silver lining is that we can learn from those little failures.

Nonetheless, the publications in most academic journals seem to tell us only what works. Part of the reason lies in publication bias. There is a human tendency to want to hear about what works, and so editors and reviewers privilege research that shows positive results. Researchers respond to these incentives by putting studies that don’t produce dramatic results in the proverbial file drawer.

Fortunately, the editors of Public Administration Review took a different tack and gave us a chance to report what does not work when it comes to increasing local civic engagement. In early 2013, we were connected with a small city looking to boost civic engagement through the Laboratories of Democracy. Its mission is not simply to help policymakers get advice from policy experts; its primary aim is to foster collaborations between policymakers and academic researchers that put policy advice to the test.

Although academics often have well-informed ideas about what policies governments should pursue, those ideas should be tested on the ground.  Even something that worked in one place and time may not work in another. Moreover, policymakers often need solutions tailored to the specific problems they face. Consequently, policy experts are often asked to extrapolate from prior research, which makes the effectiveness of proposed solutions even less certain.

Image credit: broen, Pixabay (public domain)

With this background in mind, we worked with a small Midwestern town on recruiting citizens to serve on its unpaid, volunteer committees. We specifically worked with town leaders to increase the likelihood that the town committees reflected the town’s heterogeneity. In practice, many of the committee positions went unfilled, and those who did choose to serve overwhelmingly tended to be affluent.

We knew a good deal about what works when it comes to getting people to vote, but less about how to get people to commit to long-term civic engagement. So we tried to extrapolate from the existing Get Out the Vote literature in crafting a policy solution for the city, though we were unsure whether it would work as intended. Consequently, we proposed that the city conduct a small pilot experiment to see whether our proposal worked as expected. Fortunately, the town agreed to run it, because our policy proposal decidedly did not work as intended.

We offered two tactics to increase participation on town committees. The first was to provide public recognition for those who served on committees; the second was to provide free training. The first tactic drew on “nudge” psychology, which uses small gestures to motivate consequential behaviors. The second tactic attempted to address resource-based reasons why less affluent individuals may not participate.

We embedded both tactics in a town survey taken by 340 individuals. Each respondent received one of three messages: 1) a simple request to sign up to be on a committee, 2) the simple request plus a promise of social recognition, or 3) the simple request plus a promise of free training. These treatments were assigned at random, so if more people were interested in signing up after receiving, say, the social recognition message, it would suggest that we had hit on a winner. A minimal sketch of this kind of random assignment appears below.
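To make the design concrete, here is a minimal Python sketch of randomly assigning respondents to the three message conditions. This is an illustration, not the authors’ actual implementation: only the three messages and the sample size of 340 come from the post; the function name and seed are invented for the example.

```python
import random

# The three survey messages described above.
CONDITIONS = [
    "simple request",
    "simple request + social recognition",
    "simple request + free training",
]

def assign_conditions(n_respondents: int, seed: int = 42) -> list[str]:
    """Randomly assign each respondent to one of the three messages.

    Each respondent independently draws a condition; this random
    assignment is what makes cross-group comparisons causal.
    """
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_respondents)]

assignments = assign_conditions(340)  # 340 survey takers, as reported
for condition in CONDITIONS:
    print(condition, assignments.count(condition))
```

Because assignment is random, any sizable difference in sign-up rates across the three groups can be attributed to the message itself rather than to who happened to receive it.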

Unfortunately, we did not hit on any winners. The social recognition message had virtually no effect: people who received it were just as likely to sign up for a town committee as people who received the simple request message. The training message actually decreased interest relative to the simple request message.

When we dug a little deeper, we found that less affluent individuals were the ones most turned off by the training message, exactly the opposite of what we had intended. Among low-income individuals who received the simple request to sign up, 20 percent expressed interest, whereas among low-income individuals who received the training message, only 6 percent did so.
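As an illustrative calculation (not from the paper), here is how one might gauge the size of that gap with a two-proportion z-test. The 20 percent and 6 percent rates come from the post, but the subgroup sizes are hypothetical, since the post does not report how many low-income respondents were in each arm.

```python
import math

# Hypothetical subgroup sizes -- the post reports only the rates
# (20% vs. 6%), not the number of low-income respondents per arm.
n_request, n_training = 50, 50
signed_request = round(0.20 * n_request)    # 10 of 50 expressed interest
signed_training = round(0.06 * n_training)  # 3 of 50 expressed interest

p1 = signed_request / n_request
p2 = signed_training / n_training

# Two-proportion z-test: pool the rates under the null of no difference.
pooled = (signed_request + signed_training) / (n_request + n_training)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_request + 1 / n_training))
z = (p1 - p2) / se

print(f"difference in rates: {p1 - p2:.0%}, z = {z:.2f}")
```

With these assumed counts the 14-point gap is large relative to its standard error; the actual significance in the study depends on the real subgroup sizes reported in the paper.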

In other words, we had hit on a method to discourage people from participating on town committees rather than one that increased participation. In hindsight, we believe the training message backfired because it signaled that the task would be so onerous that one needed training for it. Because less affluent individuals tend to have less leisure time, it is understandable that they would be especially wary of volunteering for a time-consuming task.

However, we only have the benefit of hindsight because we conducted the pilot experiment. Imagine what would have happened if the town had simply taken our advice and created an expensive training program.

This article is based on the paper, ‘How Not to Increase Participation in Local Government: The Advantages of Experiments When Testing Policy Interventions’ in Public Administration Review. 

Shortened URL for this post: http://bit.ly/28PuDFF

Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Authors

Kevin Arceneaux, Temple University
Kevin Arceneaux is Professor of Political Science, Director of the Behavioral Foundations Lab, and Institute for Public Affairs research affiliate at Temple University.

Daniel M. Butler, Washington University in St. Louis
Daniel M. Butler is Associate Professor of Political Science, Director of the Laboratories of Democracy, and Weidenbaum Center research fellow at Washington University in St. Louis.

Posted In: Data science | Evidence-based policy | Evidence-based research | Government