
Daniel S. Schiff

Kaylyn Jackson Schiff

January 11th, 2024

In shaping AI policy, stories about social impacts are just as important as expert information.


Tools that use artificial intelligence (AI) present a challenge for policymakers – are they an opportunity, a threat, or some combination of both? In new survey research, Daniel S. Schiff and Kaylyn Jackson Schiff tested how US state legislators react to different ways of framing AI. They find that narratives about AI were just as persuasive as expert information, and that stories – in the form of discussions of AI’s social, ethical, and economic impacts – appear to help legislators get up to speed quickly.

Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy aims to address, and about which solutions are appropriate. Legislators around the world will have their hands full in the 2020s with regulating the many applications and implications of AI. Many key decisions are on the horizon as we set the agenda for AI policy.

For this reason, many observers and scholars have wondered about the various frames, narratives, and ideas circulating about AI. Who is promoting these ideas and why? Which are gaining traction and seem to be particularly influential? Are policymakers driven by the most persuasive stories, or by hard facts, or public scandals? As early scholars of AI policy, and Co-Directors of the Governance and Responsible AI Lab (GRAIL) at Purdue University, these are some of the questions we’ve been investigating.

Attitudes towards AI among state legislators

In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators, in part because this is an important and understudied group, and in part to provide a sufficient sample size for our analysis. In partnership with a leading non-partisan AI think tank, we randomly assigned one of six different email messages to the roughly 7,300 US state legislators. The emails provided information about AI and an invitation to register for a webinar designed for state legislators (which we hosted in December 2021).

Texas State Capitol: Photo by Kyle Glenn on Unsplash

One of the features of the emails that we randomly varied was whether the message emphasized a “narrative” or instead provided “expert information.” There are good reasons to think that policymakers rely especially on this kind of technical information when trying to understand a new policy issue, particularly something as technically complex as AI. Many government reports and programs focus specifically on increasing expertise, for example by launching task forces, funding fellowships, and creating government-industry-academia rotation programs. The calls for expertise are arguably unprecedented in technology policy. Meanwhile, other legislators received an email providing only basic background information about the organization along with a link to its website, which served as our “control condition.”

Narratives can be just as persuasive as expert information

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or one featuring expert information, assessed by whether they clicked on a given fact sheet or story, clicked to register for the webinar, or attended it.


Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the wrongful arrest of Robert Williams driven by faulty facial recognition, led to a 30 percent increase in legislator engagement compared to legislators who received only basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets.

We also examined whether the effectiveness of the strategy depends on the type of ‘frame’ used. Some emails focused on the economic and geo-political implications of AI, and others focused on social and ethical implications. Again, to our surprise, despite the framing of AI as a tool of innovation in much of policymaking discourse, legislators were just as engaged by discussions of social impacts. Regardless of whether one focuses on economic or ethical implications, then, stories appear to help legislators get up to speed quickly.

The overall capacity of the state legislatures mattered too (some US states have legislative bodies that meet for only a month a year, while others have many full-time staff and far more resources). High-capacity legislatures engaged much more overall. While this is actionable information for policy entrepreneurs who want to inform policymakers at low marginal cost, it presents a dilemma: arguably the legislatures with the least access to information are those most in need of it. This was confirmed by another of our findings: legislators in states that had not yet legislated on AI were significantly more likely to engage with the email messages (whether these provided expertise or narratives).

Implications for AI policy

Where does this leave us as AI policy increasingly takes shape, and advocates fight about human rights, safety, fair competition, and more? We think a few implications stand out.

1) Different groups benefit from different approaches. Targeting less experienced policymakers might be especially fruitful, but this could be challenging in policymaking bodies that lack resources. We need to keep experimenting to see what works.

2) Don’t underestimate the power of stories. Whether one is arguing to speed up or slow down AI, facts and arguments are only part of the case.

3) Different policy frames are still in play. Arguments about economic competitiveness certainly resonate, but discussions of ethical harm are equally engaging, and this was true regardless of political party. There’s room for more than one way to understand AI.

Policymakers, like the rest of us, can be influenced by both reason and passion. With any luck, we can use our collective wisdom to influence and inform them in a sound direction. 

This article is based on the paper, ‘Narratives and expert information in agenda-setting: Experimental evidence on state legislator engagement with artificial intelligence policy’, in Policy Studies Journal.


Note: This article gives the views of the author, and not the position of USAPP – American Politics and Policy, nor the London School of Economics.

Shortened URL for this post: https://bit.ly/48NeB8W


About the authors

Daniel S. Schiff

Daniel S. Schiff is an Assistant Professor of Technology Policy at Purdue University’s Department of Political Science and the co-director of GRAIL, the Governance and Responsible AI Lab.

Kaylyn Jackson Schiff

Kaylyn Jackson Schiff is an Assistant Professor in the Department of Political Science at Purdue University and Co-Director of the Governance and Responsible AI Lab (GRAIL).


