
Catherine Speller

June 27th, 2018

LSE Experts on the Truth, Trust and Technology Commission (T3): Dr Nick Anstead

Estimated reading time: 5 minutes

As part of a series of interviews with LSE Faculty on themes related to the LSE Truth, Trust and Technology Commission (T3) being run by the Media Policy Project, Dr Nick Anstead, Associate Professor in the Department of Media and Communications, talks to LSE MSc student Claudia Cohen about political communications and data-driven politics. Dr Anstead is leading the online political communications strand of the T3 Commission.

CC: Drawing on the 2015 UK General Election, you wrote about data-driven campaigning practices during elections. Thinking about the Cambridge Analytica scandal, how do you understand the evolution of these practices and the normative dimensions of data-driven politics?

NA: Let’s start with the important observation that using data in politics is not new. Political parties have gathered, stored and used data to target voters for a long time. If you go back to the type of political campaigning that has been going on for 50 years, when someone knocked on your door with a clipboard, they were also gathering data: trying to work out what sort of voter you are and whether you are likely to support them, and making sure to get you out to vote on the day. So data is not new, but clearly something is happening in terms of the quantity of data, the types of data and the ability to manipulate datasets.

To situate these processes in wider society: Facebook’s business model is about using the data we give it to target us with advertising, so why should politics be any different? We know from the history of political communication that techniques developed in the corporate sector cross over into the political communication space, often very rapidly. In that sense, there really should not be anything surprising here.

But we should care about this if we think these processes pose risks to, or undermine, the basic functioning of democratic life. We need to think about what sorts of communication are good – communication that helps people to be good citizens and helps politicians communicate in an effective and fair way – but also about the risks that new forms of communication pose. Plenty of countries have banned certain forms of political communication or regulate them heavily. The most obvious example in the UK is political television advertising: you cannot buy it, because it has been decided that this is not a healthy way to facilitate democracy, and this is reflected in electoral law. The US is the obvious example of a country that has gone in the opposite direction.

CC: What would you say to people who believe that data-driven companies played a role in recent unexpected election results?

NA: The first point – which is actually a very big problem – is that we do not know the effects of these types of communication, because they are very hard to research. It is really interesting that there have been hundreds, probably thousands, of stories on Cambridge Analytica in the world’s press in the past few weeks, but I don’t think I’ve seen a single one that discussed the content of the adverts. The fact is we know very little about this, so we can’t really make evidence-based statements about what is actually happening.

But a second important point I am nervous about – and I say this as someone who researches political communication – is that political communication runs the risk of becoming a cover-all explanation for much larger structural failures in liberal democracy. Maybe a different way of asking the question is not what we do about fake news, unattributable advertising and Russian bots, but why people might be susceptible to them. What is the problem we have with trust in our political communication process, and what is the problem we have with people feeling disengaged or disenfranchised in that process? What is it that actually makes them susceptible to these messages?

CC: Recent political campaigns, such as the EU referendum, have been characterised by a tendency to exploit the weaker ‘truth’ filter of online platforms to spread misleading information and fake news. Do you think that platforms should adopt a system for assessing the credibility of a news brand? Is it feasible?

NA: I am very nervous about giving individual platforms the power to decide what truth is. It makes me very uncomfortable to give a private company that level of power. I think there are different ways we can think about this, and again I come back to my point about why people seem so willing to believe these stories and what actually makes people susceptible to them. I think there are ways you can create a healthier information environment which seeks to embody particular values such as pluralism, but I don’t think that decision should be left in the hands of corporations.

Now, of course, it is the case that the algorithms created by companies like Facebook to organise content did not develop by accident: they are designed elements of the social media environment, there to organise information. But what I am keen to avoid is a situation where a company becomes the arbiter of what is fair and right.

CC: Then should the platforms and the government collaborate? During Mark Zuckerberg’s recent hearing at the US Congress, it seemed most people on Twitter thought US representatives were not best qualified to handle these issues.

NA: This is actually part of the problem, because these products are so complicated that they are very hard for non-experts to understand. To be genuinely transparent, users should understand what a social media platform is and how it works. If members of Congress with huge amounts of resources and research staff can’t get on top of this, then it is difficult for the average user signing up to Facebook’s terms and conditions.

That is clearly a failure on the part of Facebook as an entity. If it is not actually getting people’s genuine informed consent about how their information is used, there is a big problem.

We should be careful about this, because part of the Cambridge Analytica story is about misuse of data – the way in which data has been shared or repurposed or extracted from the Facebook environment. However, we have to be clear about what is a bug and what is a feature: this is an entire system built on using data to target people, so I think the starting point is making sure citizens understand that’s what they are doing.

How much do people care? In the real world, how much do people notice these stories about the use of their data? The point is that we are sitting here having a very sophisticated debate about this in universities, and maybe in some segments of government or among tech specialists, but ultimately Facebook is a mass product. How many of the people who use it genuinely understand the way in which the information they are shown is organised? I suspect it’s not many.

CC: In your ongoing research focused on the 2017 UK General Election, you wrote that political ads on social media were shown to users in a personalised way. This makes adverts difficult to scrutinise and to regulate. Do you think it is possible to monitor election advertising happening online?

NA: Yes, it absolutely is. Facebook is taking some steps in this direction, which is to be welcomed even if, again, it makes me a bit nervous, because Facebook has decided it is a good thing rather than civil society or governments deciding it. I think we hit on a classic problem – a Politics 101 first seminar problem – about what politics is, and from that how we define what should go in an archive of Facebook political adverts. Do we just want ads purchased by official political parties? Do we include ads purchased by NGOs and other groups that can campaign?

But what happens when we step outside the realm of formally funded politics regulated by the Electoral Commission? There is a much darker space where weird groups that are not officially registered, maybe not even in this country, are spending money. How do we get their advertising data? Do we use keywords? But then we end up with massive datasets with lots of false positives. So there are methodological issues here, but it is a really important question, because if you define politics in different ways you get quite different senses of what is happening.

There are privacy concerns too. One of the issues in researching Facebook is the privacy implications of giving data to academics. But we need to know whom the advertising was targeted at, because that gives us some level of transparency about the processes being used to communicate with the electorate. For example, we can only analyse what the messages are and identify contradictions between them if we also know which types of voters they are going to.

CC: So should these questions be considered from a global perspective, or at a national level?

NA: Transnational organisations like the European Union are very nervous about interfering in election law, because it is largely regarded as a national preserve. But clearly the nature of this challenge is transnational, and so it becomes very hard to square the circle. I suppose one way of putting it is: how do you give the Electoral Commission the power it needs to regulate UK elections, the integrity of which may be threatened by external actors and events? That is going to be a really difficult problem to tackle.

CC: You mentioned that you are not comfortable with giving too much responsibility to platforms, but what do you think should be the basic responsibilities and regulations they should adopt to promote a healthier political debate?

NA: The first thing we need is to understand what the problem is. I think that requires genuine transparency – transparency on the terms that regulators and, to some extent, the academic community require. Fundamental questions about democracy, elections and how they work must drive the nature of that transparency. That is a difficult conversation, and it has ethical concerns on the other side as well: when you give people data to do research, that data has to be secured and anonymised.

Then we need to understand the responsibility of the platforms versus the responsibility of governments, or other possible ways of regulating this.

Ultimately, the platforms need to take their role seriously. They need to be aware of the role they play in wider society and its processes. At the moment I think there is a tendency to retreat into PR-speak. They need to stand up and have a serious conversation about this. The platforms also need to be transparent about their processes and find effective ways of explaining what they are and what they do to their users.

CC: A few weeks ago, you led the workshop on online political communication for the Truth, Trust & Technology Commission at LSE. What were the key debates and questions?

NA: For me, the starting point isn’t actually about technology or even about platforms, but about values. What does a healthy liberal democracy look like right now? From that question about values, we can then go on to talk about how we have done things in the past, such as the TV advertising ban in the UK, and why that was a good embodiment of those values.

Then, how do we repurpose our current regulations to realise those values anew? Or do we need to throw the regulations out and start again? How radical do we actually need to be? Because ultimately I think the challenges we face aren’t that different: the ideas we want to enshrine institutionally in democratic life probably haven’t changed much, but the environment we try to enshrine them in has.

CC: Yes… there is this idea of the ‘golden age’ when political debate was healthier and better, but even in the past there were some similar problems.

NA: Indeed, and we need to be careful to avoid assuming that these problems are entirely new. There is also some continuity: we say platforms are transnational, really powerful and difficult to understand, but ultimately they are still companies with legal identities that can be regulated. We have seen in the past few weeks that they are companies with shareholders, and that when they get into trouble they react as traditional companies do.

This article gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
