
Julia Ziemer

March 22nd, 2018

LSE Experts on T3: Jean-Christophe Plantin


In the first of a series of interviews with LSE Faculty on themes related to the Truth, Trust and Technology Commission (T3), Dr Jean-Christophe Plantin talks to LSE MSc student Ruchi Hajela on Big Data, misinformation and transparency.

 

RM: Do you think the government and regulators have a grip on what to do with Big Data? And do you think GDPR is a good solution in that direction?

JCP: Two extremely large questions. First, we know that the architecture of digital platforms allows, and even emphasises, the dissemination of deceitful and unverified information, because it creates a lot of clicks and a lot of engagement. The structure of these platforms works against the notions that are at the centre of media regulation, such as diversity of points of view. For platforms, we are talking about a non-diversity that is technically enforced. This is a challenge. I am not expert enough on GDPR to answer on that topic.

Something related that I am interested in, and that we discuss in my MC434 platform and infrastructure class, is that a lot of the questions we have about the regulation of information intermediaries in Western countries, such as "How can we regulate them?", "How could we control fake news?" and "Should we enforce real-name policies?", are still open, with multiple solutions being discussed. Something I have recently become interested in is how these questions have already been resolved in other contexts, such as China. Hence the provocation at the centre of a project that Gabriele de Seta and I are currently exploring: it may well be that the future of platform regulation has already happened, and it is in China. We use this example as an extreme case of how regulatory ideas that we have here in the West could be implemented, and with what social consequences. Of course, it can't be applied the same way in other countries, but it's a cautionary tale for when we talk, for example, about implementing a "real name" policy.

 

RM: A lot of these platforms allow communication research to take place in a controlled, closed group environment, which is not available for others to examine. How do you think research could then be carried out in a more ethical way?

JCP: This is the black box question, a very good question. The way it is being addressed right now is this: it's not the world of research entering these companies, it's these companies entering the world of research. What we have been seeing in the last few years is the increasing importance of the research units and general research efforts of platform companies. Facebook, Twitter, Uber and others have the means to hire the best researchers and academics, providing them with great working conditions for a couple of years. It's of course very tempting for a lot of academics to go and work for these companies. The consequence is that such research remains within the control of these entities, which has major implications for access to data and replicability, among other things. On one side, we have extremely efficient research being done; but it's done behind closed doors, and traditional questions about replicability, access to data, independence of research, and so on, are still being asked of these new tech companies. And because their role in the realm of science is growing, these questions are becoming even more important.

In terms of solutions, it's looking quite bleak for the public research university, because there's no way it can financially compete with these entities. These companies are still going to attract the best researchers because we can't compete with them on salaries and working conditions. From a rational point of view, it makes sense to go there. In terms of regulation, we are talking about private entities. Forcing them to make data public would create a huge scandal and would be extremely hard to implement.

RM: Since we are talking about the black box question, there's always this discourse around making the algorithm transparent. Do you think that's even possible?

JCP: There was a very good article published in 2016 by Mike Ananny and Kate Crawford about the limitations of transparency, and it targets specifically this question. Frank Pasquale has also been working a lot on this question. The essence of these papers is that transparency is good, but that's just the beginning. Just by opening up the black box and making hundreds of lines of code available, you are not explaining algorithmic discrimination, because there is no line that says: "If belonging to a specific population, then discriminate". It is much more complicated than that.

Transparency is extremely important, but that's just a first step. More accountability from the user perspective, and more awareness of how algorithms work, are important first stages. There are so many people who still don't know that Facebook tweaks its algorithm, and that what they see is not an accumulation of posts from everyone they follow, but that curation happens. There are still a lot of miles to cover on the way to user empowerment.

 

RM: How could companies be incentivised to prioritise quality over quantity when it comes to engagement, given that currently it's the reverse?

JCP: There are two levers: the first is regulation and the other is economic incentives. The regulatory response could mean applying and adapting existing media policy frameworks to information intermediaries. Robin Mansell and Damian Tambini have been working a lot on this topic and are calling for "regulatory innovation",[1] to take a term from Robin Mansell. We are still talking about the same questions, but applied to different sets of media entities. A lot of things change, but the fundamental questions remain the same. We are still talking about a broadcasting entity, and in the end we are asking the same questions that we asked of TV or newspapers. We still ask them of Facebook and others, now that they are filling this position.

Second, economic incentives: these may be the most efficient in the short term. Here we are entering the realm of experimentation. I don't know what form it could take, but rewarding these companies for providing better information, instead of more engagement, would make sure that they regulate their content, and probably more quickly. It was impressive to see how fast Facebook decided to leave the information market, reducing the share of news items in our Facebook feeds. This debate has been building up for a long time: "Is Facebook a media company?" The decision to go back to a friend- and family-centred platform, rather than information dissemination, shows that when Facebook wants to change things, it can do so in a very fast and efficient way. If we can insert quality of content into their business model, then to me, it will be done in the most efficient way. However, this does not solve the question of what gatekeeper we need to guard against Facebook.

Dr Jean-Christophe Plantin is Assistant Professor in the Department of Media and Communications at LSE. He is also programme director for the MSc Media and Communications (Data and Society) and the MSc Media and Communications (Research).

Note: This article gives the views of the author, and not the position of the Media Policy Project nor of the London School of Economics. 

 

[1] Mansell, Robin (2015) Platforms of power. Intermedia, 43 (1). pp. 20-24. ISSN 0309-118X

About the author

Julia Ziemer

Julia Ziemer is Institute Manager at the Marshall Institute. She has previously worked at Polis, LSE's journalism think-tank, the charity English PEN and the Literature Department of the British Council.

Posted In: Algorithmic Accountability | Internet Governance | Truth, Trust and Technology Commission
