
Robin Mansell

October 23rd, 2020

How do you decide who to trust with your data? Professor Robin Mansell on why protecting privacy online is so difficult


This interview with Robin Mansell was originally published by The Privacy Collective. Robin is Professor of New Media and the Internet at the London School of Economics and Political Science (LSE). She is also a former chair of the International Association for Media and Communication Research, and a board member and secretary of TPRC, the research conference on communications, information and internet policy. Here she discusses why economic incentives are needed to encourage more responsible handling of citizens’ data, how technology platforms are allowed to grow as large as they do, and why the internet has been a runaway experiment.

Why does online privacy matter?

Society’s values get embedded in all technologies, including the internet and its design. And as long as we privilege the profit motive and the commercialisation model, it comes as no surprise that the internet has continued to evolve in that direction. As these companies have grown, there’s now much more attention being paid to the importance of privacy and the potential harm being done by the surveillance models that have been introduced.

You never know that you’re in trouble until something actually happens. So when people do experience fraud, or online bullying, or other such scenarios, that brings it to their attention. Some people are just fine. But a large number will find themselves in difficult situations, and it’s not clear where they should turn when that happens. We have invested in very few redress systems, other than through the tortuous process of the courts. The kinds of intermediary organisations that would be there in the offline world aren’t there, at least not on the scale that’s needed. It’s still very small scale and often an afterthought.

Can you tell me a bit about your research at LSE and your interest in the field of data privacy?

I teach courses on disruptive digital worlds, which address all aspects of innovation in new technologies, especially the social, economic and political implications. Within that, issues of surveillance and privacy are crucial – how do you create incentives for encouraging the private sector and government to behave more responsibly around the use of citizens’ data? And in that sense, I’m interested in the economic incentives but also in the governance and political issues.

How do you think laws like the introduction of the General Data Protection Regulation (GDPR) have impacted how companies are handling data, and how the public understands online privacy?

There is a higher awareness of data privacy and I think, amongst some segments of the population, there’s more of a propensity to ask questions. But in general, I think it’s a confusing situation for the public – they’re told to trust companies with their data, but then they hear about all of these lapses in data protection, covered by the media.

The issues there are not just technical and economic; they’re also to do with the implementation of legislation. As soon as you introduce one piece of legislation, there will be elements of the corporate world that will start looking for work-arounds. The GDPR is about changing the whole culture, the ways in which the corporates and the public sector are supposed to think about data. And I think the biggest deficit there is in training – not just around technical issues, but training to be responsible guardians of the kinds of data that are being collected.

You recently co-authored a book about how artificially intelligent platforms are collecting and processing data – what inspired you to write the book?

We started with the question of why these platforms grow as big as they do. Is that growth inevitable? Why have governments just let them grow without intervening until now? One reason is that most governments are interested in a kind of technology race, and so the Americans have pushed and allowed their Silicon Valley companies to dominate the global market. We also talked about totally different business models, because the fact that you give your data for free to platforms so that they can nudge you into buying more things from their clients is not the only business model in the world. There are collaborative, collective models, but very little investment has been put into them. And so no one really knows whether they could be sustainable or not.

Is public outrage what’s needed to make those alternatives a reality?

I think it’s part of the story. But collective action like that is usually stop and go. We see it in the environmental movement. It has an impact, certainly. But I think there needs to be a bigger impact whereby the institutions, whether they’re the courts or whether they’re regulatory agencies, actually get the message that they need to shape the behaviour of these data-collecting companies. They need to create the incentives where it makes economic sense for platforms to do business differently.

Do you think the pandemic has changed how people are thinking about surveillance and privacy? Are we prepared to accept more intrusion than pre-Coronavirus?

I think we might have been. But the trust that people might have been willing to put in government has been completely broken as a result of the A-level fiasco and the track and trace system. Why should people respect the notion that they should be monitored if monitoring leads to nothing? More illness, fewer kids in schools, people self-isolating because they simply do not know whether they have the virus or not. Once you lose trust, getting people to believe in a system that introduces more extensive and integrated data collection activities is difficult. That said – and this has been true for the last decade or more – people express concerns about their data privacy in surveys but will then go and use these apps or platforms without thinking about the consequences.

What are some of the consequences of sharing this sort of data in the longer term?

One thing to bear in mind is that the actual empirical evidence on whether or not sharing this data affects outcomes such as voting behaviour is ambiguous. There are some people who say absolutely it does, and other people who say no, it doesn’t. But I think what is more concerning is the general way in which the proliferation of that kind of information changes the whole sense of society and public discourse. The notion of what’s “good behaviour” and “good speech” in a democracy starts to change and become normalised. I certainly see this happening in the United States – it’s normal for politicians to be uncivil, and it’s therefore normal for people who follow them to be uncivil.

That is problematic, but it isn’t really to do with technology, in my view. It’s more about what behaviours we find acceptable in society and what behaviours we don’t. I think that’s gradually changing as people become more used to a really fractured populism, which is problematic. The fact that we have as much information, misinformation or disinformation as we have is a symptom of those changing values and the changing notion of what our culture should be about and how to be civil to each other.

You’ve described the internet as a ‘runaway experiment’. What do you mean by that, and what is needed to bring it back in line?

I think the big question now is where will the investment come to develop new ways of doing things? My hunch is that if businesses do get the message, if they start competing on whether or not they protect people’s privacy, then we might be on a different pathway in the future. But they can’t just treat a fine from the Information Commissioner’s Office as a cost of doing business. That’s no longer viable. On the regulatory side, the oversight of the behaviours of these platforms needs to be independent, rather than an arm of the state. We’ll never get it perfect. But it seems to me that if you invest in those kinds of institutions that have that responsibility and mandate to think about a variety of interests, including those of citizens, then you have at least a chance of shaping the online world in new ways.

The Privacy Collective are taking Oracle and Salesforce to court for illegally selling millions of people’s data, and they need your help! Check out how they’re fighting for people’s data privacy rights and support their claim by “liking” their website support button here.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Robin Mansell

Robin Mansell is Professor Emerita of New Media and the Internet in the Department of Media and Communications at LSE. She has training in several social science disciplines including psychology, social psychology, politics and economics and is a strong advocate of interdisciplinary research when it builds on the strengths of disciplinary inquiry.

Posted In: Internet Governance | Privacy
