The growth of online communication has raised important questions about privacy, free speech and the governance of the internet. In an interview for EUROPP, Katharina Borchert, the former CEO of Spiegel Online and current Chief Innovation Officer at Mozilla, discusses the pressing need for data protection online, and why tech companies have a responsibility to prevent the exploitation of their users.
What is the ‘open web’ and can it be economically sustainable?
It depends on which definition you use, but right now the open web is a distributed, decentralised, permissionless network that everybody is free to build on and create. It is economically sustainable for the time being, but the problem is that our primary funding model for it is advertising. This has turned out to come with incredible downsides: we did not anticipate the negative side effects and the fallout we are seeing right now, and so we did not think about rules and regulation upfront. We urgently need to think and work harder on this, and I’m excited about the growing number of pitches I hear for alternative ways of sustainably funding all kinds of open web activities.
Is the traditional online business model of selling user data now outdated?
Well, it is still very much alive, but today we are seeing all of the downsides and we cannot continue like this. It has made us incredibly vulnerable to all kinds of manipulation and exploitation. As users, we are generating more data than ever before in history, but we have almost zero control over it once we give it up in exchange for something. Or often enough we don’t even know that we are generating data which third parties are then mining. At Mozilla we’re blocking over ten billion third-party tracking cookies and crypto miners a day for Firefox users. But this is just one step in the journey towards giving users much more control. Most users today don’t know where their data goes the second they go online: who tracks what, who sells what kind of data, what profiles about them are around. I think the state of affairs is worse in the US because it’s a more permissive regulatory environment than the EU. There are more controls in Europe, but it’s a scary state that we have reached where I can, as a user, be micro-targeted for all kinds of nefarious political advertising.
There was a really great article by Kashmir Hill recently about how consumer scores are built. It is mind-blowing how much information data brokers and consumer scoring agencies have about you. This goes way beyond advertising. Consumer scores built on online tracking determine whether someone can get a mortgage and at what price they can get it. In the future, a score might determine whether you are eligible for health insurance. It has far-reaching consequences for everyday life. That’s not at all what we, the early adopters and builders, anticipated when we went online for the first time 25 years ago, giddy with excitement about the endless possibilities of this new platform. And this has to stop. We have reached a point where online advertising has created a surveillance economy with far-reaching consequences for our daily lives and future wellbeing. And I don’t think we can continue like that.
Who should take the lead in terms of regulating privacy? Should companies self-regulate or should governments be on the frontline?
I will try not to give an evil laugh here. I have lost my faith in companies’ ability and desire to self-regulate. Whether it concerns data privacy, diversity or labour conditions, we have seen self-regulatory efforts by companies fail time and again because in the end creating shareholder value wins out over any privacy, diversity or other considerations. There are plenty of examples of that, so my trust in self-regulation is minimal at this point. This is a space where governments need to step up.
As a European living in the US, I am quite excited about the trendsetter role Europe has taken, for once. Not everything about the EU’s General Data Protection Regulation (GDPR) is great and the implementation is not quite working as people had hoped, but I feel that despite all of the controversies it has generated, it triggered a global conversation about how to regulate data protection and data privacy. California has followed with a new data privacy law that has its flaws as well, but we cannot wait until companies put out flawless products and we cannot wait until governments have figured out a way to come up with “perfect” regulation. We need to do something and I am happy to see the first attempts around that. It really feels like a tipping point right now.
Twitter recently banned political adverts. Is this something all tech companies should consider and is there a better way to tackle the fabrication of facts?
That’s a difficult topic. As you might know, Facebook promised repeatedly to create more transparency around political advertising and make their political ads archive available for research, and then they built a super buggy API. They have been failing for over a year to deliver what they have promised. I’m not generally opposed to political advertising. I am a voter, I’ve been a political journalist, I have parents who are politicians, so I recognise this is an important tool. But right now, political advertising too often involves absolutely zero fact-checking paired with an increasingly lax attitude towards the truth and the ability to micro-target people. I would rather not have it at all than offer this capacity for dangerous manipulation at scale.
I think until there are better checks and balances implemented, companies should consider refusing to distribute political advertising. Especially since everybody claims it’s only a tiny part of the overall revenue. The current system is more harmful to our democracies and to our societies than it is helpful to the political process. And that is what we should care most about when it comes to political advertising. I understand all of the arguments about the difficulties in distinguishing pure political advertising from issue ads, but the fact that it’s difficult to get right does not mean it should be a free-for-all.
In the meantime, internet platforms have been really good at removing copyrighted material. There are complexities to that as well. But when it comes to protecting large-scale commercial interests, we can overcome difficulties and find solutions. So why are we shying away from the political ads issue? And I don’t only want to point fingers at Facebook. We’re suffering from a broad disinformation crisis that affects all kinds of platforms across the globe.
And this brings me to another aspect of the whole problem. Some of the most profitable companies in the world are outsourcing the burden of content verification to underfunded third parties like journalists or fact-checkers. And increasingly to users themselves. Call me a cynic, but I believe that’s the reason why some platforms are keen on funding digital media literacy initiatives. Yes, digital literacy is incredibly important and we have a lot to catch up on. But the burden can’t just be on the consumer and on third-party fact-checkers when the large platforms provide the space and all of the tools, and reap all of the profit, while escaping the liability. If you provide a platform that can be massively exploited, you are also on the hook for fixing it.
If we suppose that current business models are not going to last forever, what could be the next trend?
I would have my own company if I knew the answer to that! I don’t think the use of customer data for advertising and the selling of data is ever going to go away completely, and I don’t think it has to. But we as users deserve more control over our own data. I should be able to decide much more easily whether I want to use a ‘free’ service that I effectively pay for with my data, by giving companies access to my profile so that they can advertise. I have no problem with that as long as I’m in a position to give informed consent. I just have a problem with totally unregulated data harvesting that leaves me completely in the dark about who has my financial data, who has my health data, who has all of my web browsing history.
I think there are currently a lot of interesting start-ups and initiatives trying to tackle the funding gap or the news business model gap with cryptocurrencies, for example, to make micro-transactions possible at a minimal cost. That could be a viable system. There are also new subscription approaches worth looking into.
One of the things that I found incredibly compelling when I first went online in 1994 was the breadth of content just waiting to be discovered. And neither my location nor my spending power mattered much. So, what I don’t want to create is a class-based internet where the amount of privacy you have or the access to content you have depends entirely on the size of your wallet. But we also cannot continue the way we are going now, so I’m excited about the growing awareness and willingness to tackle this issue systematically. There will be many failed attempts, but there is a whole new generation of entrepreneurs being really smart about this and passionate about finding a solution.
Note: This article gives the views of the interviewee, not the position of EUROPP – European Politics and Policy or the London School of Economics.
Katharina Borchert is the Chief Innovation Officer at Mozilla. She is a German journalist and was previously the CEO of Spiegel Online.