Advanced Introduction to Platform Economics (Edward Elgar Publishing, 2020) is the recently published, highly thought-provoking book by Robin Mansell, Professor of New Media and the Internet in the Department of Media and Communications at LSE, and W. Edward Steinmueller, R.M. Philips Professor of the Economics of Innovation at the University of Sussex. In this interview, conducted by LSE PhD researcher Anri van der Spuy, we explore how Robin’s latest book fits into her many years of work on information and communications technologies and their governance; ask Robin to use the economic theories discussed in the book to illustrate some policy problems; talk about the much-talked-about The Social Dilemma (how could we not?); and turn to current regulatory trends in Big Tech and beyond to understand why leaving things as they are is a non-starter for her.
Q: You’ve been writing about various telecommunications-related topics for years. Lots of things have changed in that time: global Internet penetration, for example, grew from 1% in 1990 to more than 50% today. What are the most profound changes you’ve noticed in ‘the field’ (broadly construed)?
RM: I’ve seen a persistent expectation that the next generation of technology will address a whole host of problems, whether social or economic. In that sense, not much has changed. The social, cultural, and political consequences that came with advancing computerisation are not as surprising as many seem to think. There is a discourse that declares that this generation of artificial intelligence, datafication, Big Data, and other technologies has led to unforeseen consequences. But that’s not so. If you look at critical strands in the academic and policy literature, you’ll see that harms related to digitalisation and their consequences (whether for children or for adults) have long been signposted by many scholars.
What persistently surprises me is how difficult it is to be reflexive and to think of the causes and consequences of earlier generations of harms and what we can learn from them today. If those harms are magnified today, with 50% of the global population being online, this matters, and it is important to realise that it is not technology itself that is the cause, but the way in which we treat social and economic values. The consequences of today’s platforms have changed quite dramatically, either as a result of exclusion or as a result of inclusion on very unequal terms and very different capacities or literacies to deal with harms.
Q: This book seems quite different from your previous work. What inspired it?
RM: Teaching my course in this area inspired me, plus a nudge from the publisher. It’s not different. I’ve been critiquing dominant models of digitalisation in my policy and regulatory research for years. This has often meant criticising the dominant neoclassical market model, which governs much current discourse around digital platform growth and expectations about the ways in which technology will transform the economy.
Q: Picking up on the neoclassical approach, you argue in your latest book that digital platforms commonly have four elements: 1) content desired by users; 2) a business model that pays the costs of maintaining and improving the platform; 3) the collection, retention and use of data about users; and 4) the provision of auxiliary services. Can you tell us a little bit about how these relate to different economic approaches for understanding digital platforms?
RM: Neoclassical theory involves a set of limiting assumptions, meaning that much of what’s interesting about digital platforms is external to the theory. That is why we looked to institutional economics and to critical political economy for insight. We might have looked to sociology or psychology or other disciplines in the social sciences to understand the causes and consequences of platforms, but we were invited by the publisher to focus on the economics literature.
So we considered theories that relax or change the assumptions about how markets and innovation operate to understand how it is that something we have come to call ‘a platform’ becomes dominant, and what motivates those who provide them and those who participate on them.
There are two alternative theoretical frameworks we can use if we want to stay within economics. One is critical political economy, which situates platform developments within a Marxist analysis and often focuses on inequalities and injustices from the point of view of the labour theory of value. This approach is helpful in illuminating some aspects of the commercial datafication process, and especially so when it comes to examining unfair labour practices. In this case, capitalism is treated as the barrier to making changes in the way in which these platforms develop.
Another framework is institutional economics, which is very interdisciplinary. It similarly allows us to take social practices, political motivations and economic issues into account, but to put more emphasis on institutional or regulatory reform, albeit within the constraints of capitalism. This framework yields insights that speak more directly to those who are making reforms, such as introducing competition or privacy protection legislation, or doing something about curtailing surveillance and the use of facial recognition technologies. The institutional economics approach calls for a broader range or different kind of evidence than the assumption that capitalism inevitably leads to harms or that market dynamics will eventually address any harms. I don’t think the overthrow of capitalism is imminent, so institutionalist economics theory can be fruitful for understanding digital platforms.
Q: Perhaps we can explore these notions using an example: you mention ‘zero-rating’ in the book as a cost-saving mechanism sometimes used to improve Internet access in certain contexts. An example is Facebook’s Free Basics, a somewhat controversial app that provides users with access to a selection of online services which are zero-rated, meaning that users do not get charged (in monetary terms, at least) for the data used to access those services. While some have argued that this is a great way of getting price-conscious users to go online, often for the first time, others feel that Free Basics is an unacceptable compromise as it only allows users access to a ‘walled garden’ curated by Facebook itself. What would a neoclassical, a political economy, or an institutional understanding of Facebook’s Free Basics be?
RM: Under the neoclassical model, we’d ask: what are the costs of zero-rating and what are the revenues? There’s no question of moral judgment. Facebook might ask: what is Free Basics going to cost us and how much revenue will we generate from it? If a service like Free Basics is not going to generate a profit or attract users to other revenue-generating services compared to another pricing strategy, then Facebook probably won’t invest in it.
A critical political economist might take a different view. What are the human conditions for the supply and use of zero-rated services? Should people be included by being given commercial online access to (limited) content? It might be argued that a lot more people will be able to go online and use this to generate entrepreneurial services and some income, but the key issue in this framing is likely to be: under what kinds of labour conditions? In other words, the issue is not just access to content online, but the exploitative nature of the access conditions.
Similarly, the institutional economics view of zero-rating would go beyond costs and revenues to the supplier in the commercial market. For example, what is the moral justification for a corporate decision to restrict people’s access to information online? Whose responsibility is it to make that judgment? Should it be the government, a regulator or civil society representatives? An institutional economics approach enables us to tell a more nuanced story about a digital service like zero-rating, with a focus on complex motivations and risks, and on the social, cultural, political, and economic values that are at stake in making a judgment as to whether a digital service strategy should be welcomed or resisted.
Q: One of the chapters in the book investigates global perspectives on digital developments, including an analysis of the so-called ‘digital imperative’ which tends to assume global South countries must ‘catch up’ with digital developments in the global North. How can global South governments gain from private sector investment without inadvertently reinforcing the dominance of global North commercial models?
RM: Each penny spent on extending connectivity, for example, does not have identical motivations or consequences. Alternative models for promoting connectivity or digital inclusion, like community networks, are very inspirational, but they need funding. Parts of the world have been connected via community efforts on little more than a hope and a prayer, without the dominant involvement of private sector actors. Governments in the global South could invest in a variety of strategies to achieve connectivity and different uses of digital services, with greater emphasis on public, philanthropic or social entrepreneurial funding. They may in some cases have priorities to invest in something non-digital, but the key argument is that there is not just one pathway for taking advantage of digital technologies and what the commercial big platform companies offer.
Q: Moving to regulation, and as evidenced with pushback against Free Basics in India, we’ve seen a techlash over the past few years. One illustration is the Netflix docudrama, The Social Dilemma, in response to which Facebook has issued a statement (among other defences, it protests that ‘you are not the product’). What are your thoughts on the so-called ‘Tech Bro’ phenomenon which is on display in this docudrama?
RM: I found The Social Dilemma difficult to watch and was irritated that these people haven’t learned from history, even recent history, about the complexity of the environment they have been managing and organizing. They haven’t learned about the diversity of preferences, wants and needs of individuals and societies. It might be good to get across a message to the public that there are risks to their agency and autonomy from the algorithmic universe that the platforms have created. But the way in which The Social Dilemma assumes that no-one has previously questioned the trajectory of these developments and the problems for individuals and society seemed like an insult to the intelligence of a great many people.
Q: Recent years have seen new responses aimed at curbing platform power. Examples include the Global Internet Forum to Counter Terrorism (GIFCT), created in the aftermath of the 2019 Christchurch mosque massacre to address violent extremist and terrorist content online; the oversight board which Facebook is putting together to respond to content harms; and the new ‘Real Facebook Oversight Board’ launched by (primarily North American) academics and activists in an attempt to hold Facebook to account in the lead-up to the US presidential election. In the book you warn about the risk of ‘regulation by outrage’. Is that what these attempts are, or can these efforts and visions of platform governance work?
RM: There will never be unanimity about how to govern the digital platforms, but I do think there must be the possibility of a reasoned debate. Incentives for struggling towards more effective ways of governing platforms are, I think, stronger in an independent multi-stakeholder setting than when debates occur ring-fenced within the corporate or the government worlds. Chances of avoiding the excesses of censorship or unduly curtailing freedom of expression are higher in a multi-stakeholder debate, although tackling issues of illegal content also involves the courts.
I’m pessimistic about knee-jerk or ‘regulation by outrage’ responses that infringe on fundamental rights without being evidenced. I’m also not optimistic about steps to break up dominant platforms if it is assumed that this will curtail exploitative business practices around datafication. Additional measures are needed because without them, a smaller set of competing companies is likely to hone similar business strategies. A mix of coordinated measures is needed to move towards platform models that value human dignity, respect privacy and value freedom of expression.
Q: While a fundamental restructuring of data ownership and control might be what’s needed to address certain platform harms, how likely is it that we will reach agreement on this? What will need to happen?
RM: Giving individuals responsibility for the ownership, control and economic valuation of their data does not make sense in practice. The asymmetry of information between platform companies and individuals is too great. It does not seem that individuals have the time or sufficient knowledge to make fine-grained decisions about how or when their data should be used. The option of those decisions being made by a collective organisation on their behalf might be viable.
The default position on data ownership is that when data are generated, they belong to no one until they are appropriated by a company. Why not have gradations of data ownership arrangements that do not default to the individual, using a collective approach with possibilities for opting in in some cases? The current opt-out solution gives the power to the platforms, disadvantaging those without a reasonably high level of data literacy.
Q: Given what you have learned in researching this book, do you think the UK’s Online Harms legislative process is fit for purpose as ‘a new regulatory framework for online safety’?
RM: The detail of the proposed legislation is not known yet. I favour a truly independent platform oversight body – with multi-stakeholder representation and with a strong coordination mandate. Assuming the regulatory role will be given to Ofcom [UK communications regulator], much will depend on what positions are taken and what processes are put in place to address the long list of harmful information set out in the Online Harms White Paper. How legislation is implemented will be crucial.
There is a high risk that a regulatory agency will err on the side of censorship. However, I do think that targeted measures to put ethically responsible control mechanisms in place for child protection are necessary.
What we emphasise in the book is that the challenge of governing digital platforms is an ongoing struggle to ensure that public values, including fundamental rights, are upheld as digital technologies develop. The question is whether, on balance, we get outcomes that are more enabling for people than if the platforms are left largely to their own self-regulatory devices. I think leaving things as they are is a non-starter.
This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.