
Robin Mansell

December 10th, 2021

Long Read: The Blind Spots in Digital Policy and Practice


Estimated reading time: 10 minutes


As governments take steps towards regulating digital platforms, LSE’s Professor Robin Mansell argues that more attention needs to be paid to the implementation of policies, and that we need to expose the myths that hamper critical reflection on digital technologies and the harms they generate. This is a modified version of a Public Lecture in honour of the receipt of an Honorary Doctorate from the Faculty of Economic and Social Sciences and Management, University of Fribourg, Switzerland, 16 November 2021.

Governments would like citizens to believe that achieving ‘superpower’ status in Artificial Intelligence (AI) innovation is the best way to assure our collective future. AI applications are expected to solve health pandemics, the global climate crisis, the spread of viral misinformation and a host of online harms. It is claimed that AI’s algorithms are (or will be) trustworthy. The vision of our digital future is one where ‘data is now less the trail that we leave behind us as we go through our lives, and more the medium through which we are living our lives’, as the UK Information Commissioner’s Office put it. Facebook’s vision of a technology-enabled virtual ‘metaverse’ is said to be inclusive, creative, empowering and trusted. Government oversight of technology companies is expected to produce transparency and to control harmful outcomes. The European Union’s Artificial Intelligence Regulation proposal says that AI should be a ‘force for good in society’. It also says that uses of AI must be balanced so as not to restrict trade unnecessarily. If regulatory ‘guardrails’ are not yet in place, they soon will be, and corporate interests in profit will be balanced with public values of fairness, equity and democracy.

A more critical narrative is by no means absent. It responds to the fact that countries are seeking first mover advantage in exploiting digital technologies. There are multiple concerns about ‘deep tech’ and ‘smart data’ which ‘capitalise on data’. This is becoming the order of the day in the search for efficient, optimised decision outcomes. Large digital tech companies – the GAFAM and data analytics companies such as Palantir Technologies in the US or Itransition in Belarus – have lucrative contracts with military and public sector organisations and private firms. Critical assessments of AI innovation and the behaviours of these companies are linked to the observation that their primary aim is to make online ‘clusters of transactions and relationships stickier’ via a system of ‘protocol-based control’, as law professor Julie Cohen says. Their strategy is one of ‘hypernudge’. The risks and harms for children and adults are occurring because of failures in the digital governance of capitalist markets in an era of data colonialism or surveillance capitalism.

A Crescendo of Digital Policy Measures 

A crescendo of digital and AI harm mitigation measures is being put in place in the West, the global South and the East to govern AI and digital platforms. In the Western democracies, governments and civil society actors are acknowledging that requiring people to live their lives while being tracked continuously undermines human dignity. Four components of such measures are very prominent. One component is efforts to achieve a level market playing field to tackle what is variously called the ‘significant and durable market power’, ‘substantial market power’, ‘strategic market status’, ‘bottleneck power’ or the ‘gatekeeper power’ of the largest digital technology companies. In the US, the aim is to stimulate ‘free and fair’ competition through antitrust measures, with some attention to privacy law. In the UK and Europe, the aim is to restrain the behaviour of these companies. Measures include data portability and interoperability requirements and codes of conduct. However, each region or nation is striving – at the same time – for AI and digital technology market leadership. When the policy focus is on domestic or regional market competition, it is often confronted with scaremongering headlines such as ‘Splitting up Facebook and Google would be great for China’. Toughened-up competition policy approaches may make markets more contestable, but they should not create unnecessary restrictions to trade because the competitive playing field is global.

A second set of measures is platform content moderation. The aim here is to achieve fairness, transparency and accountability in digital platforms’ moderation of illegal (and, in the UK, harmful but not illegal) content. But, at the same time, the aim is to maintain favourable conditions for digital and AI innovation. A third component is data and privacy protection. The EU’s General Data Protection Regulation or GDPR has been a pace setter for legislation in multiple jurisdictions, but it does not apply to anonymised and pseudonymised data. This legislation has been accompanied by mounting calls for open data sharing in the UK and the EU to boost innovation and competitiveness. Last, but not least, are ethics and ethical principles. ‘High risk’ AI systems will require testing in the EU before an AI system is put on the market, and the UK is similarly promoting ethical AI innovation. Yet obtaining information about algorithms and complex machine learning systems requires tools and methods that are still in development: ‘for policymakers and practitioners, it may be disappointing to see that many of these approaches are not “ready to roll out”’, as the Ada Lovelace Institute put it. Furthermore, political geography professor Louise Amoore insists that a high level of algorithmic transparency is not feasible to achieve. Meanwhile, although some companies are trying to demonstrate their ethical credentials, governments are concerned that ethical principles should not stand in the way of innovation.

Five Myths about Digital Policy and Regulatory Practice

Will moves to govern digital technology companies with the aim of protecting the public interest succeed in limiting digital companies’ power and the harms associated with their digital systems? Digital governance requires principles and legislation, but it also requires implementation. Discussion about policy measures and regulation often neglects consideration of implementation practices associated with whatever digital governance regime is put in place – how implementation gives rise to outcomes, both expected and unexpected. This is a huge blind spot in the digital policy sphere. To understand the workings of policy and regulatory implementation, it is crucial to examine some of the myths that lead to a neglect of critical reflection on the implementation of policy and regulation.

Myth 1: Individuals make well-informed rational choices about their online lives on a level market playing field

One myth is that individuals make well-informed rational choices about their online lives on a level market playing field. Each individual is assumed to be able to acquire information about the activity of all others and to adjust their behaviour accordingly. This myth pops up when it is argued that consumers need to be given a ‘real choice’ in the online world. It persists despite evidence that people do not read or comprehend privacy statements. An emphasis on individual ‘real’ choice typically leads to calls for investment in digital literacy and improved critical thinking. But as my LSE colleague Sonia Livingstone says, although it is crucial to improve digital literacy, ‘we cannot teach what is unlearnable’. The myth of the level market playing field sustains claims that economic values will be balanced with citizens’ fundamental rights because companies will compete by differentiating themselves in ways that are favourable to citizens. This myth biases policy and regulatory practice towards assuming there is an imagined child or adult who is motivated to – and has or will have the opportunity to – make informed choices about their mediated environment. It biases policy makers to imagine that contestable digital markets will foster the public good. Even those seeking to protect citizens’ rights often succumb to this myth. For example, the UK-based 5Rights Foundation says that ‘in a more competitive market, services would compete to offer better alternatives to users who prefer not to share their data, to reduce exposure to distressing material, to respond to user reports more quickly and better uphold community standards’. Yet competition can also encourage a ‘race to the bottom’. The myth about rational choice and level competitive markets conceals the fact that the fully competitive market is an illusion.

Myth 2: Digital systems will allow individuals to control their online experiences in ways that are beneficial to them

A second myth is about technological fix responses to digital harms. Here the myth is that, in time, automated content moderation and algorithmic decision making will greatly reduce reliance on humans and that the costs to achieve transparency and greater individual control will decline. This myth suggests that digital systems will allow individuals to control their online experiences in ways that are beneficial to them. For example, design solutions will put users in control with ‘a real choice’ thanks to a variety of toolkits which let people decide what content they see and what data they release. The MyData initiative promises to yield ‘market symmetry’ between digital platforms and individual users. Tim Berners-Lee sees the use of ‘pods’ – personal online data stores – as enhancing individual control over data. These possibilities conceal the reality that people are still subject to private sector co-optation in an unlevel marketplace. Nevertheless, ‘trust’, ‘safety’ or ‘privacy’ by design are imagined to balance rights to privacy and freedom of expression with commercial ambitions. This myth supports claims that regulations will incentivise companies to design fairness and equity into their AI and digital systems. Yet, as these systems advance, even the designers understand less and less about what transpires between an algorithm’s data inputs and its outputs. Regulatory practice is biased away from understanding that the commercial datafication model itself is the problem.

Myth 3: There will be minimal ambiguity in the interpretation of evidence concerning company practices

A third myth is about unambiguous evidence and transparency, and it is connected to policy makers’ claims about the capacity of legislative measures to yield regulatory certainty. It typically is suggested that there will be minimal ambiguity in the interpretation of evidence concerning company practices. Yet, for example, regarding digital harms, in evidence before the UK’s House of Lords, the Minister for Digital and the Creative Industries said that the Online Safety Bill’s definition of psychological harm has no clinical basis. The UK government’s own impact assessment of the Bill concludes that ‘in some cases, it was not possible to establish a causal link between online activity and the harm’. Nevertheless, the myth is that there will be relatively little dispute about what digital operations give rise to a foreseeable risk of harm. This myth creates a bias towards assuming that research evidence will provide relatively clear guidance about what regulatory actions are needed, even in the face of conflicting regulatory objectives. Regulation also depends on evidence being made available if transparency is to be achieved. Penalties levied on companies for failing to comply with information requests are assumed to be able to elicit reliable information for independent audit. The myth is that companies will be responsive to requests and that regulators (and the courts) will have a solid and relatively uncontested evidentiary basis for their decisions. Not only does this myth help to distract attention away from the fact that both quantitative and qualitative evidence are probabilistic, it also conceals the interpretative – often politicised – frameworks that policy makers and regulators bring to evidence once it is produced. It encourages the claim that the implementation of regulation will lead to certainty (and fairness) for all parties, and especially for businesses.

Myth 4: In democracies, regulatory agencies act independently of the state and companies

Myth four is about regulatory independence. In democracies, rules, procedures and safeguards are said to enable regulatory agencies to act independently of the state and companies. This myth about independence varies in detail depending on the country or region, but there are signs of erosion. For example, in the UK, the Online Safety Bill gives the Secretary of State the power to give direction to the regulator to ‘reflect government policy’ or for ‘reasons of national security or public safety’. In practice, ‘independent’ regulatory institutions are dependent on the state and on companies in multiple ways. In some jurisdictions, legislation is opening the door for the state to define what is illegal (or harmful) speech. In the Western democracies, there are signs of declining, or even abandoned, procedural standards and interference by political actors in regulatory proceedings. Yet, the myth of independence persists, operating to encourage citizens to trust their political representatives to give primacy to their interests or, at least, to balance their interests with those of corporate actors through regulatory practice.

Myth 5: Once legislation is passed, it will be enforced effectively

A fifth myth is that once legislation is passed, it will be enforced effectively – capacity and skills will be upscaled to meet requirements. As an example of the need to unpack this myth, the EC’s report on GDPR implementation notes that implementation is still fragmented across the member states. Budgets for data protection increased by 49% from 2016 to 2019 and staffing for data authorities grew by 42%, yet caseloads continue to grow. The data protection authorities in Ireland and Luxembourg, where many tech companies are headquartered, lack the resources required to handle their cases. Meanwhile, the UK’s Information Commissioner’s Office found (in 2019) that the advertising industry was processing GDPR special category data without explicit consent and therefore unlawfully. Across Europe, the largest fines have been levied for security incidents, with many fewer fines for privacy violations, according to the EU’s data. The myth of effective regulatory enforcement also persists in lower income countries which are being unevenly integrated into a datafied world. The World Bank’s Data for Better Lives report acknowledges, for example, that for many lower income countries an integrated data governance system is an ‘aspirational vision’. Nevertheless, discussion about digital governance in many of these countries includes repeated assertions that legislation will protect citizens from digital harms and be implemented in the public interest.

The myth about effective enforcement is robustly defended notwithstanding the fact that companies themselves cannot control certain online behaviours, sometimes even resorting to wholesale Internet shutdowns. They are practising ‘deplatformisation’ when leading platforms shift right-wing actors to the edges of the ecosystem, denying them access to their app stores or ejecting them from their cloud services. These actions are in the hands of companies, rarely regulators, and this gives rise to questions about freedom of expression and censorship. Trust marks, security devices and new codes of conduct to curtail disinformation and to protect data are being introduced, and the development of the Internet of Things is seeing a lot of activity in this area. But enforcement, and the inadequacy of resources needed to achieve it, call into question assertions that legislative measures provide certainty for business and civil society stakeholders. High profile cases of illegal or harmful business behaviour are being pursued through antitrust actions and other legislative routes, and they receive much media coverage. There are instances of wins in competition policy enforcement, as in the EU’s case against Google’s use of its own price comparison service to win unfair advantage over its European rivals. There are signs of more vigorous policy enforcement under the Biden Administration in the US against the largest digital platform companies. However, few of these developments directly tackle the fundamental underlying problem of an AI and digital platform industry that is guided, ultimately, by profit making incentives, not by public interests and values.

Why do Myths Matter?

Inquiries into blind spots in policy and practice have a long tradition in communications research. Communications professor Dallas Smythe, for example, argued in 1977 that the best way to address such blind spots is to examine the principal contradictions of capitalism. He insisted that the question that needs to be asked is about the economic function that providers of communication services serve in reproducing capitalist relations. In the case of AI and digital platforms, what functions are companies and their regulators performing in the interests of capital? What norms, beliefs and practices are shaping the implementation of regulation when it is translated from formal legislative and regulatory discourse into practice? Blind spots are maintained by myths, and regression into myth is very common during periods of crisis, such as the current proliferation of illegal content and misinformation, biased algorithmic outcomes and ever more intrusive surveillance. Myths confer an illusory sense of mastery – for example, that we can control digital innovation in the public interest even in the face of mass exploitation of citizens’ data for commercial purposes. Myths naturalise, so that the basic digital and AI business models which rely on individuals’ data are not fundamentally challenged.

The five myths considered here – rational choice and level competitive market playing fields, technological fixes, unambiguous evidence and transparency, regulatory independence, and effective enforcement – matter because they feed the blind spot in digital policy and regulation implementation. They sustain the argument that commercial datafication and AI applications can be operated ‘for the public good’ in the capitalist marketplace, subject to oversight. These myths bias regulatory practice towards favouring risk metrics and away from exposing the principal contradictions in digital and AI markets. They favour a belief that there is no alternative to the expansion of AI and commercial datafication. This is not to suggest that no change will happen as a result of legislation and regulatory action. It will happen, and some of it potentially for ‘good’. But the myths underpin unrealistic expectations that regulatory implementation will be able to favour citizens’ interests. The blind spot about implementation ensures the predominance of the assumption that market equilibrium will eventually deliver optimal outcomes for all. It sustains an all too familiar technology push agenda which flourishes as a ‘social disease’, as cultural studies professor Toby Miller calls it. It supports the EC’s claim that ‘tracking and profiling of end users online is … not necessarily an issue’ – if it is done in a controlled and transparent way. In practice, ‘control’ and ‘transparency’ are mythologised constructs that need to be subject to much more scrutiny than is evident in today’s policy debates about AI and digital platform regulation.

Towards a Myth-Busting Agenda

It is being suggested that we need a new imaginary of our digital future: an imaginary of how technology can support, rather than undermine, democracy. For example, the UK House of Commons says, ‘we need to create a vision of how technology can support rather than undermine representative parliamentary democracy’. An article in the New Yorker asks ‘does tech need a new narrative?’ But more than a new imaginary or a new narrative is needed. The myths sustaining the notion that citizens’ interests are being protected because protective legislation has been (or is being) put in place need to be exposed. Without effective critique of the myths, new imaginaries will rest on practices guided by unrealistic assumptions about markets, individual rationality, decision making certainty, and effective regulatory implementation. Harvard Business School professor Shoshana Zuboff says ‘it’s not that we’ve failed to rein in Facebook and Google. We’ve not even tried’. And communication professor Mike Ananny says ‘they are ours to control — if we can figure out how to do it’. Figuring out how to do it by passing legislation is one thing. But attending to how new policies and regulations are being implemented in practice is crucial. This receives much less attention and is the subject of much less research than is the race to put digital legislation and regulations in place.

Disputes about the risks and harms of technology innovation in the media and communications field are not new. Warnings about race and gender discrimination and anti-democratic norms linked to information collection and processing have been common since the 1960s. The 1968 Council of Europe recommendation on Human Rights and Modern Scientific and Technological Developments, for instance, warned against ‘newly developed techniques such as phone-tapping and eavesdropping to obtain private information, and against subliminal advertising and propaganda’. Steps were taken. If they had not been taken, citizens in Western democracies might not have the limited protections they have today. Nonetheless, the view that it is acceptable to grant power to companies to use data as a basis for decisions that affect us all, so long as the state puts mitigating legislation in place, remains very prominent today.

Resistance to the industry (and state) visions and business models for AI and datafication is possible, but only if myths such as those outlined here can be dispelled. For researchers and policy makers, this means looking beyond the myths to critically examine promises about regulatory outcomes and the resources allocated to regulatory practice. It means examining the preferred knowledge claims that inform regulatory practice. It also means tracking instances of political interference in regulatory processes and monitoring gaps between promised regulatory outcomes and what companies do over time, including their lobbying stances and their self-regulatory measures.

Well-funded research to unpack these and other myths is essential if alternative citizen rights-respecting digital futures are to have a chance of flourishing. Such evidence can help to make the case for alternatives to the dominant business models of AI innovation and commercial datafication. The space for alternatives, such as public service media platforms, non-use of tracking devices, collective institutions to finance online platforms, and governance arrangements – beyond state and market – for the public good, is constrained by the persistent promise that the private sector will deliver public benefit when it is overseen by effective regulation. But flourishing alternatives to market-driven digital and AI offerings will be a long time in coming without evidence-based myth-busting – ‘technological somnambulism’ will be our future.

New designs, ethical principles and codes of practice will be formulated and legislated across multiple jurisdictions. But if the myths go unchallenged, regulation of AI and digital platforms will be persistently blind to the origins of harms to citizens and to the erosion of public values. As Chris Freeman, a leading science, technology and innovation scholar, said in the 1970s, if we substitute mathematics – in today’s language, data analytics – for human understanding, society will be at risk of a ‘reduction in social solidarity’. Social solidarity and democracy hang in the balance. They depend on whether greater attention can be given to the policy implementation blind spot.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Robin Mansell

Robin Mansell is Professor Emerita of New Media and the Internet in the Department of Media and Communications at LSE. She has training in several social science disciplines including psychology, social psychology, politics and economics and is a strong advocate of interdisciplinary research when it builds on the strengths of disciplinary inquiry.

Posted In: Internet Governance
