As artificial intelligence (AI) plays an ever greater role in how the public accesses information, LSE's Damian Tambini argues that how public service media choose to deploy the power of AI in distributing their content is crucial to the future of democracy.
Public service media – like the press and the judiciary – are part of a wider system of checks and balances in a democracy. When they think about how they should deploy powerful new AI tools in media distribution, they should keep this in mind.
Public service broadcasting became the norm in Europe in part because it became clear in the mid-twentieth century that broadcast media (particularly television) were powerful enough to undermine the legitimacy of democracy because they are capable of managing public opinion. The European mixed system of broadcasting – some private, some public, all regulated – was a way of limiting the potential for opinion control through broadcasting by particular interest groups, or by the state as a whole.
Most approaches to accommodating broadcasting power in a public service system worked pretty well, most of the time.
However, it is becoming clear that they will not work forever. The next generation of media will demand a different approach to their opinion-forming power because, with AI deployed in distribution, they operate in a fundamentally different way.
I think of the recent scandals involving social media – targeted disinformation, breaches of election law and of data consent – as the first steps of the march of the robots. Scholars like Shoshana Zuboff are centrally concerned with one thing: the loss of human autonomy and agency as smarter media gain the capacity to control the information available to each citizen by gathering ever more granular data about their users. This has transformed advertising, and it is transforming politics. Democracy faces a new vulnerability.
The next generations of AI will make targeted, personalised media even more sophisticated in their ability to know and manipulate citizens. This challenge requires active updating of European media systems, and public service media (PSM) should aim to be pioneers and models in the application of AI ethics in media distribution.
There is no shortage of AI ethics codes, and the EU has its own. Many have overlapping principles such as: transparency, explainability, keeping humans in the loop, robustness, reliability, human autonomy.
It is relatively easy to come up with these appealing words. The hard work is applying them in practice, and our PSM must be in the vanguard of this form of civic innovation, learning how to apply these principles in the era of powerful media.
As providers of services that impact democracy and fundamental rights, public service media will be classed as 'high risk' services under the EU's proposed new AI legislation, so they will be regulated to a high standard. We will also need rules and self-restraint regarding how PSM address their audiences: they should not simply take up the technologies of learning, profiling, targeting and cultivating that are being used by commercial operators. They must first and foremost address their users as active, autonomous citizens. Some suggestions of how PSM can lead the way in making best use of AI:
- PSM should pioneer the application of a broad and citizenship-oriented notion of AI ethics.
- AI can be used to create plurality at the level of the individual citizen: showing each of us a variety of viewpoints, identifying when citizens are falling into filter bubbles or rabbit holes and, as Natali Helberger has argued, hopefully hauling them out by enhancing diversity of exposure.
- AI can be used to undo the work of profiling – building anti-racist algorithms rather than relying on learning processes that hardwire existing inequalities and forms of oppression by learning from biased data, as the Council of Europe set out in its recent report.
- PSM can do more to give users ownership and control of their data.
- PSM should respect and positively promote basic rights (e.g. privacy and freedom of expression).
- Above all, PSM must place their citizenship role, and their role in protecting democracy, at the centre of their long-term projects for deploying these technologies.
- This means that PSM themselves need to be subject to new forms of accountability and empowered engagement between audiences and PSM, because all of this needs to be done in ways that are trusted.
In the age of streaming, the notion that PSM are too powerful may appear fanciful. The power of a single Netflix documentary to shape popular opinion on a global scale is certainly a new phenomenon that needs to be better understood. But the daily grind of media power, and the shaping of policy and electoral outcomes, still occurs at the national level. At a time when newspapers and commercial broadcasters are in crisis, the stable revenues of the PSM put them in a powerful position, particularly those that have successfully managed the transition to digital and online. Building civic media is expensive and long-term, and market actors focused on revenue maximisation through engagement will fail to fund it.
The latest EBU data show that nightly news television broadcasts are still the most used source of news and information in Europe – even for 16-24 year-olds. They are also among the most trusted. This puts them in a very strong position, and a crucial one as democracies struggle to define the role of their new media in deliberation and legitimation.
How the PSM choose to deploy AI power is crucial to the future of democracy. They need to think long term. Perhaps they should be asked by parliaments, including the European Parliament, to do so.
This is a summary of Damian Tambini’s comments at the conference Artificial Intelligence and the Future of Journalism, organised by the Portuguese Presidency of the EU, on 12 May 2021. This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.