
Daniel Mügge

November 26th, 2021

Cooperation à la carte is the way forward for EU AI regulation


Disputes over how to regulate artificial intelligence have rapidly risen up the global agenda in the last few years. But is a single set of global, or even just transatlantic rules the way to address the issue? Daniel Mügge argues that for the European Union, with its towering ethical ambitions, the answer may be ‘No’.

Earlier this year, the EU became the first major jurisdiction to publish a set of draft rules for artificial intelligence (AI). There are good reasons to keep close legislative tabs on AI technology. Concerns range from algorithmic discrimination and distorted democracy to ubiquitous surveillance and outright oppression.

Beyond these questions loom deeper concerns. A world saturated with machines able to predict and exploit our fears and desires fundamentally challenges human autonomy. At the same time, AI-led automation has the potential to deprive many people of their jobs once the government debt-fuelled post-pandemic boom peters out. Already, automation has driven a wedge between the upper and lower echelons of labour markets: aided by smart machines, many highly educated workers have seen their productivity rise, while jobs built around routine tasks have slowly disappeared.

The European Commission proudly touts a European third way on AI regulation: rather than letting algorithms serve the state (as in China) or large corporations (as in the United States), the Brussels approach aims to put them at the service of people – ‘trustworthy AI’, as the EU slogan puts it. This entails curtailing the freedom afforded to the masters of AI – be they governments or companies.

This European approach will encounter two key obstacles. First, by most measures the EU tech sector already lags those in China and America. Slamming the brakes on AI applications is bound to widen that gap. After all, algorithms thrive on ‘learning by doing’. The creation of a European data space and coordinated seed investment by EU member states is meant to boost the homegrown sector. But whether that’s enough to make European AI more than a sideshow on the global stage is anybody’s guess.

Second, American politicians and pundits alike increasingly frame AI as a security issue. Earlier this year, the final report of the US National Security Commission on AI portrayed AI development as a tech race between China and the United States, with the loser vulnerable to the winner’s AI supremacy. Such a dog-eat-dog tech world has little patience for the finer points of AI ethics. Hence, the report implies, Europeans would be well advised to climb into the passenger seat of a US-led alliance of ‘freedom-loving’ tech powers and leave the steering to Washington and Silicon Valley.

These forces are pulling European states in opposite directions. Ethical concerns inspire caution, competitiveness concerns encourage European boldness, while the security-framing of AI suggests Europe should play a junior role in a Western alliance. So, what is the way forward?

Cooperation à la carte

Much of the current debate on the global dynamics of AI regulation is unduly simplistic, treating ‘AI regulation’ as though it were a single, monolithic block, to be approached either alone or in cooperation with one or the other partner. A more nuanced perspective would recognise the enormous breadth of uses and regulatory concerns currently lumped under the AI heading.

To be sure, with open economic borders the effectiveness of European AI rules hinges on what other major jurisdictions do – the AI field is rife with ‘regulatory interdependence’. What good are EU rules against invisible discrimination in automated CV-sorting when clients can simply use US-based services instead? What good are tight privacy rules when we can import AI systems trained on data unethically harvested from citizens abroad?

Yet this regulatory interdependence is variegated: in some domains – say, the global spread of automated weapons systems – the EU is entirely dependent on cooperation with other major powers. In others, such as safety standards for self-driving cars, it can craft its own rules and testing procedures, quite irrespective of what others do. Some applications, such as the resilience of AI-powered infrastructure, have direct relevance for NATO as a security alliance; others, like rules for Twitter algorithms or automated loan approval, are not security-relevant at all.

This variegated regulatory interdependence – high in some cases, low in others – invites a differentiated approach to global AI regulation. Where global standards might align with European regulatory goals, they are clearly preferable. When such agreement is out of reach, second-best options emerge: mini-lateral solutions such as shared standards crafted in the newly minted EU-US Trade and Technology Council. Mutual recognition of different but equally stringent standards is also an option, as is support for private tech-industry standards.

But where EU ethical goals are best served by Brussels forging ahead on its own – as it did with the General Data Protection Regulation – it should have the courage to do so. There is no reason to think the EU will be well served by a one-size-fits-all approach to international regulatory cooperation across all AI uses. It should not blindly give in to the ‘are you with us or against us’ logic prominent in American thinking. The diverse ethical trade-offs AI raises weigh too heavily to be tackled well in a uniform international mould.

Always heedful of which approach allows it to maximise its digital sovereignty on any particular issue, the EU should cooperate where feasible and dare to go it alone where it must. Such an à la carte approach to global regulation has served the EU well in many other fields, from finance and food safety to pharmaceuticals. There is no reason to think it would fail in relation to AI.


Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Lukas on Unsplash


About the author

Daniel Mügge

Daniel Mügge is a Professor of Political Arithmetic at the University of Amsterdam.
