
Grace Nelson

November 29th, 2024

Risk-based regulation and the EU AI Act


Grace Nelson, a technology and telecommunications analyst and recent graduate of LSE’s Media and Communication Governance programme, writes about the EU AI Act and the implications of its risk-based approach in Europe and beyond. 

In July 2024, the finalized text of the European Union’s AI Act was published in the Official Journal of the EU, marking the end of a four-year legislative process and the start of the countdown to enforcement of what the EU claims is the world’s first binding regulation on artificial intelligence. Since the AI Act was first proposed in 2021, similar legislation has been introduced in at least eight other countries, including Canada, Brazil, Mexico and Vietnam. Each of these proposals adopts the EU’s “risk-based” approach to regulating AI, setting tiered obligations for developers and deployers based on the perceived and predictable threats their systems pose to the public. Though the concept of artificial intelligence and the logics of algorithms date at least to the mid-20th century, an apparent consensus on how to address the potential benefits and harms of the technology has emerged only in the last five years.

In some ways, the present settlement on AI regulation is entirely predictable. The AI Act and its progeny are laws built on the principles of consumer protection. From the General Data Protection Regulation (GDPR) to the Digital Services Act (DSA) to the AI Act, the EU has developed and continued to repurpose a mode of digital rulemaking entirely reliant on assessing and minimizing harm. Even when the definition of consumer protection is expanded beyond economic measures, laws such as the AI Act are limited to predictable and identifiable harms, be they economic, social or civic. Despite the ambitions of laws like the AI Act and Canada’s Artificial Intelligence and Data Act to address systemic harms, these protections are again limited to the knowable and legible dangers of technology, which often amount to risks faced by individual constituents. 

As the EU now turns to supporting the adoption of AI and other emerging technologies, including quantum computing, across its economy, the insufficiency of a risk-based regulatory framework has become more apparent. Despite repeated claims from governments around the world to be embracing a principles-based approach to AI governance – which would suggest a theoretical grounding and therefore greater flexibility in regulating against ill-defined harm – legislation like the AI Act adopts a tangible and finite list of limits on AI development. In failing to capture unknown, unseen or as yet unrealized risk, the law creates a vacuum that grants implicit permission for different forms of innovation: innovation that achieves an as yet undefined positive societal outcome, as policymakers hope, or innovation in malicious compliance and the continued exploitation of digitized publics, as has been the operating procedure of dominant platform companies under other laws. Any connection between the principles that policymakers champion for technological innovation – such as equity or inclusivity – and the reality those principles could create if realized has been left undetermined by risk-based regulation, which inherently refuses to specify how such principles are to be achieved, only how they are not.

Without a positive or reconstructive vision of a socially, politically and economically just AI (or any other emerging technology), embedded not only in regulation but also in industrial policy and in efforts to drive technological adoption, the forthcoming iterations of so-called innovative progress will only unveil novel and creative adaptations of existing, oppressive power structures.

Risk-based policymaking: social construction, individualized harm and implicit permissions 

The AI Act is structured around the assumption that it is possible to identify, isolate and outlaw the harms of the technology through a hierarchy of risk. This assessment of risk is based in part on the context in which the technology will be applied and in part on the capacity of the technology, especially when application contexts are not immediately known, as in the case of “general purpose” AI. The capacity of systems is determined through proxy measurements, such as the computational power used to train a given model, and in some similar legislative proposals, such as the proposal paper put forward by the Australian Department of Industry, Science and Resources, uncertainty around application contexts is itself taken as evidence of higher risk. A limited set of clearly defined use cases for AI is prohibited under the AI Act as instances of unacceptable risk, including social scoring tools and some real-time biometric identification systems.

Like its predecessors, the GDPR and the DSA, the AI Act is constructed around performances of compliance. Developers and deployers of AI systems captured by the regulation must record and document various markers of conformity, such as data governance policies and quality management processes. Though some requirements are new additions to the European digital compliance process, such as recordkeeping on the results of adversarial testing procedures, these socially constructed markers of compliance reflect the same self-fulfilling construction of policy that underpinned earlier iterations of law in the digital economy, especially privacy law like the GDPR. As the AI Act has borrowed from the consumer protection frame of its ancestors, so too has it borrowed their limited window of possibility for what laws in this arena are and can be. Like the performative markers of privacy law, the AI Act relies on consultative input from regulated sectors to develop codes of practice for acceptable commercial behavior in AI markets, and it is further informed by industry practices, like “red-teaming” and the transfer of liability through value chains, in developing a standard of effective compliance. The regulator, in this case the newly formed EU AI Office, will again be limited by the bounds of existing corporate practice and ultimately left dependent on the industry it regulates, lacking the threat of meaningful and punitive enforcement.

Beyond the hindrance of regulators through industry-built compliance, the AI Act also reflects the performative trappings of prior policy endeavors in adopting a primarily individualistic model of risk prevention and consumer participation. Much like the consent notifications of the GDPR, the AI Act requires outlets for consumer “autonomy” through transparency in consumers’ interactions with AI systems and opportunities to lodge complaints about noncompliant systems. Though the AI Act takes greater account of the potential for systemic harms, both in defining the types of technology it imagines capable of perpetrating such harm and in outlawing selected systemically harmful practices, the law still largely fails to speak beyond the rights of the individual or to work towards identifying collective or intangible harms. As Cathy O’Neil notes in her book Weapons of Math Destruction, the collective harms of algorithmic discrimination or dysfunction are often diffuse and difficult to identify, and therefore all the more capable of eluding a system based primarily on protecting individuals. This individualized framing, so common to the European digital rulebook, ultimately prevents effective interventions into the collective or “horizontal” public policy problems posed by the egregious power imbalances of the digital economy.

Both of these limitations contribute to the law’s pitfalls as a risk-based regulation crafted through cost-benefit analysis. As Julie Cohen describes, this mode of regulation compartmentalizes the uncertain and unknowable damages of a technology and sets them outside the bounds of the law. Cohen points out that this vacuum has frequently served to encourage excessive risk-taking that occurs within compliance procedures yet outside the prescriptive bounds of prohibited activities. Balancing harms against purported benefits in this way also contributes to the downplaying or devaluing of collective and intangible harms, which are already difficult to capture in systems built around the protection of individual consumers. In sum, the AI Act and the laws built in its image now emerging around the world fail to disrupt the systems of power that enable and thrive on the exploitation of the users and communities that choose to use, or are subjected to, these technologies.

Linking industrial policy to a positive vision for tech regulation 

A number of governments and governing bodies around the world have set out the values on which they believe AI systems should operate, including the United Kingdom and Singapore, which have prioritized voluntary governance and economic growth over binding safety regulation. Despite these visions of a technological future based on accountability, equity and fairness, among other principles, the current settlement on AI, pioneered by the AI Act, fails to provide the regulatory tools needed to realize a tangible and just future that enacts those values. Instead, the EU must prioritize a positive or reconstructive vision of what that future entails and, specifically, of whether and how technological innovation is to contribute to it.

As the European Commission shifts its focus to encouraging technological adoption in service of a more globally competitive economy, AI and other emerging technologies have been positioned as critical industries for public investment. In this context, industry stakeholders, led by platform giants such as Meta, have unsurprisingly begun to campaign for the rollback of existing EU regulation, citing the need for greater room to innovate. In other words, the existing powers of the global digital economy are seeking not only to preserve their standing, left largely untouched by the AI Act, but to reclaim complete control over the direction of technological development, and even to disregard the law’s effort to outlaw a selected set of unacceptable paths. At a moment when the EU has the opportunity to align industrial policy with digital regulation into a more comprehensive picture of a desirable digitized society, the bloc is well placed to ask who innovation is meant to serve. If the EU is in fact seeking justice on behalf of its constituents, as it often claims, its answer to that question must diverge from the present alignment, and regulatory action that enacts such a reconstructive vision must follow.

This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured: Photo by Igor Omilaev on Unsplash

About the author

Grace Nelson

Grace Nelson is an analyst with Assembly Research in London where she tracks global regulatory developments in technology and telecommunications. Grace also maintains her academic research interests on the intersection of media, technology and labor through work with the Our Data Bodies project. She is a graduate of the University of Pittsburgh and the Media and Communications Governance program at the LSE and a former staffer with the US Senate.

Posted In: Artificial Intelligence | EU Media Policy
