
Daria Onitiu

January 31st, 2022

How a human rights perspective could complement the EU’s AI Act


The European Commission has proposed an AI Act to regulate artificial intelligence technologies. Daria Onitiu argues that adopting a human rights perspective would allow the proposed framework to better protect the safety, autonomy, and dignity of citizens.

AI now permeates our daily lives. When you interact with a wearable or other smart device, for instance, algorithms analyse those interactions to find patterns in social behaviour and return recommendations suited to your current mood or lifestyle. Much has been written about how algorithmic personalisation systems like these raise new issues related to privacy, data protection, and discrimination.

These concerns motivated a recent European Commission proposal for an AI Act establishing a ‘uniform legal framework’ for ‘the development, marketing and use of artificial intelligence in conformity with Union values’. However, much groundwork still needs to be done to ensure that the EU’s AI Act will be a meaningful instrument for protecting an individual’s safety, autonomy, and human dignity in the big data age.

Bringing human rights back into the equation

It is important to highlight that the AI Act is not a new ‘data protection instrument’ that intends to redefine an individual’s informational privacy and control of data. Calls for the Act to heighten the protection of fundamental rights or to ‘put human rights first’ therefore overestimate the breadth of the proposal, which follows the spirit of product safety legislation. Rather, what matters is the kind of common principles that underpin the Act and whether new values inform its risk-based framework.

Indeed, a major flaw of the proposal is its emphasis on the intended use of AI systems. Personalisation systems, for instance, which create new vulnerabilities that go beyond an individual’s physical or psychological state (see Article 5(1)(a) of the AI Act), fall through the cracks in the proposal. It is also not clear why the proposal’s risk-based approach only kicks in when advancements in machine learning and computer vision methods are used by law enforcement agencies ‘to detect the emotional state of a natural person’ (Annex III).

What is clear, however, is that the proposal’s risk-based approach falls short of a common standard. Take the transparency obligation, including the notification duty for interactive systems, in Article 52(1) of the AI Act: it sets out only a vague ex ante duty and might even compound the problems surrounding the GDPR’s transparency obligations for profiling and automated decision-making. What this shows is that we need to bring human rights back into the equation of how emerging technologies shape our understanding of personal autonomy and human dignity, which are inherent in a system’s intended use.

Reshaping proportionality in the AI Act

A flexible approach that could maintain the spirit of the AI Act while providing legal certainty would be to tweak the risk-based approach by reconsidering the proportionality principle. The AI Act takes a ‘proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development.’ There is a spectrum along which we can measure the degree of risk of an AI system. However, we should not place this balance on a sliding scale but rather apply a set of normative principles irrespective of the technology in question.

Following this thought process, a proportionate approach would be to use independent normative values to develop trustworthy AI, with further prescriptive rules applied to each risk level. This dualism of normative principles and prescriptive rules is necessary to make EU values not only applicable but also resilient to future technological developments, including autonomous systems.

A mandatory due diligence requirement

A step in the direction of addressing the points above would be to lay the foundations for a mandatory due diligence obligation within the framework. Looking at the due diligence approach in the UN Guiding Principles on Business and Human Rights, which is a form of international ‘soft law’, providers would have to address any ‘actual and potential’ (Principle 17) impact of emerging technology. What is often overlooked is the UN Guiding Principles’ reliance on and direct references to international human rights norms. Adopting such a rights-based approach could act as an overarching principle regarding the development and marketing of new technologies.

The due diligence element invokes two requirements for corporate responsibility. One is procedural and the other is substantive. Turning to the former, previous research has highlighted that providers need to be bound by a set of procedures to identify human rights risks, such as a human rights impact assessment. Nevertheless, a human rights impact assessment is not only a prescribed procedure of ex ante and ex post monitoring. A mandatory due diligence obligation also provides an opportunity to reconstruct the role of human rights in the digital age.

This leaves the question of what the role of private entities should be, given the emergence of automated decision-making, and how collective values concerning privacy and data protection should shape an individual’s interaction with these systems. These are important questions that should underpin the EU’s AI Act. Ultimately, there is a need to weigh up the different implications of AI technologies with reference to the individual, rather than simply the technology in question.


Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Markus Spiske on Unsplash  


About the author

Daria Onitiu

Daria Onitiu is a Research Associate in the Edinburgh Law School at the University of Edinburgh. She is in the process of completing her PhD, which investigates the socio-legal impacts of fashion recommender systems, focusing on the meaning of privacy, autonomy and identity.

