There is a systemic lack of social rights reflected in existing human rights frameworks for addressing AI, finds Jedrzej Niklas, Research Associate at Cardiff University’s Data Justice Lab. He argues that AI policy debates must engage with social rights, distribution and the welfare state, and not stop at fundamental rights, transparency and fairness requirements.
Around the world, different countries and organisations have been developing policies and laws aimed at regulating the use and facilitating the development of Artificial Intelligence (AI). The European Union (EU) is at the forefront of those efforts, proposing a so-called ‘human-centred’ approach to AI that will allow sustainable and more ethical use of sophisticated data technology. While human rights are seen as the core of this debate, their scope within it is rather limited. This is especially visible in the limited recognition of social rights and distributional concerns as part of wider AI policy. Big questions about the interactions between technological policy and welfare states therefore emerge as a crucial element of this contemporary debate.
New policies and their limitations
Over the last couple of years, the EU has been working on a new policy framework for AI. The wide range of documents adopted by Brussels now covers budgets for developing AI, facilitating scientific progress and international cooperation. The EU sees AI as a technology that brings many benefits, especially in medicine (better diagnosis) and in addressing the climate crisis (more efficient energy use). At the same time, those benefits are associated with particular risks, such as discriminatory biases embedded in algorithmic systems or mass threats to privacy. That is why the EU is also developing regulation that will introduce a new governance structure allowing for the management of those risks. The plan is to establish a set of regulations that will apply across all member states and ensure that AI is implemented in a ‘human-centric’ way that is consistent with fundamental rights, as well as transparency and fairness requirements.
Problems with current AI policies
However, the development of this new set of AI-dedicated policies is also subject to particular interests, and prioritises certain values over others. In terms of human rights, digital policy making has for a long time been predominantly concerned with data protection and privacy. While both are important, they offer only a limited framework to name, define and address the social challenges related to technological innovations. Recent discussions about AI have also raised issues related to discrimination and inequality, but the debate is often reduced to challenging discriminatory biases in models or the lack of diversity among AI designers. Social and workers’ rights remain among the blind spots in the mainstream discussion about AI and fundamental rights.
Limited place for social rights
Commitment to social rights like the rights to social protection, healthcare and education has been strongly associated with the modern welfare state and the European culture of social justice. These rights provide standards for receiving public services and for various labour-related issues like fair wages, paid holidays, and the right to form trade unions or go on strike. AI can change the administration of social protection and labour relations. For example, there are existing AI systems that check eligibility, detect welfare fraud or calculate social benefits. Some of these, like the Dutch SyRI system or the Polish unemployment profiling mechanism, have created significant controversy and even led to major court cases.
Among other concerns is the net loss of jobs and job opportunities associated with AI and other advanced robotic systems, due to the automation of certain labour tasks. Digital technologies are also changing job quality and workplace relationships, as AI systems are used for employee management, supervision, control and work organisation. For example, in the United Kingdom, the retail company Tesco made workers wear electronic armbands to track their movements and productivity and monitor their breaks. AI technologies are also increasingly common in making decisions about hiring, wages, scheduling and work performance. In such situations, however, workers may lack adequate knowledge of how decisions about their schedules, salaries and even dismissals are made. All these problems are closely related to social rights, yet those rights are rarely used as a political framework for AI policy making.
Our recent study of public consultations on the EU’s draft AI legislation shows that such rights are barely mentioned by the social and political actors engaged in this debate, such as the European Commission, various NGOs or businesses. There is, therefore, a systemic lack of social rights reflected in existing human rights frameworks for addressing AI. An important exception, however, is trade unions, which explicitly make connections between workers’ rights and AI development. Recently, trade unions have also initiated important court cases over the algorithmic management of platform workers. Those efforts can lead to important changes in the political landscape around digital technologies. There are even some early examples of actual legislation that regulates the algorithms used to manage so-called platform work. One of them is a European draft directive on improving the conditions of platform workers, which tries to improve transparency in the algorithms used by digital labour platforms, ensure human monitoring of working conditions, and give workers the right to contest automated decisions.
Looking for Social Europe in digital policy making
Social rights require some reinterpretation in technology-centred discussions, along with new safeguards in the form of policies, laws and regulation that will maintain fair working conditions and an adequate level of social protection. Existing frameworks like data protection have an important role to play too, as they often serve as a starting point for talking about technologies and human rights. For example, the GDPR and other data protection legislation may serve as subsidiary legislation that enables the protection of particular interests (such as procedural fairness, transparency or non-discrimination) that are crucial in a social rights context. Exploring these new dimensions and the more ‘egalitarian’ side of data protection is required; however, new types of interpretation and an active role for data protection authorities will be necessary to achieve this.
There is also a need for new social policies that rework existing welfare structures, either to address the social transformations caused by technologies (e.g. in the context of work) or to mitigate harms that can be co-produced by technologies implemented in particular political contexts. Additionally, we need more research and conceptual work on discrimination that goes beyond biases and includes a focus on needs, material equality and the distributional policies of the state. All these efforts may help in understanding how the new wave of digital policies interacts with the more traditional Social Europe agenda, with its focus on distributional justice and the welfare state.
This blog post is based on our publications on social rights and data technologies at Data Justice Lab:
- What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate
- Working Paper: Social rights and data technologies: looking for connections.
- Working Paper: European artificial intelligence policy: mapping the institutional landscape.
- Working Paper: The Datafied Welfare State: A Perspective from the UK.
This article reflects the views of the author and not those of the Media@LSE blog, nor of the London School of Economics and Political Science.
Featured image: Photo by Michael Dziedzic on Unsplash