
Giulia Gentile

September 8th, 2023

Trial by artificial intelligence? How technology is reshaping our legal system


In recent years, we have seen artificial intelligence (AI) developing rapidly. The profound implications for areas such as art, politics, work and education are now becoming apparent, leading to a heated public debate about the best ways to harness the benefits of this new technology while mitigating the risks. In this Q&A with Maayan Arad, Giulia Gentile discusses how AI is transforming our judicial system and the impact this may have on public perceptions of justice.


Which aspects of AI development have you focused on?

My research explores the tensions between the advancement of AI in the management of the State and public values. I have been focusing on two areas in particular. First, I am investigating the impact of AI on the legal field, and in particular on judicial systems. I’m currently leading a project on the challenge of reconciling the introduction of AI tools in court systems with compliance with the rule of law in the EU.

Second, I am coordinating a collaborative research project funded by the EU Commission together with Orla Lynskey on the protection of public values in privately digitised public services. In recent years, several public services, such as justice or education, have been digitised through forms of public-private collaborations. Could these partnerships be liable to deprive public services of their “publicness”, by subtly subjecting them to private interests? How can we ensure that this cooperation does not undermine public values, such as transparency and accountability? Our research ultimately aims to identify the safeguards that should exist to ensure that public-private cooperation in the digitisation of the State respects public values.

What have been your key findings?

A first finding is that, due to the bias intrinsic to data, and, by extension, to AI systems, there is no guarantee that AI-assisted courts, or other aspects of the administration of the State, will comply with fundamental values such as judicial independence, non-discrimination and, ultimately, the rule of law.

A second finding is that the increasing use of AI in the public administration, and particularly in court systems, is inevitably transforming the role and the public perception of State authority in society. If judges are no longer purely human but AI-enhanced, for example, their function and social standing will inevitably change. These technological transformations are likely to affect public trust in the State, especially in courts. The introduction of AI in the courtroom could undermine some of the foundations of the administration of justice.

In what ways is AI currently being used in the legal profession and in the courts?

Law firms are increasingly using algorithms to draft contracts or skeleton arguments for litigation. AI also helps speed up due diligence and e-discovery (the automated classification of evidence by its relevance to the litigation).
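To make the e-discovery example concrete, here is a minimal, purely illustrative sketch of a relevance classifier of the kind used in technology-assisted review. All of the documents, labels and library choices below are assumptions for illustration; they do not describe any specific system discussed in this interview.

```python
# A minimal sketch of an e-discovery relevance classifier (illustrative only).
# The training documents and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical documents a lawyer has already reviewed and labelled.
documents = [
    "Email discussing the disputed supply contract terms",
    "Invoice for office catering, unrelated to the claim",
    "Memo on the alleged breach of the supply agreement",
    "Staff holiday schedule for August",
]
labels = [1, 0, 1, 0]  # 1 = relevant to the litigation, 0 = not relevant

# Convert the text to TF-IDF features and fit a simple classifier.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(documents)
model = LogisticRegression().fit(X, labels)

# Score an unseen document: the estimated probability that it is relevant.
new_doc = vectoriser.transform(["Letter about the supply agreement breach"])
print(model.predict_proba(new_doc)[0][1])
```

In practice such systems rank thousands of documents by this score so that human reviewers can prioritise the most likely relevant ones, rather than deciding relevance autonomously.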

When it comes to courts, AI has been employed at different stages of judicial proceedings. In the pre-trial phase, AI can be used, for instance, to guide the parties to a litigation in submitting their claim, or to distribute caseloads among judges automatically. AI could further be applied in the deliberative phase, for research support or to produce a skeleton judgment.

Who is responsible at the moment for creating these tools?

That’s a very good question: because of the far-reaching effects of AI on the delivery of public services and, more generally, on our democracies, the “who”, “how”, and “why” questions become topical. However, currently, the media and the public at large are not particularly concerned with the creators of AI tools used in the public field.

In the UK, the development of technologies for courts has occurred through public-private partnerships. Public procurement rules were used to identify the partners of the UK government in delivering these technological innovations. An example is the Money Claim Online system, which was developed in cooperation with a private company, Kainos. Yet the public procurement framework focuses essentially on value for money: the provider that best matches the requirements of the tender at the lowest price will be selected. The same rules are likely to be used for the introduction of AI tools in UK courts. Should we radically transform our public services and interactions with the state in areas so crucial for our democracy, such as the administration of justice, by merely assessing value for money? Certainly, keeping the costs of digitisation and AI tools low is important to avoid excessive spending. But giving absolute prevalence to the value of the transaction over public values, such as fairness, non-discrimination, transparency and accessibility, ultimately undermines the foundations of our democratic system.

Even more problematic from the angle of compliance with public values are the more informal kinds of cooperation between the state and private entities. For instance, during the pandemic, Google for Education provided its products to several schools in the UK. As a result of this cooperation, pupils’ data is processed by Google software, which now shapes the learning experience of thousands of children across the UK. Google’s offer of these services was not subject to any specific procedures or safeguards and, accordingly, public accountability appears reduced.

Under both scenarios, there was no careful assessment by the government or other public bodies of the implications of these technological developments for public services and values. Accordingly, the recent examples of public-private cooperation used to digitise public services in the UK appear to hinder foundational principles such as non-discrimination, fairness and accessibility. Undermining the public nature of these services has adverse effects for UK democracy: it risks worsening poverty and inequality in UK society, consolidating the market power of digital service providers, and threatening fundamental rights. To counteract this, I aim to devise a legal framework that can better embed public values in public-private collaborations in the digitisation of the State.

As this technology becomes more advanced, how do you expect it will change the nature of courts and how they function?

Any transformation brings novelties but also losses. Introducing algorithms into judicial systems will reshape courts: they may become more efficient, but inevitably “something” else will be lost. We need to assess the new dynamics and implications that AI technology will introduce into the justice system.

To begin with, we can observe a loss of “humanness”. Traditionally, the parties to a litigation are confronted with a human judge, an individual who can empathise with the parties’ problems while interpreting legal rules to solve the pending matter. When the adjudicator becomes a robot, an AI entity, the shared “humanness” among the actors of the litigation goes missing. This absence of humanness could influence the authority and the constitutional standing that courts have in society. An algorithm might be seen as more or less authoritative or empathetic than a human, depending on the context. One might imagine that systems affected by a backlog of cases of minor importance might benefit from a speedier resolution of disputes through algorithms. But where algorithms are used to decide cases that directly impact fundamental rights, this may cause more concern.

In China, for example, relatively simple claims are already being decided through algorithms. But in the case of criminal or constitutional cases, knowing that the ultimate authority on the litigation will be a robot inevitably makes a difference. How would you feel knowing that your liberty may be limited by an AI entity?

How might these changes redefine the role that emotions and other innately human characteristics play in judicial processes and decision-making in court? Could removing this element reduce biases and discrimination or might it have more negative effects?

This question goes to the heart of the conundrum of courts’ digitisation. AI could be a potential panacea for the psychological factors, and especially the bias problem, that shape litigation. If a judge, for example, has a certain preference, whether political or simply personal, towards certain parties, certain arguments, or certain interpretations of the law, he or she may be unconsciously biased, and in extreme cases judicial impartiality would be affected. The question of how to manage the psychology of judges is very complex. There are several rules already in place designed to measure and assess whether judges are independent and impartial, and the law provides safeguards to ensure diversity and inclusivity on judicial benches. AI could be seen as a solution to potential judicial biases by providing more reliable, mathematical input in the courtroom.

But that’s where the conundrum comes in. The reality is that AI systems are themselves biased. This is because AI relies on data produced by humans, which is imbued with human preferences and inclinations: data reflects pre-existing biases. So the alleged solution to potential biases in the mind of a judge may not be effective after all. On the contrary, AI could exacerbate biases and potential inequalities in the interpretation of the law, because an algorithm replicates the same biased pattern at scale, across every case it touches. So using AI to make judicial decision-making more neutral may not be the right way forward.
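As a purely illustrative aside, not drawn from the interview, the toy sketch below shows this mechanism in miniature: a model fitted to synthetic “historical decisions” that partly depended on a protected attribute learns to weight that attribute, and will then reproduce the bias in every future prediction. All data and variable names here are hypothetical.

```python
# Toy illustration (entirely synthetic data): a model trained on biased
# historical outcomes learns the bias rather than correcting it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)
merit = rng.normal(0, 1, n)      # the legally relevant factor

# Synthetic "historical decisions" that partly depended on group membership.
outcome = (merit + 1.5 * group + rng.normal(0, 0.5, n) > 0.75).astype(int)

# A model fitted to this history assigns substantial weight to `group`,
# so its predictions replicate the historical bias at scale.
model = LogisticRegression().fit(np.column_stack([group, merit]), outcome)
print(dict(zip(["group", "merit"], model.coef_[0])))
```

The point is structural: nothing in the fitting procedure distinguishes a legitimate pattern from a discriminatory one, so whatever the historical data encodes is what the system reproduces.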

How can we make sure that AI isn’t used in a destructive way now that it’s becoming more sophisticated and powerful?

The answer to this question is regulation. We need to regulate as soon as possible before AI reaches a level of advancement that could seriously undermine not only the job market, but also some very essential aspects of our society, such as a free press, access to the internet and the right to be informed. If AI is used to manipulate public information and create fake news, good, reasonable governance may become virtually impossible.

Additionally, we should reflect on which values should guide the use of AI, and which usages of this technology should be permitted because they benefit society at large.

Prohibiting certain uses of AI could be a possible way forward: singling out the fields in which AI should be banned because it poses excessive risks to human and democratic values.

I am pleased that the UK government recently acknowledged, for the first time, that AI bears very serious risks. And I very much hope that this will lead to rules that have regulatory bite in the UK.


For more analysis of the impact of AI, listen to the latest IQ podcast, which includes input from the author. This Q&A is an updated and edited extract from an interview conducted for the podcast.

Image credit: Photo from Shutterstock

About the author

Giulia Gentile

Giulia Gentile is Lecturer in Law (Assistant Professor) at Essex Law School. Her research focuses on the promotion of human rights in the digital society and EU constitutional law. She was previously a Fellow in Law at the LSE Law School.

This work by British Politics and Policy at LSE is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported licence.