
Giulia Gentile

June 8th, 2023

LawGPT? How AI is Reshaping the Legal Profession



Generative AI is causing many fields of expert and professional knowledge to reassess fundamental practices and their value. Taking law, a field that has long been warned of potential threats of automation, as a focus, Giulia Gentile outlines the socio-technical challenges facing the discipline and considers how new technology may rebalance relationships between lawyers, technologists and clients.


We are still wrapping our heads around the potential impact of AI on our society. No field seems immune from its influence, including the legal profession broadly understood. Lawyers, clerks and legal advisors across the world are exploring the implications of AI. Two questions currently occupy legal professionals, showcasing the Janus-faced power of this technology (and tools such as ChatGPT).

The first issue is how AI can be embedded in the legal profession to enhance the work of the legal professional. The most immediate advantage of AI is improving the efficiency of legal work through automation. Tools such as ChatGPT could, for instance, be used to draft contracts or skeletons of judgments. Law firms have already announced trials of robo-lawyers.

The second, more concerning, matter is how AI could carry out at least some of the tasks traditionally performed by lawyers. Could AI replace lawyers? Inevitably, AI is raising questions that are reshaping the legal profession as a whole. This is what I want to focus on here.

1. Pressure/Complexity/Interdisciplinarity

The advance of technology is increasing the demands on, and expectations of, lawyers. Most lawyers currently practising have no training in data science, algorithms or AI. An informational gap therefore affects the profession: any future use of AI will require lawyers to upskill and learn at least the basics of these technologies.

The degree of expertise required is debatable. Should lawyers become AI experts? This may not be achievable for the current generation. A reasonable alternative would be for lawyers to work more closely with AI experts, although this would make lawyers dependent, to some degree, on those experts. Moreover, such cooperation might not necessarily produce AI systems that are fit for lawyers' purposes – designing AI tools for lawyers is likely to take time and require input from lawyers themselves during training. Even then, by the time a system is trained, new AI technology may have emerged and rendered the purpose-built tool obsolete. If so, the interdisciplinary process – pressurised by the technological race – would begin again.

This means that AI may – rather than simplifying life in the legal profession – simply risk making it much more complex and guided by the demands of technology, rather than those of clients and the law.

2. Transforming the relationship with clients?

The complexity resulting from the influence of AI on the legal profession is also reshaping the relationship between lawyers and clients. This is because the way in which AI is used in the work of lawyers also influences the liability of legal advisors and their professional obligations towards their clients. For instance, lawyers have a duty to provide their clients with clear information that is not misleading (see for instance the SRA code of conduct). The use of AI may make compliance with that duty much more difficult, for two reasons.

First, lawyers need to understand AI tools to be able to explain them. As mentioned, the current digital literacy of lawyers may be limited and, in any event, even experts have difficulty tracing how an AI system has taken a decision due to the ‘black box’ problem. This compromises the ability of lawyers to keep their clients informed.

Second, AI systems such as neural networks may learn on their own and acquire features that were not envisaged at the initial design stage. In addition, it is well reported that AI tools may provide information that looks correct but is not – so-called hallucinations. This means that these tools may be unpredictable, with significant risks of errors and potential damage to clients. Recently, a US lawyer used ChatGPT to draft briefs, which went awry when the tool made up authorities to support its arguments. He is now facing sanctions.

As a result, the duty of lawyers to provide clear information and not to mislead their clients may prove challenging when AI is used professionally, at least with current technologies. These conditions ultimately affect the trust that the public places in lawyers.

3. Public perception of the legal profession using AI

No less important is the transformation of public perception and trust that the introduction of AI into the legal profession may instigate.

Clients who are being exposed to ‘robo-lawyers’, automated legal advice, or the risks deriving from the use of AI to draft contracts or other legal documents, may ultimately lose confidence and trust in the legal profession. The human dimension of the relationship between lawyers and clients is necessary to establish trust, understanding and empathy. Clients need to know that their lawyers understand their problems and will try to solve them. But this trust dynamic could be seriously undermined by robo-lawyers or over-reliance by lawyers on AI tools.

For instance, if a technology like ChatGPT were used to provide preliminary legal advice, it may not fully understand the demands of the client. It may also fail to offer the same level of empathy, human connection, and legal creativity as a human lawyer. The essential question is: would we feel comfortable taking what could be the most important advice of our lives from a robot? Troubling as the question may sound, some may answer “yes!”. Doubtless, even the best human lawyers are flawed, and not all human lawyers are the cream of the crop – so one may wonder whether we should give up faith in human lawyers. But, as with automation bias in other fields, this perspective ignores that machines will also be flawed (albeit in different ways, not all of which are currently known or properly understood). A challenge for the legal profession will be demonstrating to the public that it is growing with new technologies to address human fallibility. Doing so will involve passing between Scylla and Charybdis: the desire to “keep the law human” on the one hand, and blind faith in the “superior” powers of poorly understood and developing technologies (which will inevitably be flawed) on the other.

Putting finer points to one side, could we be as passionate about Erin Brockovich and Harvey Specter if we knew they were just cold, calculating machines?

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Google DeepMind via Unsplash.



About the author

Giulia Gentile

Giulia Gentile is Fellow in Law at the LSE Law School. Her research focuses on EU constitutional law and the promotion of human rights in the digital society. She is particularly interested in the use of AI in justice systems and the protection of the right to a fair trial in AI-driven courts.

Posted In: AI Data and Society | LSE comment
