
Beeban Kidron

May 28th, 2024

Tech tantrums – when tech meets humanity


Estimated reading time: 5 minutes


In advance of her LSE public lecture “Tech tantrums – when tech meets humanity,” Baroness Kidron outlines some of the themes she will expand upon in her talk, arguing that in the face of growing capacity and use of artificial intelligence, we need to tackle tech exceptionalism.

Update (6 June): to watch the recording of the event, please see Tech tantrums – when tech meets humanity | LSE Event (youtube.com)

Who hasn’t cringed at the sight of a desperate parent trying to prise a tablet from a toddler in a full, red-faced, decibel-defying ‘tech tantrum’? But the tantrums I am most interested in are of a different kind.

During my lecture at the LSE on June 5, I am going to talk about five red-faced, decibel-defying ‘Tech Tantrums’ performed by the tech sector when told that they must play by societal rules. These tantrums, two of which I was involved in and three that have been widely reported, reveal the disregard for societal norms by a sector that has successfully fought a battle to remain unimpeded by the rules that contain others. This is often referred to as tech exceptionalism.

In between, I will make the argument that if the definition of madness is doing the same thing twice and expecting a different result, then we are mad.

Mad to allow the innovation of powerful technology to rip through society without deciding which parts of our private and public lives are precious to us or need to be in public hands. Even as we count the cost to mental health, to the public purse, and to democracy, and reflect on previous decades of tech exceptionalism – which has privatised the benefits and outsourced what the sector euphemistically calls ‘negative externalities’ – we are greeting AI as if we have not been here before.

When the UK Prime Minister called an AI summit in late 2023, the tech bros came to town shouting about the existential threats contained in the products and services they were still building – with no intention of putting their dangerous tools down. They appeared as pantomime dames shrieking ‘over there,’ ‘over there,’ then lifting our wallets while our eyes were averted.

Many of the problems of AI are longstanding: bias, mis- and disinformation, algorithmic unfairness, false positives and missed negatives, echo chambers, bugs, and floods of unwanted material on services with a hard-to-find off switch. Some are new, or supercharged by the sheer power of generative models that will transform the job market with no plan for what the mass of unwaged will do, or will create misinformation at a scale that will confuse us all. There are infinite possibilities in which not being able to distinguish between truth and fiction would upset us: if a politician has or has not been found in a web of corruption, if you find someone else being credited with your life’s work, or if the voice of your dead husband rings you up for a chat. And what of those lethal robots – who is safe if the tap of a space bar is all that it takes to become collateral damage?

I am bewildered by how few discussions about AI are about governance and how many are about existential threat, when the existential threat to society is not the technology, but tech exceptionalism. Millions of networked computers sharing information in real time, able to act in a fraction of a second, may in the future turn out to be smarter than us, but they are already more powerful.

The ability to know a great deal and act on it at speed does not make a decision maker – either AI or human – moral, correct or a gift to humanity. That depends on the quality of what is known and the purpose to which the decision is set. As it stands, we are on course to leave both the quality and the purpose in the gift of a handful of powerful commercial companies, led by individuals with outsized voting blocs and few checks and balances.

My Tech Tantrums lecture will start with me getting a bollocking from a Tech representative who believes that the sector is above the law, and end with Geoff Hinton, the godfather of AI, telling Parliament that time is running out. In between, I make the case that the future of AI is not in the future. It is about what we do right now.

Baroness Kidron will speak about ‘Tech Tantrums’ at the LSE on June 5 at 6.30pm. For more information and registration, please see the event listing here. This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Beeban Kidron

Beeban Kidron is a leading voice on children’s rights in the digital environment and a global authority on digital regulation and accountability. She has played a determinative role in establishing standards for online safety and privacy across the world. Baroness Kidron sits as a crossbench peer in the UK’s House of Lords. She is an advisor to the Institute for Ethics in AI, University of Oxford, a Commissioner on the UN Broadband Commission for Sustainable Development, an expert advisor to the UN Secretary-General’s High-Level Advisory Body on Artificial Intelligence, and Founder and Chair of 5Rights Foundation. She is a visiting Professor of Practice at LSE, where she chairs the research centre Digital Futures for Children, led by Professor Sonia Livingstone, and a Fellow in the Department of Computer Science at the University of Oxford. Before being appointed to the Lords she was an award-winning film director and co-founder of the charity Filmclub (now Into Film).

Posted In: Children and the Media | Communication Technology
