
Rachel Coldicutt

February 11th, 2025

Winners and Losers at the AI Action Summit – who does the UK stand with?



The vision of the Paris AI Action Summit is of inclusive and sustainable AI. But the current AI champions, based in the USA, are pushing to accelerate their global dominance. Rachel Coldicutt sees the UK’s AI Action Plan as siding with the latter vision, but argues the UK has more to gain from the Parisian vision of a less extractive and divisive AI future.




The AI Action Summit has arrived in Paris – and it couldn’t be more different to the 2023 edition, hosted by former UK Prime Minister Rishi Sunak. While the UK Summit took place under wraps at former code-breaking HQ Bletchley Park and focussed on “existential risk”, the Parisian version is being held in the heart of the city at the Grand Palais, with a commitment to “inclusive and sustainable AI”.


This openness has translated into a veritable circus of side events, and for the last week the city has been home to a slew of conferences, specialist mini-summits, and drinks receptions, where the great and good of international AI development have gathered to set out their competing visions for the road ahead. But even this convivial atmosphere has not been conducive to agreement – and the conclusion of the Summit is likely to be a visible fracturing of global political intentions, confirming AI’s status as not just an expansive set of technologies, but as a valuable, and hotly contested, political imaginary. 


AI as magic vs AI as data centres

AI’s dual identity is apparent from the very different conversations taking place at and around the Summit. As Meta’s Yann LeCun has been debating whether large language models will unlock artificial general intelligence (spoiler, no) and Hugging Face’s Sasha Luccioni and others have launched an AI Energy Score Card, the political debates have little to do with the actual technologies at play. While a leaked draft of the French declaration pushes for a competitive global market that respects planetary boundaries, the US interest is in doubling down on world domination, with the UK appearing to follow close behind. And while the speeches being given in Paris conference halls this week may be full of computing jargon, this is not a contest for technological dominance but a race for good old power and money. 

Rather than a pitched battle, the global AI policy landscape currently resembles a WWE wrestling match – full of set pieces and bravado, with a seemingly inexhaustible cast of players. While the underlying technologies depend on extremely tangible assets such as data centres and computing power, the stories of success that play out in media headlines tend towards the magical and evanescent, sometimes operating at the very edge of human imagination or comprehension – with many of them yet to be achieved. 

The UK’s policy on AI offers no protections against Silicon Valley capture

In the UK corner, Prime Minister Keir Starmer announced an AI Action Plan on 13 January, making the fairly terrifying promise that “Artificial intelligence will be unleashed across the UK”, an unfortunate framing that sounded more like a threat of plague and pestilence than the beginning of “a decade of national renewal”. Written by venture capitalist Matt Clifford, the 50 recommendations were given a thunderous welcome with the unhesitating muscular language of physical domination: by “throwing the full weight of Whitehall” behind AI, the Action Plan “mainlines AI into the veins of this enterprising nation” to deliver heroic ends such as “driv[ing] down admin for teachers” and “putting more money in people’s back pockets” (never mind, I suppose, their front pockets or purses). This is a vision predicated on inviting inward investment from Silicon Valley companies, a plan in which AI happens to the people of the UK by the billion-dollar companies with the power to invest. To prove the potential, the unveiling of the plan was backed by the announcement of £14bn of new investment.


Scarcely had the fireworks of the UK AI Action Plan’s entrance into the arena subsided when, on 23 January, President Trump, flanked by the CEOs of Oracle, SoftBank, and OpenAI, announced the $500bn Stargate initiative to overhaul US AI infrastructure and repealed Biden’s executive order, initiating a deregulatory environment intended to “enhance America’s AI dominance”. 

And then, just a few days later, the Chinese open-weight model DeepSeek appeared as if from nowhere – wobbling the Nvidia stock price and blowing open the market in what, in wrestling terms, would be regarded as a legendary debut. 


Is there still space for the summit’s ethical vision of AI?

As the UK, US, and China stumble about in the debris of this global show of machismo, the inclusive, sustainable and pro-democracy vision promoted by the French Summit has appeared in another ring altogether – seemingly taking place in a completely different competition. Criticised by some for being too soft and failing to deliver on the regulatory promise of the EU AI Act, the French vision has the potential to change the whole game by simply refusing to take part. While the “big AI” model of LLMs and foundation models is angled towards monopolistic dominance of a few companies, predicated on there never being quite enough data, compute, or electricity, the French approach is one of pluralism and sustainability. The draft statement sets out its priorities as:

  • Promoting AI accessibility to reduce digital divides;
  • Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all;
  • Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development;
  • Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth;
  • Making AI sustainable for people and the planet;
  • Reinforcing international cooperation to promote coordination in international governance.

Whether “open, inclusive, transparent, ethical, safe, secure and trustworthy” AI is even possible is another matter altogether, but it will be very interesting to see which governments sign up to the ambition to achieve it – and which decide to stay toughing it out in the arena in the hope of delivering a beat down to their opponents. If the only possible future for global AI is extractive and divisive, the pursuit of dominance will result in painful and costly real-world conflict – the very opposite of the “national renewal” that Starmer hopes these technologies will bring about. 


All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.

Image credit: Ployker on Shutterstock



About the author

Rachel Coldicutt

Rachel Coldicutt is a researcher and strategist specialising in the social impact of new and emerging technologies. She is founder and executive director of research consultancy Careful Industries and its sister social enterprise Promising Trouble. She was previously founding CEO of responsible technology think tank Doteveryone where she led influential and ground-breaking research into how technology is changing society and developed practical tools for responsible innovation.

Posted In: Global Politics