
Tom Kirchmaier

April 15th, 2025

AI for Good – impact through ethical technology 


Estimated reading time: 5 minutes


Artificial Intelligence is set to change our lives, but there are many doubts about whether this will be for the better. Looking at a non-surveillance application of AI used to support crowd management, Tom Kirchmaier makes the case for “AI for Good”: using the technology to have a positive impact on people’s lives.




Artificial intelligence (AI) is (probably) one of the most transformative technologies of all time. From automating workflows to analysing complex data in real time, AI has the potential to reshape entire sectors, including healthcare, education, climate science, and public safety. But with such power comes responsibility. The concept of “AI for Good” has emerged as a guiding principle for those seeking to ensure that AI serves society ethically, inclusively, and effectively. 

At its core, AI for Good is about aligning technological innovation with positive societal impact. It encourages developers, researchers, policymakers, and industry leaders to build systems that protect human rights, promote transparency, and tackle real-world problems, especially in areas where traditional solutions fall short. One such area is public safety and crowd management. 

Defining “AI for Good” 

The phrase “AI for Good” was popularised by the United Nations as part of its effort to harness advanced technologies to meet the Sustainable Development Goals (SDGs). It advocates for using AI not just for efficiency or commercial advantage, but to improve lives, particularly in underserved communities and high-stakes environments. 


AI for Good demands accountability, inclusivity, and foresight. It emphasises transparency in how systems make decisions, safeguards to avoid bias or misuse, and collaboration across sectors to ensure deployment meets actual needs. Whether used in disaster response, medical diagnostics, or urban planning, AI for Good is fundamentally about human dignity. 

Crowd management through AI, without surveillance

A compelling example of AI for Good in action is the work of N-AI, a technology project focused on ethical crowd intelligence. In a collaboration between the London School of Economics (LSE) and Greater Manchester Police (GMP), we deployed an AI-powered platform to support crowd management at the Manchester United vs Arsenal football match in March 2025.

The challenge was to enhance public safety at a large-scale event without compromising individual privacy or over-relying on traditional surveillance. We provided real-time analytics based on high-definition drone footage, processed securely via encrypted connections and anonymised through a “dots on a map” visualisation. This allowed decision-makers to track crowd density and movement patterns while preserving anonymity. 
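To make the “dots on a map” idea concrete, here is a minimal sketch of the kind of aggregation involved: anonymous detection coordinates are binned into a coarse grid so that only crowd density survives, never imagery or identities. The function names, grid parameters, and data below are illustrative assumptions, not the actual N-AI implementation.

```python
# Hypothetical sketch of "dots on a map" anonymisation: detections from drone
# footage are reduced to anonymous (x, y) coordinates, and only aggregate
# density per grid cell is retained. Names and parameters are illustrative.
import numpy as np

def density_grid(points: np.ndarray, extent: tuple, cell_size: float) -> np.ndarray:
    """Bin anonymous (x, y) points into a per-cell headcount grid.

    points:    array of shape (n, 2), metres in the venue's local frame
    extent:    (width_m, height_m) of the monitored area
    cell_size: edge length of each square grid cell in metres
    """
    width, height = extent
    nx = int(np.ceil(width / cell_size))
    ny = int(np.ceil(height / cell_size))
    grid, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=[nx, ny], range=[[0, width], [0, height]],
    )
    return grid  # people per cell; divide by cell_size**2 for people per m^2

# Example: 500 anonymous dots scattered over a 200m x 100m concourse
rng = np.random.default_rng(0)
dots = rng.uniform([0, 0], [200, 100], size=(500, 2))
grid = density_grid(dots, extent=(200, 100), cell_size=5.0)
print(grid.max() / 25.0, "people per square metre in the busiest cell")
```

Because only the grid leaves the processing pipeline, operators see where the crowd is dense without ever seeing who is in it.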


What makes our approach stand out is its privacy-first design, academic grounding, and real-world utility. The system delivered actionable insights, such as identifying bottlenecks and predicting potential crowd surges. That enabled GMP to deploy resources efficiently, reduce response times, and avoid escalation. Throughout, the technology was not used to replace officers but to empower them with better information. 
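A similarly hypothetical sketch of the alerting step: flag grid cells that exceed a safety density ceiling or that are filling rapidly, the kind of signal that could cue a control room to a forming bottleneck. The thresholds and names here are illustrative assumptions, not figures from the GMP deployment.

```python
# Hypothetical bottleneck/surge flagging over successive density grids.
# Thresholds are illustrative, not values from the actual deployment.
import numpy as np

CELL_AREA_M2 = 25.0   # matches the 5m cells in the sketch above
ALERT_DENSITY = 4.0   # people per m^2; a commonly cited crowd-safety ceiling
SURGE_RATE = 0.5      # people per m^2 gained between frames (illustrative)

def flag_hotspots(prev_grid: np.ndarray, curr_grid: np.ndarray) -> np.ndarray:
    """Return (ix, iy) indices of cells that are over-dense or filling fast."""
    density = curr_grid / CELL_AREA_M2
    growth = (curr_grid - prev_grid) / CELL_AREA_M2
    return np.argwhere((density >= ALERT_DENSITY) | (growth >= SURGE_RATE))

# e.g. for ix, iy in flag_hotspots(grid_t0, grid_t1): alert stewards to that cell
```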

Bridging the gap between research and real-world impact 

One of the most powerful aspects of AI for Good is its ability to bridge academic research with practical outcomes. The partnership between GMP, LSE and N-AI illustrates this beautifully. By combining rigorous academic methods, public sector collaboration, and agile technology development, the project turned theory into impact. 

LSE researchers supported the ethical design and evaluation of the deployment, ensuring that the system aligned with principles of transparency, non-discrimination, and accountability. GMP provided on-the-ground feedback to improve functionality and ensure usability. The result was not just a successful field trial, but a model for how AI can be responsibly integrated into public institutions. 

Designing ethical AI 

We live in a world of increasingly complex challenges, from climate change to urbanisation to social unrest. AI for Good offers a framework for approaching these issues with innovation and care. But it’s not automatic. For AI to do good, it must be designed with purpose, deployed with oversight, and continuously evaluated against ethical benchmarks. 

Projects like N-AI’s crowd management platform show that it’s possible to meet operational needs while upholding civil liberties. They also show that public safety does not have to mean invasive surveillance; it can mean smart, respectful, and effective tools built for real people in real contexts.

Scaling AI for Good 

The future of AI for Good depends on our ability to scale its principles. That means expanding collaborations between academia, government, industry, and civil society. It means investing in AI education and digital literacy, particularly among marginalised groups. And it means holding developers and institutions accountable for how technologies are used. 


N-AI is already exploring broader applications of its platform, including predictive analytics and integration with other data systems, always with a focus on ethical safeguards. The goal is not just to innovate, but to lead by example: to show that AI, when guided by values, can become a force for good in the most critical domains of human life. 

AI for Good is not a slogan; it is a commitment. It challenges us to rethink what success looks like in tech and to prioritise impact over hype. By focusing on responsible design, inclusive practices, and measurable outcomes, we can ensure that AI uplifts rather than divides.


All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.

Image credit: takayuki on Shutterstock



About the author

Tom Kirchmaier

Tom Kirchmaier is Director of the Policing and Crime research group at the Centre for Economic Performance, LSE.

Posted In: Government | Law and Order | LSE Comment | Police