
Julia Ziemer

February 11th, 2020

A new approach to creating ‘tech for good’

Estimated reading time: 5 minutes


Julien Cornebise is a mathematician who wants to help non-profits use Artificial Intelligence. After leaving DeepMind in 2016, Julien began working pro bono for Amnesty International, helping them scale up their analysis of satellite images of Darfur to gather evidence for a 2016 campaign highlighting the atrocities taking place there. He also helped develop a tool to track abuse of women on Twitter, providing quantitative data for their 2018 report ‘Toxic Twitter: A toxic place for women’.

Now Julien is working on an endeavour that will bring the developers of machine learning into deeper collaboration with activists and non-profits working on the climate crisis and human rights. He has been talking to The Marshall Institute about how best to structure and govern new ventures for public benefit, and spoke to Julia Ziemer about his approach.

How would you describe your attitude to technology?

I find myself having to balance my enthusiasm as a scientist with my more pragmatic side that is very much against tech-solutionism.  I don’t buy into the Silicon Valley mentality that tech will solve all our problems.

That is why my preferred approach as a machine-learning scientist is to work closely with other experts (like NGO staff or policymakers) who have deep knowledge of the problems that need solving. In many instances, it is a case of finding existing technology and applying it in the most accessible and appropriate ways through close collaboration with these actors.

Do you think the popular myth that humans will be replaced by robots is at all realistic?

Ideally, AI technology should be used to help humans, allowing them to spend their time more effectively on the tasks they do best. But humans must always be kept in the loop; full automation would lead to failure and cause damage. With automated cars, for example, the fatal 2018 accident involving a self-driving Uber test vehicle shows how difficult it is to take technology that works perfectly in a controlled environment and make it operate effectively in difficult terrain with many variables. Ultimately the human ‘driver’, or supervisor, who was in the car is facing criminal charges for the crash. This is a prime example of what Madeleine Clare Elish calls a ‘moral crumple zone’[1]: in the context of human-machine collaboration, liability falls solely on the individual human even though the technology arguably played a significant role in the failure.

I think Steve Jobs’s description of computers as a ‘bicycle for the mind’ is a useful one: a bicycle is a more efficient tool than walking, but one that still involves human agency. I see this idea as the most constructive way to approach the use of machine learning.

What do you see as the role for technology in fighting the climate crisis? Could technical solutions solve climate change in the future?

It is very important that we continue to pursue research into potential large-scale responses to climate change. But doing bad things now in the slim hope of future technological rescue is irresponsible. Work is being done on geo-engineering, which involves trying to control the earth’s climate systems; on materials design to develop carbon-absorbing objects; and on fusion as a potential future source of clean energy, and AI could no doubt play a useful part in developing all of these. But they are currently still high-risk ideas, not a proven solution to anything.

In my view, we shouldn’t base our daily life or our policies on hopes for big technological solutions. We can dream for the best but should plan for the worst. I want to use technological tools to give more power to activists, regulators and the actors trying to fix the incentives. For example, if a government introduces tax credits for lowering carbon emissions or building wind turbines, AI technology could help with the accountability mechanism, making sure commitments are implemented. I see that as more likely to have impact in the short and medium term.

How would you sum up the principles of your new project?

  • I want to work closely with the organisations working on the ground, who can signal where help is needed
  • I hope to unlock new funding with the promise of solutions at scale – this is what technology can offer
  • I want to focus on reusable products rather than just standalone projects, so that they can serve positive, purpose-driven applications in multiple ways.

Julien is keen to hear from you if you are:

  1. Working in the climate change space and looking for machine learning tools to help your work
  2. A funder interested in supporting applications of deep technology, developed in an interdisciplinary way, to tackle the climate crisis and human rights violations

Recommended further reading:

Geek Heresy: Rescuing Social Change from the Cult of Technology  by Kentaro Toyama

Cory Doctorow’s blog on AI NOW

AI NOW at New York University


[1] Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’, Engaging Science, Technology, and Society (2019). https://estsjournal.org/index.php/ests/article/view/260/177

About the author

Julia Ziemer

Julia Ziemer is Institute Manager at the Marshall Institute. She has previously worked at Polis, LSE's journalism think-tank, the charity English PEN and the Literature Department of the British Council.

