
Julia Ziemer

December 4th, 2018

Can AI create a fairer world? Five principles for ethical coding

0 comments | 3 shares

Estimated reading time: 5 minutes


LSE MSc student Nicole Darabian reports on a Polis Talk given by Kriti Sharma at the LSE on 27 November 2018

With automation resulting in job losses, flawed algorithms perpetuating discriminatory practices and robots receiving citizenship, it is easy to feel discouraged by some of the recent contributions of artificial intelligence (AI). However, in a world of growing scepticism, Kriti Sharma is an optimist. As an artificial intelligence technologist, she has made it her mission to incorporate greater ethics and diversity into machine learning in order to promote social good. Throughout her talk, Kriti highlighted examples where AI is making strides in addressing global challenges and achieving social impact. For instance, she recently helped develop an AI application that works as a non-judgemental companion, offering support and guidance to South African women who may be victims of domestic violence and harassment. Because these crimes are highly stigmatised, victims tend not to speak about their situation, leaving many cases unreported.

Kriti Sharma speaking at the LSE on 27/11/2018

The room asks – how do we ensure the world derives more social value from AI? Kriti believes it starts by acknowledging our own bias and realising that algorithms are opinions embedded in code. To make AI fairer, Kriti offers five guiding principles:

  1. AI should reflect the diversity of the users it serves. If algorithms are going to be used to benefit society, they must reflect multiple cultures and ethnicities. Diversity is key in the teams designing and developing technology: “when people from diverse backgrounds come together, when we build things in the right way, the possibilities are limitless”, Kriti says. While there is value in fixing an algorithm’s code after the fact, it is critical to filter out as much bias as possible during the creation process, so that algorithms do not learn to reproduce our own stereotypes in the first place.

 

  2. AI must be held accountable. If a machine makes a bad decision, who is to blame? While consensus is growing that AI cannot remain unscrutinised, there is not yet a clear process for whom to turn to or how to act. Kriti mentions the UK’s Centre for Data Ethics and Innovation, where she is a member of the advisory board, as a good step towards promoting ethical AI governance nationwide. As governments debate what shape and form AI regulation should take, Kriti thinks we must also create a new model of “evolutionary regulation” that can react effectively to the rapid pace of innovation. Similarly, Kriti acknowledges that private companies, often the most resistant to revealing the code behind their algorithms, have a key role to play. Among other best practices, it is essential that developers take responsibility for the code powering AI and ensure they do not build algorithms that cannot be explained.

 

  3. Reward AI for showing its workings. AI should not be deployed blindly and must align with human values. Since algorithms learn from the diverse datasets that feed them, reward mechanisms offer one approach: developers can use reinforcement learning that rewards AI not only for achieving a certain result, but for achieving it in a way that aligns with human values.

 

  4. AI should level the playing field. This is perhaps the principle that requires the most attention. The “raison d’être” of AI should be to benefit humanity, and much more needs to be done to ensure AI is accessible and does not exacerbate socio-economic inequalities. As we stand today, this is particularly problematic given that most of the world’s data sits with four or five companies, creating unequal opportunities to develop AI that can serve humanity.

 

  5. AI will replace but it must also create. It may be inevitable that certain jobs will be replaced by machines, but Kriti sees a silver lining. As we invest in AI, we must invest even more in jobs for which human skills are paramount. Reasoning, creativity and empathy are all valuable human skills that can complement AI.

Undoubtedly, there is still a lot to be done to make AI more socially just, and while these principles may be only one part of the solution, Kriti’s message inspires hope. Rather than treating AI as technological determinism, we should harness its potential in ways that allow algorithms to reflect and serve the diversity of the world we live in. Technical tweaks will not fix everything; ensuring responsibility and ethical guidance in the development of this powerful technology is vital.

By Nicole Darabian (@nicoledarabian)

About the author

Julia Ziemer

Julia Ziemer is Institute Manager at the Marshall Institute. She has previously worked at Polis, LSE's journalism think-tank, the charity English PEN and the Literature Department of the British Council.

Posted In: Featured | Media and Communications in Action | Tech