
Veronica Barassi

December 13th, 2022

AI, The Western Illusion of Human Nature and The Human Error Project



This article by Veronica Barassi, Professor in Media and Communication Studies at the University of St. Gallen, is published to mark the release of “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media”, the first public report of the Human Error Project, based at the University of St. Gallen, Switzerland.

It’s all been a huge mistake. My modest conclusion is that Western civilization has been constructed on a perverse and mistaken idea of human nature. Sorry, beg your pardon; it was all a mistake. It is probably true, however, that this perverse idea of human nature endangers our existence.

(Sahlins, The Western Illusion of Human Nature, 2008: 112)

For the very first time in history, we are educating machines to understand the most varied and complex concepts that define our cultural worlds, and we are allowing them to create new meaning out of what we teach them. We train them to understand complex cultural values, such as what an animal, an object or a human being is. We are training them, in short, to think like a human.

As we train our AI systems, our world gets divided into systems of classification with categories and subcategories. In her latest book Atlas of AI, Kate Crawford beautifully explains the profound social and cultural complexity that we find when we teach our systems to understand our worlds. For example, the concept of “human body” is divided into “male body”, “female body”, “adult body”, “young body”, and so on. Crawford clearly demonstrates that these categories are not only cultural and dominated by Western thought, but also political, because their construction of the “social truths” of the world reflects the inequalities of our society. So, in teaching our AI systems to understand our hierarchically ordered social world, we are repurposing and amplifying the divisions and inequalities of the society to which we belong.

If systems of classification are always cultural and hierarchical and define our world view, how are we teaching our machines to understand our worlds? Are we teaching them the nuances of indigenous languages, for example? Will we be training them to understand complex cultural concepts like the Melanesian notion of the ‘dividual’, as discussed by Marilyn Strathern in her 1988 book The Gender of the Gift, which clearly shows that the concept of “person” varies from culture to culture?
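These questions can be made concrete with a small illustration. The Python fragment below is a hypothetical sketch of my own, not code from any real system or from the Human Error Project: it shows how a fixed, designer-chosen label taxonomy determines in advance which concepts a classifier can ever represent, and how anything outside that taxonomy simply cannot be expressed. All category names here are toy examples.

```python
# A minimal, hypothetical sketch: a fixed label taxonomy decides in advance
# which concepts a classifier can ever "see". Toy categories only.

TAXONOMY = {
    "human body": ["male body", "female body", "adult body", "young body"],
    "animal": ["mammal", "bird", "fish"],
    "object": ["vehicle", "tool", "building"],
}

# Flatten the taxonomy into the fixed set of labels the model can ever predict.
LABELS = sorted(child for children in TAXONOMY.values() for child in children)
LABEL_TO_INDEX = {label: i for i, label in enumerate(LABELS)}


def encode(annotation: str) -> int:
    """Map a human annotation to a class index; anything outside the taxonomy fails."""
    if annotation not in LABEL_TO_INDEX:
        # Concepts the designers did not anticipate, such as the Melanesian
        # 'dividual', simply have no slot in the system's world view.
        raise KeyError(f"'{annotation}' is not a category this system can see")
    return LABEL_TO_INDEX[annotation]


print(encode("female body"))  # works: the taxonomy has a slot for it

try:
    print(encode("dividual"))
except KeyError as err:
    print(err)  # fails: the taxonomy has no slot for this notion of personhood
```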

Unfortunately, we all know the answer to this: we are heading in a completely different direction, training machines on limited, pre-determined, deeply biased and error-prone concepts. Most of these systems are designed and developed in the West, are based on Western science, and often rely on a long history of scientific bias and human reductionism. Binary data systems belittle and misleadingly reduce the complexity of human experience. Indeed, over the last few years we have witnessed the rise of important critical research aimed precisely at showing how unjust, biased and unequal algorithmic logics and automated systems can be. Among others, MIT’s Joy Buolamwini has extensively examined racial bias in facial recognition technologies and algorithms, while Safiya Umoja Noble has applied a similar approach to online search algorithms and their results. We have also seen the rise of new critical debates, research centers, journalistic investigations and legislative frameworks which try to find a solution to ‘algorithmic injustice’. This has been particularly visible in cases of resistance against the deployment of facial recognition technologies in public spaces across Europe and the US and, in some cases, in the legislative responses launched in their aftermath. What emerges clearly from current research is that AI systems rely on thin, inaccurate and de-contextualized human data, and that they are trained on databases which are often filled with racist or sexist bias. Our technologies are shaped by our own history of scientific bias; they learn from it and reproduce it at an unprecedented scale.

What worries me most about this transformation is that these technologies are being adopted – at incredible speed – in ever more contexts, from schools to hospitals, from law enforcement to government infrastructures, to profile human beings and to make data-driven assumptions about their lives. It seems almost as if, over the last decades, we have convinced ourselves that predictive analytics, cross-referencing, facial recognition, emotion classification and all the other technologies that power our AI systems offer us a deeper understanding of human behavior and psychology, forgetting that they often offer us a very stereotypical, reductionist, standardized understanding of human experience.

In 2020, we launched The Human Error Project: AI, Human Rights, and the Conflict Over Algorithmic Profiling at the University of St. Gallen because we believed that – in a world where algorithmic profiling of humans is so widespread – critical attention needs to be paid to the fact that the race for AI innovation is often shaped by new and emerging conflicts about what it means to be human. In our research we use the term ‘the human error of AI’ as an umbrella concept to shed light on different aspects of algorithmic fallacy (bias, inaccuracy, unaccountability) when it comes to human profiling.

Together with my team (postdoctoral researchers Dr. Antje Scharenberg and Dr. Philip Di Salvo, and PhD researchers Marie Poux-Berthe and Rahi Patra), I am particularly interested in uncovering how the debate around “the human error in AI” is unfolding in different areas of society. We position ourselves among those scholars who have called for an analysis of the ‘political life of technological errors’ and for a qualitative approach to the understanding of algorithmic failures. Our aim is to map the discourses and listen to the human stories of different sections of society, to try and understand how AI errors, when it comes to the profiling of humans, are experienced, understood and negotiated.

In order to achieve our goals, our team is researching three different areas of society where these conflicts over algorithmic profiling are being played out in Europe: civil society organisations, the media and journalists, and critical tech entrepreneurs. Across these sections of society we rely on three main methodologies: critical discourse analysis, organizational mapping, and the collection of 100 in-depth interviews. The Human Error Project is also hosting two PhD research projects that focus on algorithmic bias and injustice:

  1. Ms Rahi Patra’s PhD research on Dalit surveillance in India is concerned with scientific racism and the inequality of AI systems.
  2. Ms Marie Poux-Berthe is focusing her PhD on ‘the aged’, looking into how older people reflect on algorithmic profiling and the ageist bias of our technologies.

In December 2022, we published the first report of the Human Error Project, titled “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media”. The report is the product of a two-year critical discourse analysis of 520 news media articles covering the issue of AI errors in human profiling, focusing on three core countries: the UK, France and Germany. It is the beginning of a broader discussion on the conflict over algorithmic profiling in Europe.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Possessed Photography on Unsplash

About the author

Veronica Barassi

Veronica researches and writes about the impact of data technologies and artificial intelligence on human rights and democracy. She is an anthropologist and Professor in Media and Communication Studies in the School of Humanities and Social Sciences at the University of St. Gallen, as well as the Chair of Media and Culture in the Institute of Media and Communications Management. Veronica’s previous research focused on social media and political campaigning and the question of digital citizenship. She is the author of Activism on the Web: Everyday Struggles against Digital Capitalism (Routledge, 2015). Over the last five years she has investigated the impact of children’s data traces on their civic rights, and the meaning of a society which ‘datafies’ its citizens from before birth. Her most recent books are Child | Data | Citizen: How Tech Companies are Profiling Us from before Birth (MIT Press, 2020) and I figli dell’algoritmo: Sorvegliati, tracciati, profilati dalla nascita (Luiss University Press, 2021).

Posted In: Artificial Intelligence
