
Grace Lordan

Paris Will

Dario Krpan

November 3rd, 2022

How should we decide when to use artificial intelligence in decision-making?



Should organisations adopt artificial intelligence in their decision-making processes? Does AI work better than human judgment? In the case of recruitment, Grace Lordan, Paris Will, and Dario Krpan write that AI-based methods lead to more efficient hiring decisions, greater diversity among recruits, and better performance in new hires. However, there is a problem of perceptions: sentiment towards AI hiring is much more negative than sentiment towards human hiring.

Over the last decade, novel artificial intelligence (AI) technologies have emerged and are now beginning to pervade most areas of our lives. AI is used to monitor crime, hire employees, diagnose medical problems, and even drive cars. In some cases these changes involve the complete automation of processes; in others, an integration of human-computer interaction.

As these technologies have been developed and implemented, they have drawn a divided response from researchers, practitioners, and the public. The argument centres primarily on whether AI will reduce cognitive bias in our decision-making or perpetuate it, driving suboptimal outcomes.

There is merit to both sides of the argument. On the one hand, AI can be designed to remove factors laced with bias, creating fairer decisions. On the other, if the data used is not representative and the algorithm is not trained properly, bias can be perpetuated on a larger scale than human bias ever could be. Since prior research has shown that both situations are possible (Lambrecht & Tucker, 2019; Sajjadiani et al., 2019), we need a better way to judge whether we should adopt novel AI technologies. Theory on AI bias will never be conclusive; we need to look at how the technologies are actually performing.

We propose that to gain clarity on whether certain AI technologies should be adopted, we should compare the outcomes of AI decision-making directly to the outcomes of the status-quo method of decision-making. We note that in most cases, human decisions represent the status quo.

We demonstrate this method through a recent systematic review (Will, Krpan, & Lordan, 2022) in which we synthesise findings comparing human hiring methods to algorithmic hiring methods. The results indicate that AI methods outperform humans: they make more efficient hiring decisions, increase the diversity of recruits, and yield better-performing new hires. With respect to perceptions, however, sentiment towards AI hiring is much more negative than sentiment towards human hiring.

Given these results, we would recommend, in line with our framework, that AI be adopted in the contexts in which it has been shown to outperform humans. However, we also recognise that the negative perceptions towards AI may represent a barrier to adoption. Even where AI performs better than humans, emotional factors may hold people back from embracing these new technologies.

Past findings corroborate that people are more averse to AI decisions than to human decisions, even when AI outperforms humans, a phenomenon known as algorithm aversion (Dietvorst, Simmons, & Massey, 2015). To push adoption forward, it is therefore imperative to make people feel comfortable with these technologies and to address their concerns. Studies identified in the systematic review show that the degree to which hirers address privacy concerns about how the technologies work influences candidates' perceptions of their fairness and acceptability (Kodapanakkal et al., 2020; Langer et al., 2019).

Some researchers will argue that there are limits to what AI can predict, and that it is dangerous to use algorithms for outcomes characterised by a large degree of variability, such as hiring. We agree that, as the technology currently stands, there are limits to what can be accurately predicted.

AI currently functions well only in the contexts in which it has been trained, known as weak AI. Strong AI, which would generalise across contexts, is not yet reliably established. This means that when we use AI, for example in hiring, the algorithm should be trained for the specific context, such as the job role and industry, in which it will be used.
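To illustrate what this context-specificity can look like in practice, here is a minimal sketch in Python. Everything in it is hypothetical and not drawn from our review: the class, the (job role, industry) keys, and the stand-in "training" step, which simply stores a historical mean. The point is only that a weak-AI system keeps one model per context and refuses to score candidates from contexts it was never trained on.

```python
# Hypothetical sketch of context-specific ("weak AI") screening.
# Not from the systematic review; all names and data are invented.

from statistics import mean

class ContextScopedScreening:
    """Keeps one scorer per (job_role, industry) context."""

    def __init__(self):
        self.models = {}  # (job_role, industry) -> trained baseline

    def train(self, job_role, industry, past_hire_scores):
        # Stand-in for real model training: store the historical mean score.
        self.models[(job_role, industry)] = mean(past_hire_scores)

    def score(self, job_role, industry, candidate_score):
        key = (job_role, industry)
        if key not in self.models:
            # Weak AI should not be asked to generalise to unseen contexts.
            raise ValueError(f"No model trained for context {key}")
        # Report how the candidate compares with the context's baseline.
        return candidate_score - self.models[key]

screening = ContextScopedScreening()
screening.train("analyst", "finance", [3.2, 3.8, 3.5])
print(screening.score("analyst", "finance", 4.0))  # trained context: scored
# screening.score("engineer", "aerospace", 4.0)    # untrained context: raises
```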

Although AI prediction currently has limits, so does humans' ability to predict outcomes. For example, in our systematic review, when predicting the future promotions of job candidates, AI, despite its limited abilities, still performed better than humans. In any social decision-making scenario we are unlikely to make perfect decisions for some time, but this should not dissuade us from implementing technologies that improve on previous methods.

We can't conclude whether every type of AI should be adopted, but we offer a simple framework through which practitioners and researchers can decide whether to adopt their specific type of technology: compare the outcomes of human and AI decision-making in specific contexts and determine whether AI performs better than, equal to, or worse than humans. Decisions on adoption should therefore be context-specific to the scenario in which the technology is employed, and should be monitored over time to ensure reliability.
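As a concrete illustration of the comparison step, the sketch below scores the status-quo (human) and AI processes on the same outcome metrics and reports which performs better on each. The metric names and figures are invented for illustration; this is the comparison logic of the framework, not the methodology of our review.

```python
# Hypothetical sketch of the human-vs-AI outcome comparison.
# All metric names and figures are invented for illustration.

from statistics import mean

def compare_methods(human_outcomes, ai_outcomes, metrics):
    """For each metric, report averages and which method performed better."""
    verdicts = {}
    for name, higher_is_better in metrics.items():
        h = mean(record[name] for record in human_outcomes)
        a = mean(record[name] for record in ai_outcomes)
        if a == h:
            better = "equal"
        else:
            better = "AI" if (a > h) == higher_is_better else "human"
        verdicts[name] = {"human": round(h, 2), "AI": round(a, 2), "better": better}
    return verdicts

# Invented outcome records for hires made under each method.
human_hires = [{"days_to_hire": 42, "diversity": 0.31, "performance": 3.4},
               {"days_to_hire": 38, "diversity": 0.29, "performance": 3.6}]
ai_hires = [{"days_to_hire": 21, "diversity": 0.44, "performance": 3.9},
            {"days_to_hire": 25, "diversity": 0.41, "performance": 3.7}]

# For each metric, state whether a higher value counts as better.
metrics = {"days_to_hire": False,  # lower is better (efficiency)
           "diversity": True,
           "performance": True}

for metric, verdict in compare_methods(human_hires, ai_hires, metrics).items():
    print(metric, verdict)
```

On this invented data the AI process comes out ahead on all three metrics; in practice, repeating the same comparison over time is what the framework's monitoring step amounts to.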

♣♣♣


About the authors

Grace Lordan

Grace Lordan is an associate professor in the Department of Psychological and Behavioural Sciences at LSE. She is the founder and director of LSE's The Inclusion Initiative (http://www.lse.ac.uk/tii). She wrote the book "Think Big, Take Small Steps and Build the Future you Want". http://www.gracelordan.com/

Paris Will

Paris Will is the Lead Corporate Research Advisor at The Inclusion Initiative at the London School of Economics. In this role she carries out behavioural science research for corporate and executive clients. Her primary research assesses applications of artificial intelligence with the goal of reducing decision-making bias.

Dario Krpan

