Why Artificial Intelligence is a misnomer

Paul Whiteley

October 19th, 2023

Estimated reading time: 5 minutes

Since Artificial Intelligence first began to develop, we have had a tendency to humanise and anthropomorphise this technology in how we think about and discuss it. But does this enhance or impede our understanding of AI and its capacities, both now and in the future? Paul Whiteley asks.


Artificial Intelligence (AI) is moving ahead rapidly, with Google announcing a new chatbot that will attend a meeting on your behalf if you do not have time to turn up yourself.

The arrival of large language AI models like ChatGPT has set off a firestorm of speculation around the topic. The British computer scientist Geoffrey Hinton, having been closely involved in developing the technology during his career, left Google with a warning about its potential dangers.

The entrepreneur Elon Musk, who has been involved in funding the technology, has voiced similar concerns. He has argued that AI urgently requires regulation if we are to avoid a “catastrophic outcome”.

What’s in a name?

That said, AI is likely to be very useful in the economy. In the polling industry, for example, it will give pollsters additional tools and automate tasks such as selecting samples, weighting data to correct for biases, following up non-respondents in surveys, and undertaking data analysis. Complex methods such as multilevel regression with post-stratification (MRP) will become accessible to non-specialists, since AI can efficiently write computer code.
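
To give a flavour of one of these tasks, here is a minimal sketch in Python of weighting a sample to match known population shares; the respondents, age bands and population figures are all invented for illustration.

```python
import pandas as pd

# Invented survey sample: each row is one respondent.
sample = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "55+"],
    "vote":      ["A",     "B",     "A",     "B",   "B",   "A"],
})

# Assumed population shares for each age group (e.g. from a census).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Weight = population share / sample share: over-represented groups
# are weighted down, under-represented groups up.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

# Weighted estimate of support for party A.
support_a = (sample["vote"].eq("A") * sample["weight"]).sum() / sample["weight"].sum()
print(f"Weighted support for A: {support_a:.1%}")
```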


But should this technology really be called artificial intelligence? The term was coined by computer scientist John McCarthy in 1955, and he may have done us a disservice by describing it in these terms. This is because much of it is essentially a complex prediction machine which searches large amounts of data and answers questions based on those searches, without understanding anything about the underlying concepts or theories at work. Essentially, the term is an anthropomorphism of a form of computer-based applied statistical analysis.
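
To make the “prediction machine” point concrete, here is a minimal sketch in Python of next-word prediction driven purely by co-occurrence counts. The toy corpus is invented, and real large language models are vastly more sophisticated, but the underlying idea of predicting from data without understanding is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: it "answers" purely from co-occurrence
# counts in its (invented) training text, with no grasp of meaning.
corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training data."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict("sat"))  # "on" - pure frequency, no understanding
```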

Algorithms vs. inference

In their book “Computer Age Statistical Inference”, Bradley Efron and Trevor Hastie, two Stanford University statisticians, explain how computer-based statistical analysis has developed over time and how it currently works. They argue: “algorithms are what statisticians do while inference says why they do them”. In other words, algorithms are recipes allowing anyone to follow a procedure without understanding it, whereas inference is about testing theories created by imagination, insight and understanding.

There are some exotic concepts in the field of computer-based statistical inference such as “lasso regression”, “random forest analysis” and “neural networks”. But underlying these is a fundamental workhorse technique used in statistical work called regression analysis. This involves using a set of variables to predict the behaviour of one or more other variables. For example, we might use a person’s age, sex, educational background, family income and other measures to forecast their voting behaviour in an election. But to be accurate, this requires a theory to explain why people vote.
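
As an illustration, here is a minimal sketch of this workhorse technique in Python, using scikit-learn on entirely synthetic data; the predictors, coefficients and the toy “theory” of turnout are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic respondents: age, years of education, relative income.
age = rng.uniform(18, 90, n)
education = rng.uniform(8, 20, n)
income = rng.normal(0, 1, n)

# A toy "theory" of turnout: older, better-educated, better-off
# people are assumed more likely to vote.
log_odds = -4 + 0.05 * age + 0.1 * education + 0.3 * income
voted = rng.random(n) < 1 / (1 + np.exp(-log_odds))

# Regression recovers the relationship between predictors and turnout.
X = np.column_stack([age, education, income])
model = LogisticRegression(max_iter=1_000).fit(X, voted)
print(dict(zip(["age", "education", "income"], model.coef_[0].round(2))))
```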

Why do we need theory to make sense of algorithms? We could use an algorithm to search the web for anything which appears to predict voting behaviour in a regression model. This is referred to as “unsupervised” learning in machine learning. It involves predicting the behaviour of voters from large numbers of variables, potentially thousands of them, without any understanding of the causal processes at work. In this type of analysis, the researcher may not even know what the predictor variables are, because they are often bundled together into composite measures of various kinds.
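
A minimal sketch, assuming synthetic data, of what such bundling can look like: thousands of raw variables are compressed into a handful of principal components, and the regression is then run on composites whose substantive meaning the researcher may never inspect.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, p, k = 500, 2_000, 5  # far more candidate predictors than respondents

# Latent factors drive both the raw variables and the outcome.
factors = rng.normal(size=(n, k))
loadings = rng.normal(size=(k, p))
X = factors @ loadings + rng.normal(scale=0.5, size=(n, p))
y = factors @ [0.5, -0.3, 0.2, 0.0, 0.0] + rng.normal(scale=0.5, size=n)

# Bundle thousands of raw variables into a handful of composites.
composites = PCA(n_components=k).fit_transform(X)

# The composites predict well, yet the researcher may have no idea
# what any individual composite substantively means.
model = LinearRegression().fit(composites, y)
print(f"R-squared on composites: {model.score(composites, y):.2f}")
```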

One of the main drawbacks of algorithmic analysis of this type is the problem of “spurious regression”. This occurs when a variable appears to predict something which makes no sense in terms of any understanding of the mechanisms at work. For example, there is a highly statistically significant relationship between stork migration patterns and the human birth rate in Europe. An unsupervised algorithm might well add this to a predictive model of the human birth rate, oblivious to the fact that storks do not bring babies.
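
A small simulation with invented numbers shows how a shared time trend can manufacture exactly this kind of spurious relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(50)

# Two causally unrelated series that both happen to trend upwards,
# standing in for stork migration counts and human births.
storks = 100 + 2 * years + rng.normal(scale=5, size=50)
births = 1_000 + 15 * years + rng.normal(scale=40, size=50)

# The shared trend produces a striking "relationship" out of nothing.
print(np.corrcoef(storks, births)[0, 1])   # close to 1

# Differencing away the trend makes the illusion vanish.
print(np.corrcoef(np.diff(storks), np.diff(births))[0, 1])  # near 0
```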

General intelligence

The holy grail in the field of AI is what is referred to as general intelligence. This means performing well across a wide variety of different problems. Currently, the algorithms which drive AI are very good at one thing, but relatively hopeless at everything else. For example, “Deep Blue” was a purpose-built chess-playing computer created by IBM. In a six-game contest in 1997, it beat the world champion Garry Kasparov, winning two games, losing one and drawing three. This contest was regarded as a landmark event in the development of AI. But if Deep Blue were asked to play draughts it would be quite unable to do so, because it was a special-purpose machine programmed only for chess.


The computer scientist and tech entrepreneur Erik Larson describes the search for general artificial intelligence in the following terms: “No algorithm exists for general intelligence. And we have good reason to be sceptical that such an algorithm will emerge through further efforts on deep learning systems, or any other approach popular today”.

To test the effectiveness of AI, I asked ChatGPT to tell a joke, to see what it made of that most human trait, a sense of humour. It came up with the following: “Why don’t scientists trust atoms? Because they make up everything!”. My advice to the algorithm is, don’t give up your day job to join the stand-up comedy circuit any time soon.


Image credit: Rohan Gupta via Unsplash

All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.


About the author

Paul Whiteley

Paul Whiteley is Emeritus Professor of Government at the University of Essex. His research interests are in electoral behaviour, public opinion, political economy and political methodology.

Posted In: Economy and Society | Party politics and elections | Political Participation
This work by British Politics and Policy at LSE is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported.