
Martin Anthony

January 25th, 2024

Meet the Academic: Professor Nello Cristianini

Estimated reading time: 10 minutes

Nello Cristianini is a Professor of AI in the Department of Computer Science at the University of Bath. His research interests include machine learning, data science, computational social science, the philosophy of AI and the regulation of AI. We caught up with him to discuss his research, his book and the future of AI in the lead-up to his Public Lecture at LSE.

Hi Nello, thank you for joining us today. Can you tell us a little bit about yourself and the intellectual journey that has taken you to the interests you have today?

I have been studying AI for a very long time, ever since I got my first computer, a Sinclair ZX80, in 1981. But my high school was about Classics, and my first degree was in Physics, so I had to wait a long time before I could formally study machine learning and all the other things that really interested me. In the days before the web I had found a way to order articles and books via the university library, which was also good for learning some English. All this happened in Italy. Then I came to the UK to study, then I found a job in America, and then I came back here. The thesis of my first degree was on PAC learning, and that project started in 1995, so I have been at this for nearly 30 years.

Your recent book is subtitled “Why Intelligent Machines Do Not Think Like Us”. Can machines be “intelligent”? And how should we think of this as being different from human intelligence?

Intelligence existed on this planet well before humans did. We see signs of intelligence in the animals that preceded us: from their fossils we know they had brains and senses, that they hunted and escaped predators, that some hunted in packs, and so on. Intelligence is not exclusively human, and not even exclusive to the animal kingdom. It is the ability to act appropriately even in new situations for which we have no script, and in this sense machines too can be intelligent. Just consider how YouTube can recommend a new video to a new user. But there is no reason to expect that we make these decisions in the same way.

Is the sudden impact of artificial intelligence mainly a result of genuinely new paradigms, or is it a consequence of increased computing power?

Both. The increase in computing power allowed us to explore a path to intelligence that we could not have explored before, and frankly could not even have imagined. We can predict human decisions, for example book purchases, without understanding the contents of the book or the personality of the reader; we can also translate documents without understanding their contents, or the laws of language. A lot can be done just on the basis of statistical signals, but only if we have enough data. These days that often means millions or billions of documents, as in the training of GPT, a large language model that can answer exam questions at university level despite having been trained on the simplest of statistical signals.
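To make "the simplest of statistical signals" concrete, here is a minimal toy sketch (our illustration, not anything from the interview or from GPT's actual training): a bigram model that guesses the next word purely from co-occurrence counts in a tiny invented corpus, with no notion of meaning. Real language models learn far richer statistics from billions of documents, but the principle of learning from the signal alone is the same.

```python
# Toy bigram model: predict the next word from co-occurrence counts alone.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat": a purely statistical guess
```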

If you could point to one major development in artificial intelligence over recent years, what would that be?

Until a few years ago I might have said AlphaZero, which learnt how to play Go and Chess on its own and from scratch. But now I have to say the large language models of the GPT class, which show remarkable emergent abilities: trained just on the task of predicting missing words in a text, they acquire striking new skills, such as completing syllogisms or answering questions. These abilities only appear after a certain amount of data has been seen. In both cases, however, we see machine learning in action. So I think that the major development behind AI in the past decades has to be machine learning, which has become a very sophisticated technology.

Do you think that machines can “understand”, or is it pointless to try to draw such anthropomorphic comparisons?

This may be a matter of words, but I think that when you can respond appropriately to your environment because, having observed it, you know what to expect, and when you can even imagine the outcome of situations that you have never observed, then you do comprehend something of its mechanisms. So yes, there is no reason why machines should not understand the world; they just do not do it in the same way we do.

You write in your book about the possible dangers of machines behaving in a very literal-minded way. How can we prevent this from being a problem?

Imagine a machine tasked with increasing clicks on a website, perhaps one that recommends short videos to teenagers. It will explore its environment and will learn all sorts of tricks that lead to increased click rates. It might even learn to exploit the specific vulnerabilities of specific individuals. How can it learn to recognise when it is causing damage, while also pursuing the very goal that we have given it? Making sure that the machine respects our norms will not be easy, due to the nature of the way it works. The problem is even more serious when we allow machines such as GPT to solve problems by selecting which sub-goals should be completed along the way. Once more: how do we ensure that those intermediate goals do not violate our values and norms?
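As a hypothetical sketch of the recommender described above (our illustration, with invented video categories and click probabilities, not anything Cristianini describes building): an epsilon-greedy bandit whose only objective is the click rate. Nothing in its objective represents the user's wellbeing, so whatever raises clicks is, to the agent, simply a good strategy. That gap is exactly the alignment problem in the answer.

```python
# Toy click-maximising recommender: an epsilon-greedy bandit whose only
# signal is clicks. All categories and rates below are invented.
import random

videos = ["news", "sport", "gossip", "outrage"]
clicks = {v: 0 for v in videos}   # observed clicks per video
shows = {v: 0 for v in videos}    # times each video was recommended

def simulated_user(video):
    """Stand-in for real users: some content is simply more clickable."""
    rates = {"news": 0.10, "sport": 0.15, "gossip": 0.30, "outrage": 0.45}
    return random.random() < rates[video]

for step in range(10_000):
    if random.random() < 0.1:     # explore occasionally
        choice = random.choice(videos)
    else:                         # otherwise exploit the best click rate so far
        choice = max(videos, key=lambda v: clicks[v] / max(shows[v], 1))
    shows[choice] += 1
    clicks[choice] += simulated_user(choice)

# The agent converges on whatever clicks best, regardless of harm.
print(max(videos, key=lambda v: clicks[v] / max(shows[v], 1)))  # likely "outrage"
```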

Should we be worried at all about effective “manipulation” of humans by artificial intelligence systems, such as the targeting of messages or the filtering of social media and news, creating polarisation and echo chambers?

Remember, users are part of the environment that certain learning agents need to “exploit”, so some manipulation might be part of the game. But we should distinguish between intended and secondary effects, at the very least.

How well can recommender systems steer users? What could the unintended consequences be? Can they induce polarisation by proposing only content that aligns with the user’s beliefs? The first step is to be able to measure these possible secondary effects, and this is an important job for psychologists; I do not see enough studies in this direction. Maybe someone at LSE will want to help. We will hear a lot about this during the next elections, in the UK and in the rest of the world.

Have we become too trusting of artificial intelligence and other algorithmic or machine-based decision-making systems, or is it the case that their accuracy on balance is an improvement upon alternatives that would inevitably involve human error?

Accuracy is not the only metric that we care about; we have chosen to pursue other goals too, such as equality, privacy, freedom of expression, transparency, and so on. So while a decision should always be correct and based on accurate predictions, we must also resolve the inherent tensions between all those competing goals. That task belongs to the social sciences and the humanities; again, as in the case of the psychologists, it is not a mathematical issue. A common culture and common language among scholars would be beneficial here.

On balance, are you optimistic about artificial intelligence’s impact on our society?

Yes, I am very optimistic, but we should keep our eyes open, be involved, study the challenges, and not hide from them. Solving a single problem is more useful than complaining about a thousand. We have to learn quickly how to coexist safely with AI, and to avert risks before they become problems. We have started: the laws are on their way, and we will not repeat the mistakes we made with the environment. This time we are acting before we have actually ruined things, but we need to be vigilant. I am very optimistic because I meet my students and see how clever they are, and how thoughtful. We are in good hands.

Glad to hear it! Thank you very much for joining us, Nello.

If you’d like to listen to Professor Cristianini delve further into the fascinating world of AI, why not join us on February 12 for his Public Lecture at LSE? Find out more information here.

About the author

Martin Anthony

Posted In: Featured | General | Machine Learning | Meet the Academics
