When machines become sentient, we will have to consider them an intelligent life form

Andrew Murray

August 10th, 2018


For most of us, our understanding of robots and artificial intelligence (AI) is drawn more from science fiction than from fact. Intelligent robots are often portrayed as either a virulent threat to humanity, as seen in the Terminator series of films or in Isaac Asimov’s I, Robot, or a socially beneficial tool, as with the Star Wars robots R2-D2 and C-3PO or Star Trek’s Lieutenant-Commander Data. The truth of AI and robotic integration into society, however, is unlikely to be either of these, and, as developments in AI continue, perhaps we should pause to re-evaluate how we view these ever-evolving machines.


As a first step, we need to stop thinking of robots as human facsimiles. Science fiction tends to imagine robots that mimic human movement and language; while it is true that we are developing robots like these, the bulk of everyday robots will in all likelihood not look or sound human. Many will be specialised devices not dissimilar to the production-line robots of today, carrying out spot-welds on cars or packing shirts for shipping; others will exist without corporeal bodies at all, as mere lines of code that control self-driving cars or drones, or that act as the future personal assistants replacing Siri, Cortana and Alexa.


As we move towards robots becoming sentient, it is clear that we must start to rethink what robots mean to society and what their role is to be. Today much debate surrounds what I label the “sci-fi debate”. Among others, Professor Stephen Hawking and entrepreneur Elon Musk have warned of the threat that robots and AI pose to human safety and security, a position held by 36 per cent of people in the UK according to a 2015 YouGov survey for the British Science Association. Alternatively, the passive or socially useful robot has been demonised as a direct threat to human employability. The Bank of England warned in 2015 that up to 15 million jobs in Britain are at risk of being lost to robots, while a 2016 report from Forrester Research suggested that, by 2021, robots will have eliminated six per cent of all jobs in the US.

Despite these dire warnings, we continue to press ahead in robotics and AI research. Why? Because there is a dissonance between the sci-fi debate and the future role of robots in our society. The first generation of truly smart AI devices is likely to be self-driving vehicles, which offer potentially massive social benefits. From a public safety perspective these benefits are clear. In 2016, 1,810 people were killed on Britain’s roads and 25,160 were seriously injured. With around 90 per cent of road traffic accidents attributed to human error, self-driving cars could save around 1,600 lives and prevent around 22,500 serious injuries per annum. Then there are the economic benefits for major corporations. Delivery companies, ride-share apps and even public transport providers can replace employees with smart robots, saving billions per annum and removing the risk of industrial action. Against such a backdrop it is easy to see why AI is attractive. Similar arguments can be made for the objective impartiality of AI judges and the precision of robot surgeons.
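
The arithmetic behind these estimates is simple (a back-of-the-envelope reading, on the simplifying assumption that every casualty attributable to human error would be avoided):

0.9 × 1,810 deaths ≈ 1,630, or roughly 1,600 lives saved per year
0.9 × 25,160 serious injuries ≈ 22,640, or roughly 22,500 serious injuries prevented per year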

These arguments and debates are not the root of my interest in AI and robotics, however. While most people are looking at the challenge of AI and robotics to society, I’m looking at the challenge of the robots to us. What is the human cost of integrating AI and robotics into society? It is clear that using intelligent devices, even the base algorithmic intelligence of a current smart agent like Alexa, changes the way that humans think and make decisions. We retain less information and outsource the storage of data to our devices. This means that these external devices filter the information provided to us when we make a decision: we lose some of our autonomy, trading it for convenience and for a perceived “fuller picture” that, in reality, we do not get.

As Eli Pariser has shown in his book The Filter Bubble, a vital role of technology is choosing what not to reveal to us. In 1987 we might have made a decision based on incomplete information, but the question of what to retain and what to discard was a purely human decision. Thirty years later we have more information, but that information is valued and presented to us not by a human thought process but by algorithmic design. The information society we value so highly has created too much information for us to process. We are faced with a tyranny of choice created by overwhelming data and have outsourced the filtering of that data to algorithms and devices. This has led to developments like big data analytics and algorithmic regulation.

As we approach the brave new world of human-level machine intelligence, which some commentators believe could be with us by 2030, we will, however, be asked some very deep questions about our identity and what it means to be human. The first significant challenge is likely to be how we treat our new equals. A common theme of sci-fi is human inability to recognise and treat with respect sentient life forms different from our own. If we do achieve human-level artificial intelligence within the next 20-30 years, what we do next will define both us as humanity and our relationship with our creation. Will we treat it with respect, as an equal, or will we treat it as a tool?


Today when we talk of AI and robotics we normally define them as tools or devices to be used as we please: to drive cars or fly planes, to mine in dangerous environments, or simply to manage our everyday lives. This may be acceptable with the current standard of low-level machine intelligence; however, when machine intelligence reaches human level and becomes sentient and self-aware, we will have to consider it an intelligent life form. If we then continue to treat it as a tool or device, that will be no different from treating humans in this way. The UK abolished human slavery in 1833; in less than 30 years we may be revisiting the debate. Such debates, or even the possibility of such debates, mean that for lawyers AI and robotics offer a unique opportunity to hold a mirror up to humanity and society and to examine how we make and uphold our most fundamental legal principles and norms.

♣♣♣

Notes:

  • This blog post appeared originally on LSE Connect, the magazine of LSE’s alumni community.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: Photo by DasWortgewand, under a CC0 licence
  • When you leave a comment, you’re agreeing to our Comment Policy.

About the author

Andrew Murray

Andrew Murray is professor of law, with particular reference to new media and technology law. He is also a fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (FRSA). Andrew studied law at Edinburgh University, graduating (LL.B. Hons) in 1994. He undertook the one-year postgraduate diploma in legal practice during 1994-95 and then spent a year as a research assistant in the department of private law, University of Edinburgh, before taking up a lectureship in law at the University of Stirling in 1996. He joined LSE in 2000.
