Blog Admin

February 10th, 2019

Book Review: Robot Rights by David J. Gunkel

Estimated reading time: 5 minutes

In Robot Rights, David J. Gunkel explores the question of whether rights should be extended to robots, examining the philosophical foundations of four key positions and their implications. Gunkel’s interrogation of what has been seen as an ‘unthinkable’ idea offers a valuable and accessible contribution that will prompt reflection on the place of humans in the world and our relationship with other entities of our own making, recommends Ignas Kalpokas.

Robot Rights. David J. Gunkel. MIT Press. 2018.

The post-human turn in thinking about rights, privileges and agency has resulted in efforts to overturn anthropocentrism in considering both living and non-living things as well as machinic and algorithmic extensions of human beings (see here for a useful overview). However, discussing robot rights has remained, by author David J. Gunkel’s own admission, an ‘unthinkable’ idea, something that is susceptible to distrust at best and ridicule at worst. Hence, his new book Robot Rights is a crucial innovation in the way we think about our proper place in the world and relationships with entities of our own making. And while this questioning of the specificity and exclusivity of humanness is what connects the book to the wider post-humanist literature, Gunkel simultaneously engages with a broad spectrum of other literature spanning the domains of technology, law, communication, ethics and philosophy.

In trying to establish whether robots can and should have rights, Gunkel explores four main propositions, starting with an assertion that robots neither can nor should have rights. After all, robots are typically perceived to be mere tools or technological artefacts, designed and manufactured for the specific purpose of human use, i.e. as a means to an end rather than ends in themselves. As a result, the argument goes, there is simply no basis for a moral or legal status to arise, implying also that humans have no obligations to robots as independent entities. The only obligations towards robots would arise from them being somebody else’s property.

On the other hand, this mode of thinking opens up some fundamental questions that Gunkel is right to point out. In particular, as robots become ever more sophisticated and autonomous, their influence on the social and psychological states of human beings is increasingly on a par with that of fellow humans and significantly exceeds the influence that mere tools can exert. Hence, it would not be unreasonable to assume that such affective capacity, rather than the nature of the affecting object, should count as the main criterion for the attribution of moral status and, therefore, rights and privileges. As a result, it would perhaps be wrong to reduce all technological artefacts to mere tools regardless of their design and operation.

But there is an even deeper point that Gunkel raises: we are postulating a very Eurocentric idea of what ‘human nature’ actually is by emphasising the distinctness of human beings from their surroundings, while other traditions embrace very different ways of imagining the same relationships. As a result, Gunkel asserts, simplistically denying robot rights on the basis of their different nature ‘is not just insensitive to others but risks a kind of cultural and intellectual imperialism’ (77).

Image Credit: Matt Wiebe (CC BY 2.0)

The second proposition entertained by Gunkel asserts that robots both can and should have rights. It is a chiefly future-oriented proposition: although in their current stage of development robots are not yet capable of meriting rights, at some stage in the future (probably sooner rather than later) as they become more ‘human-like’, robots will cease to be mere things and will become moral subjects instead. Once that happens and robots, making use of proper artificial intelligence, become feeling and self-reflective conscious beings that enjoy autonomy and free will, it will become increasingly difficult (and morally unjustifiable) to deny robots the rights enjoyed by their fellow feeling and self-reflective conscious beings endowed with the capacity for autonomy and free will – humans. As a result, privileges, rights and immunities shall be granted. On the other hand, the same accusation of employing a Eurocentric anthropocentric standard of ‘human-like’ nature still applies, thus undermining the morality of such propositions.

The third proposition, even more anthropocentric than the previous one, stipulates that even though robots can have rights, they should not have them. The premise, as Gunkel emphasises, is deceptively simple: as far as the law is concerned, the attribution of rights is a matter of fiat. Once a legitimate authority, following the right procedure, passes a decision according rights to robots (or anything else), the latter immediately become endowed with such rights. In other words, rights do not depend on the qualities of their prospective possessor, as suggested by the two previous propositions, but merely on the will of the lawgiver: as such, even the current generation of robots could legally be bearers of rights. However, as the proponents of this proposition suggest, the mere fact that something can be done does not mean that it should be done. Instead, this proposition rests on the assumption that we are, in fact, obliged ‘not to build things to which we would feel obligated’ (109), because the opposite would open the floodgates to uncontrollable social transformations. Yet such a premise, Gunkel asserts, again necessitates accepting the Eurocentric and anthropocentric thesis of human exceptionality, with all the intellectual imperialism that it involves, which is, obviously, not the most attractive option.

Finally, the fourth proposition stipulates that even though robots cannot have rights, they should nevertheless have them. This particular perspective finds its basis in our tendency to accord value and social standing to the things we hold dear, particularly if such artefacts exhibit a social presence of some sort, robots being the obvious candidates. In other words, we invest things with our love and/or affection and, by doing so, dignify and elevate them from mere things to something more. As a result, robot rights would inhere, once again, not in the robots themselves but in their human owners. This, however, appears to be one of the weaker propositions, and Gunkel quickly dismisses it on the grounds of ‘moral sentimentalism’, its focus on appearances and, as readers might already have guessed, anthropocentrism (the instrumentalisation of others for the purpose of our own sentimental wellbeing).

The alternative proposed by Gunkel himself involves turning to the philosopher Emmanuel Levinas and his thesis that an encounter with otherness lies at the heart of ethics. Hence, it is not some predefined set of substantive characteristics or necessary properties inherent to the encountered other that determines the latter’s status, but rather relationships that are extrinsic and empirically observable. To put it in a form more immediately applicable to the book’s subject:

As we encounter and interact with other entities – whether they are another human person, an animal, the natural environment, or a domestic robot – this other entity is first and foremost experienced in relationship to us (165).

The key question is, therefore, neither whether robots can have rights nor whether they should, but instead how that which I encounter ‘comes to appear or supervene before me and how I decide, in the “face of the other” […] to make a response to or to take responsibility for (and in the face of) another’ (166). On the one hand, the Levinas-inspired solution avoids the anthropocentric prescription that rights-bearers must possess quasi-human traits. On the other hand, despite somewhat defusing the sentimentalism for which Gunkel criticises the fourth proposition, this approach still retains anthropocentrism in another way: by implying the necessity of a human subject experiencing ‘the other’ and the encounter itself.

Although the issue of robot rights remains essentially unresolved in the book (which Gunkel openly acknowledges), the problematisation of the matter is itself a valuable and meaningful contribution, opening the otherwise ‘unthinkable’ proposition up for serious consideration. Moreover, being accessibly written, the book is likely to appeal far beyond an academic readership. The caveat is, of course, that any closure to this debate will have to be worked out independently. Hence, a prospective reader has to be adventurous enough to engage in some intellectual DIY.

Please read our comments policy before commenting.

Note: This article gives the views of the author, and not the position of USAPP – American Politics and Policy, nor of the London School of Economics.

Shortened URL for this post: http://bit.ly/2taDXvF


About the reviewer

Ignas Kalpokas – LCC International University
Ignas Kalpokas is currently assistant professor at LCC International University and lecturer at Vytautas Magnus University (Lithuania). He received his PhD from the University of Nottingham. Ignas’s research and teaching covers the areas of international relations and international political theory, primarily with respect to sovereignty and globalisation of norms, identity and formation of political communities, the political use of social media, the political impact of digital innovations and information warfare. He is the author of Creativity and Limitation in Political Communities: Spinoza, Schmitt and Ordering (Routledge, 2018). Read more by Ignas Kalpokas.

This work by LSE USAPP blog is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.