In Re-Engineering Humanity, Brett Frischmann and Evan Selinger explore how the rise of new technologies and datafication grounded in machinic rationality risks conditioning humans to become more machine-like in turn. In considering how the value of the human can be protected from the consequences of data creep, the book will prompt readers to look differently at otherwise taken-for-granted technology practices, writes Ignas Kalpokas.
Re-Engineering Humanity. Brett Frischmann and Evan Selinger. Cambridge University Press. 2019.
It is by now uncontroversial to observe that we constantly find ourselves in the middle of a data loop whereby large amounts of data about us are collected and analysed, and the results of that analysis are then used to shape our digital environments, product offerings and so forth. Meanwhile, our reactions to such customisations are themselves turned into even more data that is collected and analysed… and so on. Certainly, this has all sorts of implications for privacy, the freedom to choose, the ability of institutions to regulate and many other domains. An issue that has thus far attracted less attention than it merits is the extent to which such datafication practices amount to the conditioning of human behaviour – techno-social engineering, as Brett Frischmann and Evan Selinger call it. And it is this gap that Re-Engineering Humanity aims to fill.
Although Frischmann and Selinger’s claims of a slippery slope towards social engineering and data creep (the idea that data collection, and the techno-social conditioning based on it, will only increase as we are desensitised with every step we take) may at first glance seem unabashedly alarmist, the authors in fact do a very good job of substantiating their claims with examples from spheres as diverse as the home, education and employment. In making their case, the authors pursue a two-pronged argument: first, that because machinic rationality becomes the benchmark against which human behaviour is measured, the necessary corollary is the need to make humans more machine-like – hence, techno-social engineering. Second, that although humans usually end up agreeing to such treatment (at face value, at least, with users often unthinkingly consenting by just pressing a button), this is often due to a certain lack of awareness and to traditional contracts no longer serving their purpose. Hence, the authors present a strong case for pausing and thinking about how the value of the human person is best protected or, perhaps, restored. Unfortunately, despite calling for a ‘new humanism’ towards the end of the book, the authors leave this idea underdeveloped.
The book opens with a discussion of the ways in which data collection, analysis and feedback are being normalised: for example, by collecting data about exercise patterns and physical activity more generally in order to gather insights for making the population at large more physically active. Several characteristics make this an easy sell, not least the ostensibly noble aim and the ‘cool factor’ of the gadgets. Nevertheless, there is a flip side (or rather two of them): first, the loss of control over one’s data and, second, the loss of control over one’s life.
While the first point is relatively straightforward (once we start down the slippery path of datafication, there is no turning back), the second necessitates further explication. It is, essentially, an open challenge to behavioural economists and their emphasis on devising strategies for nudging people towards predefined choices. However, while the latter regard nudging as merely a form of benevolent libertarian paternalism (Richard Thaler and Cass Sunstein’s bestselling book Nudge being a notable example), Frischmann and Selinger see in it the kernel of techno-social engineering: that is, humans losing their autonomy and being pushed towards becoming more machine-like. In both of these criticisms, the authors claim – with strong persuasive power – that the mystery and unpredictability at the heart of being human are being eroded.
The chapters in the second part of the book focus particularly on tools and the peculiarities of their use. One such peculiarity is the aforementioned issue of contracts: we become less informed about our use of tools (and about our tools’ use of us) and thus increasingly less capable of truly giving consent; moreover, as these tools grow in importance to our everyday lives, this consent becomes less and less freely given. A related argument is that, as technology becomes an extension not only of our bodies but also of our minds, both our behaviour and our patterns of thought are unavoidably affected. And as our environments become increasingly smart and interconnected, techno-social engineering is seen to follow suit, becoming more all-encompassing than any form of behaviour regulation before it and thereby giving the largest technology companies unprecedented power. In this context, the blurring of the line between the human and the machine appears to achieve near-complete realisation with the optimisation of everything from walking patterns to personal relationships.
Finally, the third part asks what is specific about being human and how robust this kernel is. Here the authors flip the famous Turing test, designed to check whether computers have become human-like, into one geared towards demonstrating whether humans have become computer-like. In the end, Frischmann and Selinger identify one core characteristic that distinguishes humans from machines: free will. Unsurprisingly, this is also the attribute most challenged by datafication: if humans become fundamentally knowable and mouldable (the twin promise of data analysis and the nudging strategies based on it), then humans become incapable of thinking and acting independently and are therefore rendered indistinguishable from computers as deterministic machines. It is here that the authors call for a new humanism, which they frame, very abstractly, in terms of a restoration of human dignity and autonomy. It would be interesting to see them develop this idea further in future work.
Overall, this is a book that makes the reader reconsider otherwise taken-for-granted assumptions and everyday technological practices that we often perform without thinking. Indeed, this performance without thought, such as agreeing to terms and conditions, is one of the explicit targets of the authors’ criticism. Regardless of whether or not you consider Re-Engineering Humanity overly alarmist (either case is defensible, although I am swayed towards the latter), it is likely that you will never look at your fitness tracker or the latest seemingly innocuous app in the same way after reading it. And that is, perhaps, the most important point – Re-Engineering Humanity serves as a re-sensitising device that makes technology creep seem less natural and self-explanatory (and less benevolent), thus enabling us to make better-informed decisions. And even if it might be tempting to debate whether new humanism is really the best way forward – perhaps strategies inspired by posthumanist thought (see, for example, the work of Rosi Braidotti, Stefan Herbrechter, David Roden or Peter Mahon) would be more fitting – this should not distract attention from the timeliness and importance of Frischmann and Selinger’s book.
- This review originally appeared at the LSE Review of Books.
About the reviewer
Ignas Kalpokas – Vytautas Magnus University
Ignas Kalpokas is currently assistant professor at LCC International University and lecturer at Vytautas Magnus University (Lithuania). He received his PhD from the University of Nottingham. Ignas’s research and teaching cover international relations and international political theory, primarily with respect to sovereignty and the globalisation of norms, identity and the formation of political communities, the political use of social media, the political impact of digital innovations and information warfare. He is the author of Creativity and Limitation in Political Communities: Spinoza, Schmitt and Ordering (Routledge, 2018).