
Audrey Borowski

January 8th, 2016

Book Review: The Glass Cage: Where Automation is Taking Us by Nicholas Carr

Estimated reading time: 10 minutes

In The Glass Cage: Where Automation is Taking Us, Nicholas Carr expands upon his prior examination of the internet’s impact upon the workings of the human mind by turning his attention to the implications of wider processes of automation. As Carr’s nuanced approach communicates caution without positing either a fully utopian or dystopian view of technological advances, Audrey Borowski praises the book for its reflections on how automation is engendering a re-evaluation of ‘the human’ itself.

The Glass Cage: Where Automation is Taking Us. Nicholas Carr. Bodley Head. 2014.

In his previous book, The Shallows: How the Internet is Shaping the Way We Think, Read and Remember (2010), Nicholas Carr disputed the alleged neutrality of the internet as a medium. Like the map and the clock before it, the internet is imbued with an ethos that has far-reaching effects, actively reshaping the human mind. Bombardment by data, stimuli, instructions, suggestions, pokes and auditory cues rewires our physical brains to crave constant distraction and undermines our ability to sustain concentration or think deeply, both of which are necessary for the formation of long-term memory. Our thought patterns are realigned with the fragmented and fleeting streams of the Net. Far from encouraging divergent thinking, the internet flattens our cognitive horizons and reinforces pre-existing biases.

In The Glass Cage: Where Automation is Taking Us, Carr generalises his argument to intellectual technologies. Automation is no longer confined to the manual realm; in the past decade, its tentacles have spread to the intellectual domain, taking over every aspect of our lives, from computers to check-out systems, from apps to watches, from cars to planes. ‘Intellectual automation’ has come to exert an unprecedented hold over our lives, often profoundly altering them in unanticipated ways. Software no longer merely supplements human thought and judgment but, in an increasing number of cases, supplants them altogether. Carr delivers a more nuanced account, one which cuts through the prevailing binary narratives of utopia and dystopia to which we are generally treated. In both cases, conjuring up fatalistic visions of earthly paradises or robot uprisings does more to anaesthetise thought than to maintain a critical stance. Daring to challenge this apparent godsend is often tantamount to blasphemy.

While the benefits of intellectual automation are considerable, they come at a cost. Carr is no luddite: instead, he sets out to temper our unquestioning enthusiasm for intellectual technologies, and to measure the sacrifice incurred by their increasingly ubiquitous deployment. Primarily, we have fallen victim to the ‘substitution myth’: we labour under the illusion that by offloading routine chores onto software and relieving our minds of effort, we free up mental space for higher pursuits.

This rests on a false analogy between human and artificial minds. Computation has surreptitiously crept into our lives as the new benchmark against which human minds are gauged. A new form of digital paternalism has set in whereby humans are deemed no longer able to think or behave for themselves and should instead defer to the wisdom of algorithms. Against the speed, efficiency and productivity of computation, the human mind is cast as inherently and irretrievably flawed, a liability to be replaced as far as possible by its artificial counterpart. As Carr sarcastically quipped a few months ago: ‘The best way to get rid of human error is to get rid of humans.’

At the practical level, intellectual automation raises a host of new questions, including automation bias and its corollary, automation complacency. Our belief in the infallibility of algorithms leads us to entrust tasks to software and to abdicate judgment. In planes, for instance, the flight deck has become one huge flying computer interface. Flying a plane today consists of operating computers, checking screens and entering data, with the pilot typically holding the controls for an average of three minutes. While the automation of flying has made air travel safer, it has also resulted in a loss of cognitive control and of situational awareness. This disconnect can, in some cases, have dramatic consequences: when an error occurs or the software fails to work as intended, manual control is abruptly thrust back into the hands of an overwhelmed pilot.

Overreliance on software dulls our minds and reflexes, erodes our expertise and results in ‘de-skilling’, which in turn leads to more human errors and a further restriction of human responsibilities. Sharp tools make for dull minds. As Google executive Alan Eagle put it: ‘At Google and all these places, we make technology as brain-dead easy to use as possible.’ Literally.

Our role is increasingly whittled down to following instructions: ‘Rather than opening new frontiers of thought and action to its human collaborators, software narrows our focus. We trade subtle, specialized talents for more routine, less distinctive ones’ (67). By opting for the path of least resistance, we are actually curtailing our agency. Far from liberating us, software cages our minds within narrow parameters. Even in the creative professions, such as architecture, computer-aided design has become the norm. In an ironic twist, software not only disempowers us, but also increasingly assumes responsibility for moral choices: to kill or not to kill the spider in the case of robotic vacuum cleaners, the human crossing the street or the soldier in the case of armed conflict.

Letting the software ‘do our thinking for us’ hampers our ability – or will – to learn: ‘The generation effect requires precisely the kind of struggle that automation seeks to alleviate’ (75). In a results-oriented world, we dismiss processes as time- and energy-consuming. And yet, ‘learning requires inefficiency’ (175), slowness and contemplation. More than that, by building an increasingly ‘friction-less’ world, we are willing to forsake the constitution of our personal identities.

Software and computing have become so deeply embedded in the world as to constitute ‘the very stuff out of which man builds his world’, in the words of computer scientist and father of the Eliza programme, Joseph Weizenbaum. Every Google Glass, smartphone and iPad acts as an additional filter that further ‘un-grounds us’ from it, to paraphrase the French philosopher Maurice Merleau-Ponty, and brings us one step closer to the complete automation of the world itself.

Taken too far, automation can promote disinvestment and passivity, turning us into ‘shallow thinkers’ who merely skim the surface of the world. Traditionally, tools were ‘instruments of experience rather than just means of production’: ‘The value of a well-made and well-used tool lies not only in what it produces for us but what it produces in us’ (217). Our horizons of experience are shrinking to our screens. We are advancing blindfolded through the world, relying ever more on devices whose workings operate at a completely different level from ours, away from our scrutiny. Carr emphasises the need to embrace a human-centred automation rather than embark on a spiral of ever more automation. One way of doing this would be to design systems which alternate control between humans and software.

At the heart of his book and his excellent blog Rough Type lies Carr’s concern for the human itself. Carr echoes the German thinker Günther Anders in his comments on the capacity of the new version of immortality promised by ‘industrial re-incarnation’ to elicit feelings of shame and inferiority in us. We have embraced a perverse logic whereby the human component is deemed at best peripheral, and at worst an obstacle to the realisation of a technological teleology. By trading our unique – and irreplicable – qualities for the cold logic of soulless algorithms, we are denigrating our very humanity:

‘… from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge. It’s this supple quality of the mind, spanning conscious and unconscious cognition, reason and inspiration, that allows human beings to think conceptually, critically, metaphorically, speculatively, wittily – to take leaps of logic and imagination’ (121).


Note: This review gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics. 

Image Credit: ProtoplasmaKid (CC BY SA 3.0).


About the author


Audrey Borowski

Audrey Borowski is a doctoral student at the University of Oxford and the author of the upcoming article, ‘Leibnizian Musings on Intellectual Automation’.

Posted In: Media Studies | Science and Tech

