The growing automation of professions like therapist, teacher and caregiver threatens the survival of interpersonal work. We automate without knowing exactly what we are doing or what its impact will be, and the frontier of automation is pushed ever further. Allison Pugh writes that new achievements in biology, linguistics and other fields deserve acclaim, but that we urgently need to develop the capacity to discern between the useful and the harmful.

With the flood of artificial intelligence into the market and the news, the transformations promised by the technology can feel inevitable. Automation, however, is not just technological but social. Throughout history, people have rejected some innovations as going too far, as not worth their benefits, or as violating a core value. One controversial area today is the use of AI in humane interpersonal work such as therapy, teaching or primary care, and I wrote my latest book in part to better understand these debates.
This is work that relies on relationship, involving the worker “seeing” the other, and the other feeling “seen.” It has profound effects, as we know from randomised controlled trials in medicine and other research. As one patient told a hospital chaplain whom I was shadowing: “There is nothing like being in the worst moment of your life and being met with comfort by someone you don’t even know, when you feel like someone understands you.”
I call this seeing work “connective labour,” to capture how deeply interactive it is, not least because for it to “land” successfully, the other must assent – to some degree – to the vision being reflected their way. Unlike “emotional intelligence” (EQ) and a host of similar concepts, connective labour reminds us that these moments are a collaborative, mutual achievement between human beings.
That mutuality is also crucial when we think about the automation of connective labour for the individual patient, client or student who might be seen. Technologists are trying to mechanise recognition, with ventures such as “personalised” medicine or education tailored to an individual’s biological or learning needs. To be sure, if the only thing that mattered here were the individual – whether or not someone received customised treatment or lessons – then, if people can feel seen by a machine, it is not clear that we need a human to do it at all.
But what if humans actually create something together, in these moments of connection? In five years of research, I watched many moments of connection and heard about many more, in which being seen by another led to a sense of human dignity on both sides of the interaction, renewed purpose, even epiphanies of understanding.
Pamela Murray’s story was heartrending, but her account of its profound impact was typical. A middle-school teacher in California, Pamela had had selective mutism as a child, until a teacher took the time to find out that her reluctance to speak was not an indicator of her actual capabilities but instead a response to her family moving incessantly. “She just listened. Just sat and just listened, and that made a world of difference,” Pamela remembered. “I could have been tested and put into special ed; instead I was tested and put into gifted and talented.” That memory inspires her today. “I want to be that teacher for my kids [in my school],” she said. “I want to be the teacher that I wanted, and that I needed, and that I finally got.”
There is a shifting boundary that demarcates human work as more or less available for automation – what we might call an automation frontier – and people have been contesting that line since at least the 16th century, when Queen Elizabeth I reportedly rejected the inventor William Lee’s patent application for a knitting machine, the first attempt to mechanise textile production. “Thou aimest high, Master Lee,” she told him in 1589. “Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars.”
Notwithstanding the Queen’s concern, for years people assumed that manufacturing was particularly susceptible to automation while white-collar work was safe, until the spread of computers into routine cognitive work in the 1980s moved the automation frontier again. Today, some argue that while AI might be able to handle complex analytic tasks such as deciphering X-rays or even writing laws, connective labour such as teaching or therapy is the “last human job”. Yet technologists are busily chipping away at the frontier anyway: socioemotional AI has been burgeoning for the past decade, with engineers designing everything from AI couples’ counsellors and AI palliative care consultants to virtual preschools and digital nurses.
Of course, many of these forays remain in the lab, and critics caution against believing too much of the hype about AI’s capacity to “disrupt” caregiving and other emotional jobs. But since ChatGPT burst onto the scene, large language models (LLMs) have taken the mechanisation of such work to a new level. The designers of one chatbot interviewer, for example, claimed that it demonstrated “cognitive empathy,” using follow-up questions to plumb an interviewee’s perspective “as closely as they understand it themselves.” Chatbots have been designed to teach, provide therapy and give medical advice, in each case allegedly better than humans. In the UK, the government recently announced a £14 billion investment scheme to bring AI into the National Health Service and other civil service work, seeking to modernise without increasing labour costs.
We are embarking on this course, however, without really knowing what connective labour is, why it is valuable, how practitioners do it, and what the impact is of our attempts to systematise it with manuals and data metrics, not to mention AI. In my research, involving hundreds of hours of interviews and observations with practitioners and engineers as well as clients, patients and students, I saw how the systematisation of this work helped bring both workers and clients a little closer to automatons themselves.
The presence of machines in feeling work forced practitioners to prove to sceptical consumers that they were human, for example, but it also demanded that clients and patients translate their chaotic human feelings into ones more legible to the software. When we introduce AI, then, it is not just the workers who are affected; the consent we are giving to technologists for the AI therapist or teacher is the consent of the automated.
AI certainly has its uses, and the flood of new achievements in biology, linguistics, geology and the like deserves acclaim. But we urgently need to develop the capacity to discern the useful, even the laudable, from the dangerous or the harmful. I have called for a “connection criterion” to evaluate AI in light of the need to prioritise human bonds: does a given use or application of AI impede or enhance human relationship? The terms of our consent – and our social health – demand it.
- This blog post is based on the author’s book The Last Human Job: The Work of Connecting in a Disconnected World.