There is much fear that artificial intelligence will steal human jobs. Many human capabilities are being delegated to machines, and abilities such as memory, attention, accuracy and manual precision could be undermined. But humans have a unique type of intelligence that AI can’t match. Chiara Succi describes three levels of (irreplaceable) human contribution to the workplace: intelligent hand, intelligent mind and intelligent heart.
The heated debate about artificial intelligence (AI) and its “extraordinary potential” is, surprisingly, leading to a rediscovery of humans’ contribution to work. If robots bring speed, precision and strength, we still need imagination, intuition and affection. It is important to debunk the belief that machines will steal human jobs: that holds only where a job consists of repeating one single task. We will not see the elimination of our occupations, but artificial intelligence applications will certainly reshape them deeply.
Weakening human capabilities
Automation will change workers’ behaviours and the risk is that some abilities could be undermined, such as memory, attention, accuracy, manual precision, and many others. How can we leverage new technologies without diminishing our competencies? Which human assets are in danger and in need of protection?
It is not easy to describe the magnificence of human intelligence (HI) and the literature has not yet presented an exhaustive taxonomy of relevant human capabilities in the digital age. As a starting point, I propose a description structured over three levels:
The intelligent hand
Some manual skills cannot be replaced by machines without losing versatility and quality. In every organisation we can identify activities that require a particular know-how and a great amount of knowledge transferred over time through on-the-job training. Many lines of manual work require the ability to solve complex, changing problems that are always slightly different because the raw material, the temperature or the context differs. It would make no sense to program a robot to solve a problem that occurs differently almost every time.
The world of luxury, in particular, strives for authenticity and values artisanal processes, which make some handmade goods unique. Contemporary research finds that products, services and organisations perceived as authentic carry value that consumers are willing to pay for.
As a side effect, if we push to replace manual work with AI, we might lose parts of the work experience such as technical mastery, dedication to the job, experimentation, apprenticeship, concentration and care for excellence, with implications for the workplace that are difficult to assess.
The intelligent mind
Matthew Crawford warns us of the risk of “learned helplessness” as an effect of automation and digitalisation. People display a lower level of individual agency and a higher level of dissatisfaction. Office jobs often don’t allow us to experience the direct effects of our actions and to see tangible results.
We have a greater amount of data at our disposal, but a weaker capacity to read it, and we certainly read it more slowly than an algorithm does. It appears natural to delegate complex tasks and difficult decisions to machines, the danger being that we lose critical thinking. This might weaken our exercise of judgment, and we may face issues more superficially and impulsively, without going in depth or analysing root causes.
This might represent a problem, considering that cognitive skills such as problem-solving, creativity, innovativeness and learning to learn are becoming increasingly important in the workplace. Managers take decisions largely on the basis of intuition, not on a mere analysis of data.
AI doesn’t have the sensitivity to manage unexpected events or crises. A self-driving car cannot enter a complex roundabout, because it cannot identify a zero-risk condition. Human drivers, instead, find the courage to merge, adapting to the flow and moderating their speed to avoid accidents, in a way a robot will (probably) never manage.
We must acquire a higher awareness of the potential of our mind and its ability to find mental shortcuts, the so-called heuristics, which allow us to draw on daily experience to survive and overcome obstacles.
The intelligent heart
Communication among human beings is becoming more complex and more frequent; it can be supported, facilitated and “augmented” by technologies, but never fully substituted by a remote system. Mysteriously, the hormone oxytocin, which is essential for the development of trust and emotional engagement, is produced only when we interact in person.
Emotional intelligence and even a sense of humour can be extremely powerful, especially in the workplace. Only real interactions and laughing together release ‘feel-good’ hormones, such as endorphins, dopamine and oxytocin.
Finally, the moral question, distinguishing between good and bad, is central when we discuss artificial intelligence. Computers alone are not able to work ethically and, consequently, discrimination and injustice might be automated.
Augmented by machines
If, as Steve Jobs said, “computers are like a bicycle for the mind”, AI represents its racing car. There are several areas in which AI can effectively support human intelligence, such as opening new paths for exploration, enhancing the creative process, saving time and focusing attention on relevant tasks.
In organisations, AI can facilitate the imagination process in the development of new products or new ideas. The exploratory phase typically requires access to information, multiple experiences, time, and space to take place. AI can accelerate this phase, removing several roadblocks and quickly testing new directions.
AI and technological developments have begun to transform and aid creative and complex processes, as well as challenge what we believe to be advanced reasoning ability. After losing to a computer program for the first time, the Go world champion declared “I’m learning new moves and I’m becoming a better player thanks to AlphaGo…”
The complexity of the context is forcing companies to redesign their organisational model and to clarify the contribution of employees, in particular of managers. This can represent an opportunity to develop human potential and to increase people’s effectiveness and satisfaction.
It is necessary to acquire a higher awareness of human capabilities and personal talents, and organisations and educational institutions play a crucial role in developing it. They have to create environments in which people can express themselves, make mistakes, restart, create and innovate, and not just execute tasks or process information (like robots).
- This blog post is based on Human intelligence vs. Artificial Intelligence: Developing human capabilities in the era of automation and digitalization, part of ESCP Business School’s New technologies and the future of individuals, organisations, and society impact paper series.
- The post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.
I am skeptical of the article’s optimism that enough jobs will survive artificial intelligence. Swift advances in robotics, deep learning and emotion simulation will quickly bridge gaps once considered uniquely human and, in my view, replace many more intelligent hands, minds and hearts. Mass human contributions will be increasingly replaceable, and only a handful of geniuses and artisans will survive.
The only frontiers to human existence are meaning, a sense of purpose and free will. “What is the purpose of it all?” I invoke the metaphors of Sisyphus eternally rolling his stone uphill and of a mouse trapped in its wheel, tirelessly running. We are in an existential quandary, and with AI and loose ethics our future roles are less than certain.
I believe the argument the author makes in this blog post ignores the empirical evidence that AI systems have delivered over the past year.
I am not convinced that AI will be unable to imitate these three purportedly “human” traits, at least as well as, or better than, the human average. There is large variation in how strong different people are in these three areas; many, for example, are decidedly awkward and rather unsuccessful in their attempts to adjust their tone to the specific interaction and relationship at hand. On the other hand, I believe the past year has shown that, if anything, AI is frighteningly good at internally mirroring even very complex systems such as our internal compass for whom to trust and how to interact, that is, our hormonal systems. In fact, I believe it is plausible that this is exactly what would happen if a sufficiently powerful AI were trained by tagging along with me in the office for several months, observing all my interactions and decisions.
Now, one may be inclined to reply that all this may be true, but that it is the very combination of individuals bringing these traits to the table in differently strong and faceted versions that makes humans superior. But we already know that multi-agent AI systems develop complex interactions, collaborations and storylines in which the different agents take on different characteristics.
The necessity of (human) presence may be the strongest argument in this article, since in a steel-manned version of it we can accept that this may be something an AI is categorically unable to imitate. To do that, we have to set aside the deceptive effects that human-like robots and “emotional companion” products have had on participants in experiments. But while the argument may be sound, it is not a necessary condition for saving jobs: if all co-workers in the office are robots/AIs equipped with an internal mirror of trust-guiding hormonal systems, none of them will really need the presence of a human around. They may joke around to figure out their standing among each other, and potentially this could let them identify “odd” colleagues whom they don’t trust, without needing any human presence in the room.
Finally, the conclusion reveals a fundamental misunderstanding that helps explain why the article as a whole seems mistaken: the success of today’s robots, the fundamental breakthrough behind the capabilities we see today, was precisely to turn them from machines that execute tasks into machines that “create environments, can express themselves, make mistakes, restart, create and [this remains for philosophers to argue], innovate.”