Artificial emotional intelligence promises to revolutionise the way companies do business. With it, robots will be able to understand, react to and mimic human emotions. Ricardo Viana Vargas and André Barcaui discuss how the technology will affect healthcare, the workplace and personal relationships.
Imagine a tech-savvy corporation that unveils an artificial emotional intelligence (AEI) system designed to revolutionise the hiring process. During interviews, candidates’ emotional and cognitive reactions are deftly interpreted by this system, providing a deeper understanding of their actual abilities and fit with the company’s culture. The technology navigates the nuanced aspects of human emotions to improve objectivity and fairness, overcoming conventional biases and opening up new opportunities for talent acquisition.
This narrative, inspired by real-world applications of affective computing, demonstrates the blending of technology and human empathy and paves the way for a time when machines will be able to comprehend not only our spoken language but also the emotions we convey.
The concept of AEI, also referred to as emotion AI or affective computing, was pioneered by researcher Rosalind Picard, who coined the term in a 1995 MIT Media Lab technical report and developed it in her 1997 book “Affective Computing.” Her work laid the groundwork for machines capable of understanding human emotions by interpreting subtle cues in speech, facial expressions, and other physiological signals. Today, this technology allows for a more natural and intuitive interaction between humans and machines, mirroring the way we communicate with each other.
This cutting-edge field changes how humans view and engage with machines. AEI envisions a time when robots will be able to perceive, comprehend, react to and mimic human emotions. Investment in affective computing applications is projected to exceed US$1.2 trillion by 2033.
The development of “emotionally aware” AI can be observed in research facilities such as MIT’s Media Lab, which merges technology with the arts in pursuit of impactful future technologies, and in businesses such as Kore.ai, which offers AI solutions that automate and enhance business interactions through conversational AI.
This has important ramifications for a variety of contexts, including healthcare, the workplace and personal relationships.
Healthcare
AEI can monitor how patients respond emotionally to treatment regimens and offer insights that can be used to tailor therapy to each patient’s needs. This is particularly important in mental health care, where outcomes are strongly influenced by subtle emotional cues. Detecting signals that point to the onset of conditions such as depression or anxiety enables early intervention and improves the odds of effective treatment.
More empathetic relationships between patients and healthcare professionals increase patient satisfaction and participation. Individualised emotional support and monitoring assist patients outside clinical settings, such as in home care, improving the chances that a patient will adhere to a treatment plan. A further benefit would be relaxation techniques customised to each person’s emotional state and preferences, in support of stress reduction programs.
Education
AEI holds great promise for revolutionising the field of education as well, enabling the use of tailored learning strategies in classroom environments. Students’ emotional responses can be monitored in real time and instructional materials customised to each student’s unique learning style and emotional state. Students who have difficulty with traditional methods of instruction can enjoy a more flexible and responsive experience.
Learning environments and tools can adjust the level of difficulty and offer support or extra materials when students feel discouraged or disengaged. AEI also opens new opportunities for distance learning, making it more interactive and emotionally engaging. Online learning environments that recognise and respond to a student’s emotional state bridge the gap between virtual and in-person classes and can significantly improve immersion and academic results.
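To make this concrete, here is a minimal, purely illustrative Python sketch of the kind of rule an affective tutoring tool might apply once an upstream classifier has labelled a student’s emotional state. The emotion labels, class and function names are assumptions for illustration, not any vendor’s API; real systems would learn such policies from data rather than hard-code them.

```python
from dataclasses import dataclass

# Hypothetical emotion labels an affective tutoring system might receive
# from an upstream classifier (names are illustrative, not a real API).
ENGAGED, FRUSTRATED, BORED = "engaged", "frustrated", "bored"

@dataclass
class LessonState:
    difficulty: int  # 1 (easiest) .. 5 (hardest)

def adapt(lesson: LessonState, emotion: str) -> str:
    """Toy adaptation rule: ease off when frustrated, stretch when bored."""
    if emotion == FRUSTRATED and lesson.difficulty > 1:
        lesson.difficulty -= 1
        return "Lowered difficulty and offered a worked example."
    if emotion == BORED and lesson.difficulty < 5:
        lesson.difficulty += 1
        return "Raised difficulty and unlocked a bonus challenge."
    return "Kept difficulty unchanged."

lesson = LessonState(difficulty=3)
print(adapt(lesson, FRUSTRATED))  # -> Lowered difficulty and offered a worked example.
```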
The growing interest in incorporating affective computing into virtual reality (VR) education is leading to substantial investment, with the potential to develop more diverse, interesting, and productive environments for a range of learning platforms. The VR in education market is expected to grow to $28.70 billion by 2030 from $4.40 billion in 2023, reflecting a compound annual growth rate of 30.7 per cent.
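For readers who want to check the arithmetic, the quoted growth rate follows directly from the start and end values over the seven years from 2023 to 2030:

\[
\text{CAGR} = \left(\frac{28.70}{4.40}\right)^{1/7} - 1 \approx 0.307,
\]

that is, roughly 30.7 per cent a year.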
Business applications
With AEI, businesses can enhance client relationships and streamline operations. Evaluating speech tones, facial expressions, and even textual communication for emotional content allows them to answer customers’ questions in a way that shows empathy and understanding, potentially increasing satisfaction and loyalty.
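As an illustration of the text side of this, the sketch below scores customer messages for emotional tone with the open-source Hugging Face transformers pipeline. It uses the library’s default general-purpose sentiment model as a stand-in, not a product-grade emotion-AI system, and assumes transformers and a backend such as PyTorch are installed.

```python
# Minimal sketch: score customer messages for emotional tone using the
# Hugging Face transformers pipeline API. The default sentiment model is
# a general-purpose stand-in, not a commercial emotion-AI product.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

messages = [
    "I've been waiting three weeks and nobody has answered my ticket.",
    "Thanks so much, the replacement arrived and works perfectly!",
]

for msg in messages:
    result = classifier(msg)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
```

A support tool built on this idea might route strongly negative messages to a human agent first, which is where the satisfaction and loyalty gains described above would come from.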
Beyond customer support, this kind of customisation can have an impact on marketing plans and product development to better suit the requirements and feelings of customers. AEI can examine emotional feedback from a variety of platforms and help companies spot unmet customer requirements or new trends, spurring innovation and competitiveness.
Understanding the emotional dynamics of the workforce and market can enhance strategic decision-making. For example, tracking employee engagement and morale can allow businesses to address possible problems before they become more serious, resulting in a more positive work environment and possibly contributing to higher productivity.
Understanding consumer behaviour allows companies to anticipate market changes and modify their approach accordingly. An enhanced ability to analyse social media and internet content for emotional sentiment also makes companies better able to manage their reputation and handle public relations.
Virtual relationships
Perhaps the most intriguing area of AEI application concerns human affect. There may come a time when artificial intelligence plays a more active role in our emotional lives than mere assistance. As a sophisticated interface, it may be able to reflect our emotional states, changing the definition of friendship and emotional connection. Affective computing goes beyond machines comprehending human emotions; it makes it harder to distinguish between artificial and sentient entities.
Technological progress has given us digital assistants that not only obey our directions but also learn who we are. These intelligent algorithms anticipate our needs, adjust to our tastes and make recommendations based on our likes and dislikes, all without expressing the slightest trace of judgment. We may eventually reach a stage at which these devices start to replace our friends, confidants and companions in our day-to-day activities.
To put this in perspective, the SexTech market, which includes a broad range of adult technologies such as sex robots, was valued at $31.4 billion in 2022 and is projected to grow quickly. Coupled with advances in robotics and skin simulation, this suggests that synthetic companionship could become more commonplace, indicating an increasingly significant role for AI in the intimacy sector.
Imagine all of that combined with a more sophisticated generative AI, such as Anthropic’s Claude 3 Opus, Google’s Gemini Ultra or OpenAI’s GPT-4. What effects might that have? Is it possible to fall in love with a machine? It could seem like a gloomy idea to some people, but others would argue it is unfair to deny technological comfort and company to lonely people or those with mental or physical difficulties. These concerns are at the centre of a larger conversation about how love and connection are changing at a time when artificial intelligence is starting to encroach on the territory of human emotions.
In his book “Love and Sex with Robots,” David Levy argues that by the middle of the century it will be possible to create robots so advanced and attuned to human emotion that they can engage on an emotional level almost identical to the way humans interact with one another. This vision also poses a philosophical challenge, urging a re-examination of the essence of love and intimacy in the foreseeable future.
We have a natural tendency to anthropomorphise machines as they become more adept at simulating human emotions. This psychological inclination makes it harder to distinguish between a tool and a companion, raising moral questions about manipulation, dependency and the risk of emotional harm.
This ongoing change is enabled by the integration of large-scale emotional reaction databases, machine learning algorithms and sophisticated sensors. AEI-equipped devices may identify our emotional needs and moods by interpreting our facial expressions, speech tones, and even bodily signals. This opens the door to more intuitive and sympathetic relationships.
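One common way to combine such channels is “late fusion”: score each signal separately, then merge the per-channel estimates. The toy Python sketch below uses a weighted average with made-up weights and scores purely for illustration; real systems learn both the per-channel classifiers and the fusion from data.

```python
# Toy "late fusion" sketch: combine per-channel emotion probabilities
# (face, voice, text) into one estimate. Weights and scores are invented
# for illustration; they are not canonical values from any real system.
from typing import Dict

WEIGHTS = {"face": 0.5, "voice": 0.3, "text": 0.2}  # assumed weights

def fuse(channel_scores: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Weighted average of per-channel probabilities for each emotion."""
    fused: Dict[str, float] = {}
    for channel, scores in channel_scores.items():
        w = WEIGHTS[channel]
        for emotion, p in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    return fused

observation = {
    "face":  {"happy": 0.7, "sad": 0.1, "neutral": 0.2},
    "voice": {"happy": 0.4, "sad": 0.3, "neutral": 0.3},
    "text":  {"happy": 0.2, "sad": 0.5, "neutral": 0.3},
}
fused = fuse(observation)
print(max(fused, key=fused.get), fused)  # -> happy, with fused scores
```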
The long-term effects of AI relationships on human well-being and societal norms are a subject of ongoing research. The question of emotional recognition is a compelling one since true intelligence may seem inextricable from the ability to understand and process emotions. In this vein, the capacity of a machine to recognise emotions stands as a benchmark for its claim to genuine intelligence, given that our learning and adaptation processes are affected by emotions.
These are fundamental issues in the discourse surrounding so-called artificial general intelligence (AGI) and consciousness, a topic that even philosophers find hard to pin down, not least because emotions can influence our belief systems. As we consider how emotion might be woven into AI, we grapple with the idea of sentient machines that improve our lives in ways that go beyond machine learning. Ethical guidelines are crucial to ensure that while AEI improves human well-being, it does not compromise individual autonomy or emotional integrity.
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
The article is excellent and thought-provoking. It really makes you think!
What an article! An elevated vision; innovative and inspiring content. A trigger for deep reflection. Nothing surprising, given that it is an article written in partnership by Ricardo and André, both of great intelligence and elevated character.
My late father used to say: “Two heads, my son, think better than one... as long as they don’t ‘knock’ against each other.” (laughs). These two don’t ‘knock heads’, and thinking together... it can only result in something very good!
As for the inevitable and promising development of “AEI and AGI machines”, it reminds me of a phrase from Albert Einstein:
“We must take care not to make our INTELLIGENCE our god. It is indeed powerful, but it has no PERSONALITY. It cannot command; it can only serve!”
And I think that:
– Any decision or action is not the product of intelligence and data alone. It is, ultimately, the product of a personality.
– The more developed ARTIFICIAL INTELLIGENCE and its “machines” become, the more elevated I hope the PERSONALITY of the humans involved in their creation and use will be.
– So that they may truly SERVE the well-being and prosperity of humanity... “Elevate this world”.