Heartless Genius: The Peril of Emotionally Blind AI

“Le sentiment est à l’origine de tout.” (“Feeling is at the origin of everything.”)
Claude Bernard

I would like to address a thought-provoking question that reflects a fundamental aspect of humanity: Is it possible for an AI system to achieve superintelligence without integrating emotion into its reasoning process?

Introduction

Today’s AI systems are primarily examples of narrow or weak AI, created to perform particular tasks inside limited domains. This category includes large language models such as GPT-4, image generation models like DALL-E, and specialized systems for tasks such as chess and Go. Narrow AI has demonstrated significant success in particular domains, frequently exceeding human capabilities. Artificial General Intelligence (AGI), by contrast, denotes AI systems with human-like capabilities for general problem-solving across multiple domains. AGI remains a long-term goal of AI research, with no consensus on the timeline or viability of its accomplishment. Researchers are divided on whether scaling existing deep learning methodologies can lead to AGI: some believe this path could be viable, whereas others assert that entirely new architectures or methodologies are essential for progress.

The Limitations of Pure Logic

Emotion acts as a fundamental mechanism for guiding the vital orientation of all living organisms. This goes beyond mere responses to stimuli: emotion is an essential element of the processes by which organisms assess and interact with their environment. Emotions are swift, comprehensive assessments that enable an organism to promptly evaluate the importance of stimuli in relation to its survival and overall well-being. All organisms, regardless of their complexity, navigate their environments by evaluating what is advantageous for their survival, using both cognitive reasoning and emotional responses in this process (this theory is further elaborated in this post).

“Biology is the most powerful technology ever created. DNA is software, proteins are hardware, cells are factories.”

Arvind Gupta

Dexter, working at Harvard Medical School, led a study examining the mechanisms of decision-making and learning in non-neural organisms, focusing on single-celled organisms. The research demonstrated that the ciliate Stentor roeseli, or S. roeseli[1], exhibits complex, context-sensitive reactions to different environmental stimuli. Its capacity for rapid assessment and response to stimuli in a structured hierarchy reflects the swift, comprehensive evaluations typical of emotional reactions in more advanced organisms. The hierarchy of behaviors the organism exhibits, including bending, contracting, and detachment, serves as a strategic adaptation for interacting with its environment. This ability to adapt in the absence of a central nervous system highlights the significance of decentralized, emotion-resembling mechanisms in decision-making.

“Contrariwise, if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.”

Lewis Carroll

In 1997, Damasio and his team conducted a study involving individuals with damage to the ventromedial prefrontal cortex (vmPFC). These patients exhibited normal intelligence scores and performed well on logical reasoning tasks, yet they faced significant challenges in real-life decision-making, especially in social or personal contexts. For example, they might spend considerable time analyzing minor decisions such as selecting a restaurant or arranging an appointment. They frequently made suboptimal decisions in both personal and professional contexts, resulting in financial difficulties, strained relationships, and unstable employment. The findings suggest that pure logic, as evidenced by the patients’ intact reasoning abilities, is inadequate for navigating the complex landscape of human experience and values. The patients could analyze situations logically; however, in the absence of emotional guidance, they struggled to make beneficial decisions or behave in socially acceptable ways.
The researchers concluded that emotions play an essential role alongside logic, offering swift, instinctive assessments of circumstances that assist decision-making. This emotional component seems crucial for evaluating the personal and social importance of choices, a task that pure logic finds difficult to emulate.

Current AI Systems as Supertools

Current AI systems, including Large Language Models (LLMs), lack an emotion-based valuation mechanism (although some models do incorporate sentiment analysis and context-sensitive responses). Despite advancements with GPT-4o, described by OpenAI as a move towards more natural human-computer interaction, it still lacks such a valuation mechanism.

To be clear, this is not about sentiment analysis as it has long been practiced in NLP. We are talking about models that take emotions into account when establishing emotional and semantic relationships in the embedding space. For example, in a current high-dimensional embedding space, a given word could be represented as a vector w = (w₁, w₂, …, wₙ), where n is the number of dimensions of the space. These values capture components of the word’s meaning and can be viewed as coordinates in an abstract space. The coordinates are learned and refined during training, and their absolute values matter less than their values relative to other words’ embeddings. They may also differ depending on the context in which the word appears, reflecting its meaning in that context. In current models, “context” means a semantic context, in which a word can have multiple meanings. In a specific context, a model can identify a word that carries significant emotional weight, such as “sad” or “terrible”. However, I believe that while the model may represent the words of such a sentence at different coordinates, it will not achieve a comprehensive understanding of human behavior. Emotional valuation, as discussed here, is not the identification of particular words semantically linked to sentiments, as in Plutchik’s wheel.
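To make the context-dependence of these coordinates concrete, here is a minimal sketch that extracts the contextual embedding of “sad” in two different sentences and compares them. The choice of the Hugging Face transformers library, the bert-base-uncased checkpoint, and the word_vector helper are my own illustrative assumptions, not anything the post depends on:

```python
# Minimal sketch: the same word gets different coordinates in different
# contexts (assumes the Hugging Face `transformers` library and the
# bert-base-uncased checkpoint).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, d)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]  # embedding of the first match

v1 = word_vector("the movie was terribly sad", "sad")
v2 = word_vector("the clown looked sad but kept smiling", "sad")
cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity of 'sad' across contexts: {cos.item():.3f}")
```

The similarity is high but not 1.0: the coordinates shift with context, which is exactly the semantic (not emotional) context-sensitivity discussed above.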

Emotions represent a multifaceted phenomenon that transcends mere classification or verbal connection. A comprehensive examination of the role of emotions in cognitive processes and decision-making requires an in-depth exploration of their essential characteristics and functions.

A theory of emotions

“No behavior of any sort exists without the display of a need.”

V. J. Wukmir

Emotions serve as essential mechanisms for guiding the behavior and decisions of living organisms, rather than being simple responses to external stimuli. Their role is fundamental in assessing the importance of stimuli for an organism’s survival and overall well-being. Emotional valuation is a rapid, holistic process that assesses the relevance, intensity, and potential impact of stimuli on the organism. This evaluation occurs at a pre-cognitive level, influencing subsequent cognitive processes and behaviors. As we have seen, even a single-celled organism like S. roeseli can exhibit complex, hierarchical decision-making behaviors. Its ability to quickly assess and respond to stimuli in a hierarchical manner mirrors the rapid, holistic evaluations characteristic of emotional responses. This aligns with the idea that emotional valuation is a fundamental aspect of intelligence, present even in the simplest forms of life.

In contrast to basic models that classify emotions into distinct categories, Wukmir’s theory suggests that emotional valuation operates on multiple dimensions. The analysis takes into account various elements, including the strength of the stimulus, its significance in relation to the organism’s goals, and its possible effects on the organism’s condition. Emotional valuations exhibit a high degree of subjectivity and are influenced by contextual factors. The valuation of a given stimulus can vary significantly depending on the organism’s present condition, historical experiences, and surrounding environmental influences. The fluid characteristics of emotional valuation facilitate responsive adjustments to evolving situations.

Wukmir’s theory posits that emotions should not be seen as distinct from or in conflict with cognition, but rather as interconnected elements that inform one another. Emotional valuations shape cognitive processes, and conversely, cognitive assessments affect emotional states; this bidirectional relationship is essential for adaptive behavior. The theory also connects emotions to biological processes, proposing that even individual cells have a capacity for basic emotional valuation. This viewpoint is consistent with contemporary studies of cellular decision-making and adaptive behavior in simple organisms (for instance, the work of Dexter referenced above). Finally, emotional valuation is not a singular occurrence; it functions as an ongoing continuum engaging multiple levels of awareness, from subconscious cellular reactions to conscious emotions and cognitive processes.
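Purely as an illustration of these ideas, here is a toy sketch of a multidimensional, context-dependent valuation. The dimensions, state variables, and formulas are hypothetical choices of mine, not part of Wukmir’s theory:

```python
# Toy illustration: the same stimulus receives different valuations
# depending on the organism's internal state (context-dependence).
from dataclasses import dataclass

@dataclass
class Valuation:
    intensity: float  # strength of the stimulus
    relevance: float  # significance relative to the organism's goals
    impact: float     # potential effect on the organism's condition

def valuate(stimulus: dict, state: dict) -> Valuation:
    """Score a stimulus along several dimensions, modulated by state."""
    relevance = stimulus["goal_overlap"] * (1 + state["need_level"])
    impact = stimulus["magnitude"] * (1 - state["resilience"])
    return Valuation(stimulus["magnitude"], relevance, impact)

stimulus = {"magnitude": 0.7, "goal_overlap": 0.5}
print(valuate(stimulus, {"need_level": 0.9, "resilience": 0.2}))  # needy, fragile
print(valuate(stimulus, {"need_level": 0.1, "resilience": 0.8}))  # sated, robust
```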

Risks of Emotionally-Unaware AI

From the everyday...

Consider this rather typical scenario from the real business world:

Prompt: 
["I have a client who has fed up with us. We get the feeling that he is kidding us and using us, but that he has no intention of buying. What do you suggest I do, do I make him the cheapest offer possible and forget about him, or do I continue to insist in a more creative way to see if I can convince him?. Give a simple and short answer."]

GPT‑4o’s answer: 
"Focus on value, not price. Politely ask the client directly if they're serious about moving forward. If they are, offer a solution tailored to their needs, not the cheapest option. If they’re not interested, it's best to move on and prioritize other prospects."


Many of those reading this post have probably found themselves in this situation, either asking the question or having to answer it. GPT‑4o’s answer is the logical one. But if you have ever had to answer this question yourself, do you really believe the logical answer is the most appropriate? Do you think the model has been able to put itself in the shoes of whoever is asking? Is that the answer you would give? We will not settle this discussion here, since there may be many different points of view; it is simply an example to consider.

The exchange above illustrates a complex (but frequent) situation. As mentioned before, the valuation of a given stimulus (here, the answer) can vary significantly depending on the organism’s present condition, historical experiences, and surrounding environmental influences. In other words, we do not answer the question directly: we perform a valuation (cognitive and emotional) and then give an answer. Reading between the lines (in Latin inter legere, the expression at the origin of the word “intelligence”) of the example question above, it appears that the person asking is desperate and frustrated (emotional and historical context) and feels the pressure to increase sales on their shoulders. The salesperson’s frustration and desperation are not just reactions but crucial signals about their current state and needs. They are likely weighing the potential loss of a sale against their self-respect and the company’s resources. In this case, the most logical answer is unlikely to be the best option.

“La valeur des choses est fondée sur leur utilité.” (“The value of things is founded on their utility.”)

Étienne Bonnot de Condillac

Alignment with human values in AI systems goes beyond providing logically sound advice. It requires the ability to recognize and respond appropriately to human emotions, even when they are not explicitly stated. It also implies grasping the broader implications of a situation, including historical, cultural, and personal factors, and recognizing when human well-being should take precedence over short-term goals or efficiency. A good leader’s values would likely favor guidance that flexes with the unique emotional and situational context of each scenario.

...to critical scenarios

The example presented so far shows the value of emotional intelligence in a common business context. Nonetheless, such scenarios carry a lower level of risk than other areas in which AI systems are progressively being applied. In these high-risk areas, the absence of emotional awareness in AI can lead to significantly more severe outcomes.

In healthcare, an AI system can provide insights and recommendations regarding end-of-life care decisions. A strictly rational analysis might suggest ceasing treatment in light of statistical survival rates and considerations of resource distribution. This perspective does not adequately consider the emotional implications for the patient and their family, the cultural beliefs surrounding death and dying, or the complex ethical dilemmas that accompany these decisions.

In criminal justice, AI systems are increasingly used to aid sentencing decisions, with numerous advocates asserting that the legal system is ready for significant transformation through AI. An AI lacking emotional awareness could suggest severe sentences based purely on statistical recidivism rates, neglecting the defendant’s individual circumstances, likelihood of rehabilitation, or the wider societal implications of imprisonment.

The increasing integration of AI in military strategy raises the concern that emotionally-unaware systems could propose tactical decisions which, although strategically valid, result in significant civilian casualties or enduring geopolitical ramifications that a more nuanced analysis might prevent.

Finally, AI systems used in child welfare cases without emotional intelligence may overlook nuanced indicators of abuse or neglect, or fail to account for the emotional bonds among family members. Such oversights could result in unwarranted family separations or in children remaining in perilous environments.

Potential Approaches and Challenges

Incorporating and modeling emotion within AI systems is a complex endeavor, and few studies explore this particular question. It is evident that current models can identify, or more accurately classify, emotions through the presence of words carrying a significant emotional charge. In a previous post I suggested using a learnable emotion embedding function and a multidimensional valuation space: token representations and emotion embeddings are combined to compute valuations, and these valuations are then incorporated into the attention computation. A key problem with this approach is acquiring a sufficiently large set of annotated data to train such a model. Several options exist, including self-supervised learning and the use of synthetic data; however, the complexity of the task remains clear.
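As a rough sketch of that idea (and only a sketch: the module layout, dimensions, and the way valuations bias the attention scores are my own hypothetical choices, not a specification from the earlier post), consider:

```python
# Hypothetical sketch: attention biased by an emotion-based valuation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ValuationAttention(nn.Module):
    def __init__(self, d_model: int, d_emotion: int, d_valuation: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Learnable emotion embedding of each token representation.
        self.emotion = nn.Linear(d_model, d_emotion)
        # Map (token repr + emotion embedding) into a valuation space,
        # then reduce to a scalar valuation per token.
        self.valuation = nn.Sequential(
            nn.Linear(d_model + d_emotion, d_valuation),
            nn.Tanh(),
            nn.Linear(d_valuation, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        e = self.emotion(x)                              # (B, T, d_emotion)
        val = self.valuation(torch.cat([x, e], dim=-1))  # (B, T, 1)
        # Bias the scores so highly-valuated tokens attract more attention.
        scores = scores + val.transpose(-2, -1)          # broadcast over queries
        return F.softmax(scores, dim=-1) @ v

x = torch.randn(2, 8, 64)
print(ValuationAttention(64, 16, 32)(x).shape)  # torch.Size([2, 8, 64])
```

Here each token receives a scalar valuation that shifts how much attention it attracts; a richer design could keep the valuation multidimensional or compute it per attention head.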

However, this is not the fundamental issue at hand. The theory of emotions indicates that emotional valuations exhibit a high degree of subjectivity. This not only introduces a significant complication but also raises a critical ethical dilemma, since such models would be susceptible to manipulation. One potential outcome is the development of a fundamentally harsh model for military purposes, or a self-serving model deployed in customer service chatbots.

Bridging the gap between superintelligent tools and AGI

The incorporation of emotion-like processes in AI systems has the potential to markedly improve their capacity to understand and adjust to complex and changing environments. Current AI tools demonstrate proficiency in particular areas; however, they frequently encounter difficulties in handling nuanced and context-sensitive situations. The incorporation of emotional valuation allows AI systems to extend their knowledge and skills across different contexts, facilitating intuitive understandings of causality. This approach facilitates a deeper comprehension of the complex dynamics of a situation, encompassing subtle social signals, underlying implications, and cultural subtleties.

“Intelligence is what you use when you don’t know what to do: when neither innateness nor learning has prepared you for the particular situation.”

Jean Piaget

AGI necessitates the capacity to make well-informed decisions when faced with unfamiliar and unpredictable situations. Processes that model emotions, which in humans act as fast, comprehensive assessments of complex situations, could provide AI systems with a similar ability, helping them manage uncertain circumstances where purely logical strategies might prove inadequate or overly time-intensive. Improved emotional awareness in AI systems is also expected to enable more natural and intuitive interactions with humans. This enhancement is key for the advancement of AI, enabling it to integrate smoothly into diverse facets of human life and work, and it bolsters AI’s capacity to comprehend and align with human values and ethical considerations, essential components of safe and robust AI systems.

“Creativity is intelligence having fun!”

Albert Einstein

Emotions play an essential role in human creativity. Incorporating emotion-like processes in AI systems could enhance their capabilities in creative problem-solving and innovative thinking, exceeding the limitations of current narrow AI tools. The connection in humans between emotional awareness, self-awareness, and metacognition is significant and calls for further investigation. Developing similar mechanisms in artificial intelligence could lead to systems with heightened self-awareness, able to observe their own cognitive processes and potentially attain some form of machine consciousness.

Open Questions

Further research and discussion are needed on incorporating emotion-like processes into AI systems in order to achieve better alignment. Important areas include:

  1. Developing computational models of emotion for AI

  2. Exploring ethical implications of emotion-aware AI

  3. Investigating how emotional valuation enhances AI decision-making

  4. Studying potential risks of emotionally intelligent AI

Thought-provoking questions for the community:

  1. Can we create true emotion-like processes in AI, or only simulations?

  2. How do we implement and ensure stability of emotion-aware AI?

  3. What are the ethical implications of AI that understands human emotions?

  4. How does emotional awareness in AI impact the alignment problem?

  5. Could emotional AI lead to more robust aligned behavior across contexts?

  6. How would emotionally aware AI change human-AI relationships?

  7. Is emotional intelligence necessary for AGI?

  8. What can implementing emotions in AI teach us about human emotions?

  9. How might emotion-aware AI impact society and global challenges?

  10. What safeguards are needed for emotionally intelligent AI systems?

We encourage the community to engage with these questions and contribute insights to this critical area of AI alignment research.

References

  • Alhagry, S., Fahmy, A. A., & El-Khoribi, R. A. (2017). Emotion recognition based on EEG using LSTM recurrent neural network. International Journal of Advanced Computer Science and Applications, 8(10).

  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275(5304), 1293-1295. doi:10.1126/science.275.5304.1293

  • Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain.
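
  • Dexter, J. P., Prabakaran, S., & Gunawardena, J. (2019). A complex hierarchy of avoidance behaviors in a single-cell eukaryote. Current Biology, 29(24), 4323-4329.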

  • Devillers, L., & Cowie, R. (2023). Ethical considerations on affective computing: an overview. Proceedings of the IEEE.

  • Jennings, H. S. (1902). Studies on reactions to stimuli in unicellular organisms. IX.—On the behavior of fixed infusoria (Stentor and Vorticella), with special reference to the modifiability of protozoan reactions. American Journal of Physiology-Legacy Content, 8(1), 23-60

  • Lee, J., Kim, S., Kim, S., Park, J., & Sohn, K. (2019). Context-aware emotion recognition networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10143-10152).

  • Martínez-Miranda, J., & Aldea, A. (2005). Emotions in human and artificial intelligence. Computers in Human Behavior, 21(2), 323-341.

  • Middya, A. I., Nag, B., & Roy, S. (2022). Deep learning based multimodal emotion recognition using model-level fusion of audio–visual modalities. Knowledge-Based Systems, 244, 108580

  • Paiva, A., Leite, I., Boukricha, H., & Wachsmuth, I. (2017). Empathy in virtual agents and robots: A survey. ACM Transactions on Interactive Intelligent Systems (TiiS), 7(3), 1-40.

  • Tartar, V. (2013). The Biology of Stentor: International Series of Monographs on Pure and Applied Biology: Zoology. Elsevier.

  • Wukmir, V. J. (1960). Psicología de la Orientación Vital. Barcelona: Ed. Miracle.

  • Zheng, Q., Tang, Y., Liu, Y., Liu, W., & Huang, Y. (2022, April). UX research on conversational human-AI interaction: A literature review of the ACM digital library. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-24).

  1. Ciliates form a clade of single-celled eukaryotes characterized by their eponymous cilia, nuclear dimorphism, and sexual conjugation.