Emotions represent swift, comprehensive assessments that enable an organism to promptly evaluate the importance of stimuli in relation to its survival and overall well-being.
It seems to me that the claim that emotions evaluate and categorize stimuli is descriptively correct, but there is a large gap between that description and any prescription. Emotions surely act as shortcut heuristics for dealing with an excess of detail, but those shortcuts can produce as many mistakes as successes. Moreover, AI is already capable of producing statements or decisions despite the immense quantity of data at its disposal.
Said another way, if “emotion” can be reduced to “decision-making before sufficient detail is understood,” then it seems we’re already there.
Perhaps the broader point is that there need not be any “actual” emotion for us to believe there is—there doesn’t have to be a “ghost in the machine”.
An AI also does not need emotional capacity for the successful operation of ethics, since ethical rules can be (arbitrarily) programmed in. Or there could, in theory, be a 50/50 dice roll between utilitarianism and deontology to give the impression of “compassionate moral reasoning”: all the AI would need to do is give an ex post facto explanation for its decision, along with adding something like “I regret the losses that were had along the way.”
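A minimal toy sketch of what I mean, with hypothetical names and a literal coin flip standing in for any real moral reasoning:

```python
import random

# Toy sketch: pick a moral framework at random, make an arbitrary choice,
# then generate a post-hoc rationalization. All names are hypothetical.

FRAMEWORKS = {
    "utilitarianism": "the option that maximized expected well-being",
    "deontology": "the option most consistent with the relevant duties",
}

def decide_and_explain(options):
    framework = random.choice(list(FRAMEWORKS))   # the 50/50 dice roll
    choice = random.choice(options)                # the "decision" itself is arbitrary
    explanation = (
        f"I chose '{choice}' because it was {FRAMEWORKS[framework]}. "
        "I regret the losses that were had along the way."
    )
    return choice, explanation

print(decide_and_explain(["divert the trolley", "do nothing"])[1])
```

Nothing in this sketch feels anything, yet its output reads like compassionate moral reasoning.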
All of this to say, I don’t believe emotions are necessary to have a fully functioning artificial intelligence—such “emotional” capacities would merely affect our interactions/feelings towards it.
Lastly, your example of the prompt you gave to 4o is illustrative but imperfect. It is not clear from the prompt that what is being requested is a more emotionally intelligent result; a more specific prompt, or a more expansive memory, could tease out the type of answer you want. That said, the need to write a more specific prompt at all is a real difficulty, since in day-to-day conversation with others there seems to be less need to be so specific. Then again, that difference may be an illusion: a day-to-day conversation carries much more data, because it is face to face, with someone you know, and so on. The extra specificity required of a prompt compensates for the absence of all that data.
An interesting thought experiment: imagine we had no facial expressions and our tones of voice didn’t change. Language would then have to do most of the work, and we would presumably have a much more complex vocabulary for attitudes, emotions, expectations, and so on, all things our ordinary gestures and expressions already take care of.