Asking “Would an AI experience emotions?” is akin to asking “Would a robot have toenails?”
There is little functional reason for either of them to have those, but they would if someone designed them that way.
Edit: the background for this comment—I’m frustrated by the way AI is represented in (non-rationalist) fiction.
What sort of AIs have emotions? How can I tell whether an AI has emotions?
Given how emotions are essential to decision-making, I’d ask what sort of AI doesn’t have emotions.
I’d say that a chess-playing program does not have emotions, and a norn does.
I think you are plain wrong.
There is a lot of thought in AI development about mimicking human neural decision-making processes, and it’s quite possible that the first human-level AGI will be similar in structure to human decision making. Emotions are a core part of how humans make decisions.
I should probably make clear that most of my knowledge of AI comes from LW posts and that I do not work with it professionally; this discussion is, on my part, motivated by curiosity and a desire to learn.
Emotions are a core part of how humans make decisions.
Agreed.
Your assessment is probably more accurate than mine.
My original line of thinking was that while AIs might use quick-and-imprecise thinking shortcuts triggered by pattern-matching (which is roughly how I see emotions), human emotions are too inconveniently packaged to be of much use in AI design. (While being necessary, they also misfire a lot; coping with emotions is an important skill to learn; in some situations emotions do more harm than good; all in all this doesn’t seem like good mind design.) So I was wondering whether we would even recognize whatever an AI uses for its thinking as emotions.
My assessment now is that even if an AI uses different thinking shortcuts than humans do, those shortcuts might still misfire. For example, I can imagine a pattern activation triggering more patterns, which in turn trigger more and more patterns, resulting in a cascade effect not unlike emotional over-stimulation/breakdown in humans.
So I think it’s possible that we might see AI having what we would describe as emotions (perhaps somewhat uncanny emotions, but emotions all the same).
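To make that cascade failure mode a bit more concrete, here is a toy sketch (all numbers and names are made up; no claim about any real AI architecture) of patterns that can trigger one another. With an average of a few outgoing triggers per pattern, activating just two of them tends to set off most of the network:

```python
# Toy sketch (not a real architecture): patterns that can trigger other
# patterns, showing how a couple of activations can cascade out of control.
import random

random.seed(0)

NUM_PATTERNS = 100
TRIGGER_PROB = 0.03  # chance that one pattern can trigger another (made up)

# Randomly wire up which patterns can trigger which others.
triggers = {
    p: [q for q in range(NUM_PATTERNS)
        if q != p and random.random() < TRIGGER_PROB]
    for p in range(NUM_PATTERNS)
}

def cascade(initially_active):
    """Spread activation until no new patterns fire; return every active pattern."""
    active = set(initially_active)
    frontier = list(initially_active)
    while frontier:
        pattern = frontier.pop()
        for downstream in triggers[pattern]:
            if downstream not in active:
                active.add(downstream)
                frontier.append(downstream)
    return active

# Two patterns fire in response to a stimulus; the cascade usually engulfs
# a large fraction of the network, the toy analogue of over-stimulation.
print(len(cascade({0, 1})), "of", NUM_PATTERNS, "patterns active after the cascade")
```

In this toy setting the fix would be some kind of damping or activation threshold; my point is only that any pattern-matching shortcut system seems to admit this kind of runaway behaviour.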
P.S. For the sake of completeness: my mental model also includes biological organisms needing emotions in order to create motivation (rather than just drawing conclusions); for example, fear creates the motivation to escape danger.
An AI should already have a supergoal, so it does not need “motivation”. However, it would need to see how its current context connects to its supergoal and create/activate subgoals that apply to the current situation, and here once again thinking shortcuts might be useful, perhaps not too unlike human emotions.
Example: the AI sees a fast-moving object that it predicts will intersect its current location, and a thinking shortcut activates a dodging strategy. This is a subgoal of the goal of surviving, which in turn is a subgoal of the AI’s supergoal (whatever that is).
Having a thinking shortcut (this one we might call “reflex” rather than “emotion”) results in faster thinking. Slow thinking might be inefficient to the point of being fatal: “Hm… that object seems to be moving mighty fast in my direction… if it hits me it might damage/destroy me. Would that be a good thing? No, I guess not; I need to be functional in order to achieve my supergoal. So I should probably dodg.. ”
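A minimal sketch of what I mean, assuming a hypothetical agent that checks a small table of reflex rules before doing any slow goal-tree deliberation (the predicates, thresholds and action names are all invented for illustration):

```python
# Illustrative sketch only: a fast "reflex" table consulted before any slow
# goal-tree deliberation. All predicates, thresholds and actions are invented.
from dataclasses import dataclass

@dataclass
class Percept:
    object_speed: float    # speed toward the agent (units are arbitrary here)
    time_to_impact: float  # seconds until the predicted intersection

# Fast path: pattern-matched shortcuts, checked before deliberation.
REFLEXES = [
    (lambda p: p.object_speed > 10 and p.time_to_impact < 1.0, "dodge"),
]

def slow_deliberation(percept: Percept) -> str:
    # Stand-in for walking the goal tree: supergoal -> survive -> avoid damage.
    # In the monologue above, by the time this finishes it may be too late.
    if percept.time_to_impact < 5.0:
        return "plan_evasive_route"
    return "continue_current_task"

def choose_action(percept: Percept) -> str:
    for condition, action in REFLEXES:
        if condition(percept):
            return action  # the shortcut fires; no goal tree is consulted
    return slow_deliberation(percept)

print(choose_action(Percept(object_speed=25.0, time_to_impact=0.4)))  # dodge
print(choose_action(Percept(object_speed=2.0, time_to_impact=8.0)))   # continue_current_task
```

The shortcut answers before the goal tree is ever consulted; whether to call that a “reflex”, an “emotion”, or something else is exactly the naming question above.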
An AI should already have a supergoal, so it does not need “motivation”.
We know relatively little about what it takes to create an AGI. Saying that an AGI should have feature X or feature Y to be a functioning AGI is drawing too many conclusions from the data we have.
On the other hand, we know that the architecture on which humans run produces “intelligence”, so that is at least one possible architecture that could be implemented in a computer.
Bootstrapping AGI from Whole Brain Emulations is one of the ideas under discussion even on LessWrong.
Define “emotion”.
I find it highly unlikely robots would have anything corresponding to any given human emotion, but if you just look at the general area in thingspace that emotions are in, and you’re perfectly okay with the idea of finding a new one, then it would be perfectly reasonable for robots to have emotions. For one thing, general negative and positive emotions would be pretty important for learning.
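As a toy illustration of that last point, here is a trivial learner steered by nothing more than a scalar “good/bad” signal (the actions, reward probabilities and learning rate are all made up; this is not meant as a real learning algorithm):

```python
# Toy learner driven only by a scalar "good/bad" signal; actions, reward
# probabilities and learning rate are all made up for illustration.
import random

random.seed(1)

values = {"explore": 0.0, "retreat": 0.0}  # learned value of each behaviour
LEARNING_RATE = 0.1

def feedback(action: str) -> float:
    """The environment hands back a crude positive or negative signal."""
    return 1.0 if action == "explore" and random.random() < 0.7 else -1.0

for _ in range(200):
    # Mostly pick the currently best-looking action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    signal = feedback(action)  # the "negative or positive emotion" stand-in
    values[action] += LEARNING_RATE * (signal - values[action])

print(values)  # "explore" should end up valued higher than "retreat"
```

The scalar signal is not attached to any particular human feeling, but it is the thing the learning bends around, which is all I mean by general negative and positive emotions here.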
I have never thought about this, so this is a serious question. Why do you think evolution resulted in beings with emotions and what makes you confident enough that emotions are unnecessary for practical agents that you would end up being frustrated about the depiction of emotional AIs created by emotional beings in SF stories?
From Wikipedia:
Emotion is often the driving force behind motivation, positive or negative. An alternative definition of emotion is a “positive or negative experience that is associated with a particular pattern of physiological activity.”
...cognition is an important aspect of emotion, particularly the interpretation of events.
Let’s say the AI in your story becomes aware of an imminent and unexpected threat and allocates most resources to dealing with it. This sounds like fear. The rest is semantics. Or how exactly would you tell that the AI is not in fear? I think we’ll quickly come up against the hard problem of consciousness here and whether consciousness is an important feature for agents to possess. And I don’t think one can be confident enough about this issue in order to become frustrated about a science fiction author using emotional terminology to describe the AIs in their story (a world in which AIs have “emotions” is not too absurd).
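For what it’s worth, the “allocates most resources” part is easy to sketch; the following is purely illustrative, assuming a hypothetical agent with a fixed compute budget and an invented handle_threat task. From the outside, the before/after split is the sort of thing we might be tempted to call fear:

```python
# Purely illustrative: "fear" as a reallocation of a fixed compute budget
# toward an imminent threat. Task names and the 0.9 share are invented.
def allocate_resources(tasks, threat_detected, budget=1.0):
    """Return a {task: share_of_budget} mapping."""
    if threat_detected:
        # Nearly all capacity goes to the threat; routine tasks are starved.
        threat_share = 0.9 * budget
        leftover = (budget - threat_share) / len(tasks)
        shares = {task: leftover for task in tasks}
        shares["handle_threat"] = threat_share
        return shares
    return {task: budget / len(tasks) for task in tasks}

print(allocate_resources(["science", "maintenance"], threat_detected=False))
print(allocate_resources(["science", "maintenance"], threat_detected=True))
```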