The closest thing that we have in real life to the ‘rational agent’ concept in game theory and artificial intelligence is the psychopath. Psychopaths act entirely out of self-interest, without any regard for others in their utility function. Taking this idea further, it’s easy to see why a rational superintelligence would become a UFAI—it is a psychopath. One thing that normal humans have that psychopaths lack is empathy for others. We have some degree of ‘empathizing’ in our utility functions—if we make someone feel bad, we feel bad as well. Our empathy does not have laser-guided precision, and as such is directed not just at human beings but at animals (and sometimes even inanimate objects).
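As a minimal sketch of what having ‘some degree of empathizing in our utility functions’ could mean (the empathy weight and the payoff numbers below are illustrative assumptions, not drawn from any real model), an agent’s utility can be written as its own payoff plus a weighted sum of everyone else’s, with the psychopath as the weight-zero special case:

```python
# Toy model: utility = own payoff + empathy_weight * (sum of everyone else's payoffs).
# A 'psychopath' here is simply the empathy_weight = 0 special case.

def utility(own_payoff, others_payoffs, empathy_weight):
    return own_payoff + empathy_weight * sum(others_payoffs)

# Hypothetical choice between exploiting someone and cooperating with them;
# the payoff numbers are made up purely for illustration.
options = {
    "exploit":   {"own": 10, "others": [-8]},
    "cooperate": {"own": 6,  "others": [6]},
}

for empathy_weight in (0.0, 0.5):  # 0.0 ~ the psychopath, 0.5 ~ a typical empathizer
    best = max(options, key=lambda name: utility(options[name]["own"],
                                                 options[name]["others"],
                                                 empathy_weight))
    print(empathy_weight, "->", best)  # 0.0 -> exploit, 0.5 -> cooperate
```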
Thus it seems that the best way to create FAI wouldn’t be Coherent Extrapolated Volition but Coherent Extrapolated Emotion. This is probably a stupid question, but why does the concept of ‘artificial empathy’ seem to get so little attention?
Being further along the psychopathy spectrum isn’t all sunshine and daisies and rationality.

http://verbosestoic.wordpress.com/fearlessly-amoral-psychopaths-autistics-and-learning-with-emotion/
Instrumental learning is also interesting. “Instrumental learning involves learning to commit specific behavioral responses in order to gain reward or avoid punishment.” [ibid, pg 51]. Psychopaths have issues with specific forms of this, particularly passive avoidance and response reversal. In passive avoidance, the subject must learn to avoid responding to things that will earn them punishments, while response reversal is when the subject must stop responding to a stimulus that was once rewarding but now punishes. The impairment of the first has been repeatedly demonstrated, while Blair uses the example of a card game developed by Joe Newman to demonstrate the second. In that game, participants must decide whether to play a card or not. At first, playing is always rewarded, but as the game goes on the probability of a play being rewarded decreases, and eventually playing becomes primarily punishing. While most non-psychopaths learn to stop playing once punishment becomes too likely, psychopaths do not, to the point of losing all of their points.
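A rough simulation can make that failure of response reversal concrete. The sketch below is only a guess at the setup; the decay schedule, the ten-outcome window, and the stopping rule are assumptions for illustration, not Newman’s actual protocol. A player who tracks recent outcomes quits once continuing starts to hurt; a player who never reverses keeps responding and gives its winnings back.

```python
import random

# Rough simulation of the card task described above. The decay schedule,
# the ten-outcome window, and the stopping rule are illustrative guesses,
# not Newman's actual experimental protocol.

def play(reverses, rounds=100, seed=0):
    rng = random.Random(seed)
    points, recent = 0, []
    for t in range(rounds):
        p_reward = max(0.0, 0.9 - 0.01 * t)  # reward grows steadily less likely
        if reverses and len(recent) == 10 and sum(recent) < 0:
            break  # response reversal: stop once recent play is net punishing
        outcome = 1 if rng.random() < p_reward else -1
        points += outcome
        recent = (recent + [outcome])[-10:]  # sliding window of the last 10 outcomes
    return points

print("reverses and stops:", play(reverses=True))   # typically keeps most of its winnings
print("keeps on playing:  ", play(reverses=False))  # typically gives its winnings back
```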
Basically, I fear you are committing a sort of “straw-spock argument” about rationality, where you assume in the absence of evidence that someone with more muted empathy must be pursuing their goals more rationally.
I am not; see my reply to niceguyanon.

EDIT: I see the point you are trying to make, though: that psychopaths lack not only empathy but also a good deal of rationality. I agree with this, and in fact it has been shown that in terms of logic problems and so on, psychopaths are just as rational as normal people, i.e. not at all.
The point I’m trying to make is that the presence of empathy can certainly do a lot to destroy psychopathy. This is supported by the study you linked.
The closest thing that we have in real life to the ‘rational agent’ concept in game theory and artificial intelligence is the psychopath. Taking this idea further, it’s easy to see why a rational superintelligence would become a UFAI—it is a psychopath.
This doesn’t quite seem right, and here is why: my utility function takes other people’s utility functions into account, so acting rationally and maximizing my utility function leaves room for empathy for others. You only get psychopathy if the utility function of the rational agent is itself psychopathic, and most people’s utility functions are not.
This doesn’t quite seem right, and here is why: my utility function takes other people’s utility functions into account, so acting rationally and maximizing my utility function leaves room for empathy for others.
Yes, this is what I said. Usually in game theory setups, though, empathy is not included in the utility function. That’s what I meant; sorry if it was unclear. You’re right that an agent can be rational and empathic at the same time.

Then the game theory experiments leaving no room for empathy are straw Vulcans.
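To make concrete what leaving empathy out of (or putting it into) the utility function does in a standard game theory setup, here is a small sketch; the Prisoner’s Dilemma payoffs are the usual textbook-style numbers and the 0.8 empathy weight is an arbitrary illustrative choice. With raw payoffs, mutual defection is the only pure equilibrium; once each player’s effective payoff includes a large enough share of the other’s, mutual cooperation becomes the equilibrium instead.

```python
from itertools import product

# A standard Prisoner's Dilemma payoff matrix (row player, column player).
# The numbers are the usual textbook-style values, chosen only for illustration.
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def with_empathy(payoffs, w):
    # Each player's effective payoff = own payoff + w * the other player's payoff.
    return {profile: (a + w * b, b + w * a) for profile, (a, b) in payoffs.items()}

def pure_nash_equilibria(payoffs):
    eqs = []
    for r, c in product("CD", repeat=2):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in "CD")
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in "CD")
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(PD))                     # [('D', 'D')]: defection is the only equilibrium
print(pure_nash_equilibria(with_empathy(PD, 0.8)))  # [('C', 'C')]: cooperation becomes the equilibrium
```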
It’s not just lack of empathy that makes psychopaths act as they do, but also what looks to me like a particularly strong drive for domination. They simply get more emotional rewards out of exerting power over other people, in ways that range from winning socially acceptable status contests to torturing and butchering other humans. Without this drive for domination, they wouldn’t be half as dangerous.
Just pointing out that it’s the presence rather than the absence of a feature that causes one to be actively evil, not just selfish and calculating. Merely self-interested rational agents would stop at callously pursuing whatever their, er, utility function tells them to. They wouldn’t go that extra mile to satisfy a purely emotional need. To exhibit psychopathic behavior—to play mind games with people, to break laws and to engage in power contests even when you don’t have anything rational to gain, just for the thrill of it—well, you need to be able to feel the thrill of it. An extra feature.
As for programming emotion into an AI, I wouldn’t know about that. I have the vague intuition that emotions are a bit of a kludge-y solution to morality; our emotional system is mildly good some of the time, but not great and not all of the time, at getting morality right. A different emotional system, designed from scratch and checked for coherence, might perform better, though I don’t have the qualifications needed to express an opinion one way or the other.
So you’re saying that their goals include an explicit urge for dominance that is absent (or weakened) in ‘normal’ human beings? I suppose it sounds plausible, but I’d like to see some references.

Most people have it to a small extent (it’s a feature present all across the animal kingdom, after all), but in psychopaths it is exacerbated.

Sorry, no references. Just a speculation that seems strongly consistent with what I know so far about them.
The closest thing that we have in real life to the ‘rational agent’ concept in game theory and artificial intelligence is the psychopath.
Maybe corporations, nation-states, and other institutional actors come even closer? It sure would be nice to be able to add some “artificial empathy” to Nestlé et al.