Wrong compared to what? Compared to no sympathies at all? If that’s what you mean, doesn’t that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn’t that a rather counterproductive belief (assuming that you’d prefer that the world became a better place rather than not)?
AI with human sympathies would at least be based on something that has been tested and found to work throughout the ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those traits that now seem to be “flaws”, those “flaws” may later turn out to have been vital for the whole to work, in ways we cannot yet see. It may become possible, in the future, to fully and successfully replace them with things that are not flaws, but that may require more knowledge about the human being than we currently have, and we may not yet know enough to be justified in even trying.
Suppose I have a nervous disease that makes me kick uncontrollably with my right leg every once in a while, sometimes hurting people a bit. What’s the best solution to that problem? To cut off my right leg? Not if my right leg is clearly more useful than harmful on average. But what if I’m also so dumb that I cannot see that my leg is actually more useful than harmful; what if I can mainly see the harm it does? That’s what we would be like if we tried to build a (superhuman) AI by equipping it with only the clearly “good” human traits and not those that now appear to be (only) “flaws”, prematurely assuming we know enough about how these “flaws” affect the overall survival chances of the being/species. If it is possible to safely get rid of the “flaws” of humans, future superhuman AI will know how to do that far more safely than we do, so we should not be too eager to do it already. There is very much to lose and very little to gain by impatiently trying to get everything perfect at once (which is impossible anyway). It’s enough, and therefore safer and better, to make the first superhuman AI “merely more of everything that it is to be human”.
In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is “merely more of everything that it is to be human” would be a worse thing than a human.
Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer if human brains were not such that they can very easily (even by pure mistake) be made to suffer. The main reason why humans suffer today is how the human brain is hardwired, and the fact that there is not yet enough knowledge of how to rewire it so that it becomes unable to suffer (and without severe side-effects).
Suppose we build an AI that is “merely more of everything that it is to be human”. Suppose this AI then takes total control over all humans, “simply because it can, and because it has a human psyche and is therefore power-greedy”. What would you do after that, if you were that AI? You would continue to develop, just as humans always have. Every step of your development from un-augmented human to super-human AI would be recorded and stored in your memory, so you could go through your own personal history and see what needs to be fixed in you to get rid of your serious flaws. And once you had gained enough knowledge about yourself, you would fix those flaws, since you’d still regard them as flaws (since you’d still be “merely more of everything that it is to be human” than you are now). You might never get rid of all of your flaws, for nobody can know everything about himself, but that’s not necessary for a predominantly happy future for humanity.
Humans strive to get happier, not specifically to get happier by making others suffer. The fact that many humans are, so far, easily made to suffer as a consequence of (other) humans’ striving for happiness is always primarily due to lack of knowledge. This is true even of purely evil, sadistic acts; those too are primarily due to lack of knowledge. Sadism and evil are simply not the most efficient ways to be happy; they take up an unnecessary amount of computing power. Super-human AI will realize this, just as most humans today realize that eating far too many calories every day does not maximize your happiness in the long run, even if it seems to in the short run.
Most humans certainly don’t strive to make others suffer for suffering’s own sake. Behaviours that make others suffer are primarily intended to achieve something else: happiness (or something like it) for oneself. Humans strive to get happier, not less happy. This, coupled with the fact that humans also keep developing better technology and psychology that can help them achieve ever more of their goal (to get happier), must inevitably make humans happier and happier in the long run (although temporary setbacks can be expected every once in a while). This is why it should be enough to just make AIs “more and more of everything that it is to be human”.