In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is “merely more of everything that it is to be human” would be a worse thing than a human.
Sure, humans with power often use that power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer if human brains were not such that they can very easily (even by pure mistake) be made to suffer. The main reason why humans suffer today is how the human brain is hardwired, and the fact that there is not yet enough knowledge of how to rewire it so that it becomes unable to suffer (and with no severe side-effects).
Suppose we build an AI that is “merely more of everything that it is to be human”. Suppose this AI then takes total control over all humans, “simply because it can, and because it has a human psyche and therefore is power-greedy”. What would you do after that, if you were that AI? You would continue to develop, just as humans always have. Every step of your development from un-augmented human to super-human AI would be recorded and stored in your memory, so you could go through your own personal history and see what needs to be fixed in you to get rid of your serious flaws. And once you had achieved enough knowledge about yourself to do it, you would fix those flaws, since you would still regard them as flaws (being, still, “merely more of everything that it is to be human” than you are now). You might never get rid of all of your flaws, for nobody can know everything about himself, but that is not necessary for a predominantly happy future for humanity.
Humans strive to get happier, rather than specifically to get happier by making others suffer. The fact that many humans are, so far, easily made to suffer as a consequence of (other) humans’ striving for happiness is always primarily due to lack of knowledge. This is true even of purely evil, sadistic acts; those, too, are primarily due to lack of knowledge. Sadism and evil are simply not the most efficient ways to be happy; they consume unnecessarily much computing power. Super-human AI will realize this—just as most humans today realize that eating far too many calories every day does not maximize your happiness in the long run, even if it seems to in the short run.
Most humans certainly don’t strive to make others suffer for suffering’s own sake. Behaviours that make others suffer are primarily intended to achieve something else: happiness (or something like it) for oneself. Humans strive to get happier, rather than less happy. This, coupled with the fact that humans also develop ever better technology and psychology that can help them achieve ever more of their goal (to get happier), must inevitably make humans happier and happier in the long run (although temporary setbacks can be expected every once in a while). This is why it should be enough to just make AIs “more and more of everything that it is to be human”.