The question Eliezer raises is the first problem any religious person has to face once he abandons the god thesis: why should I be good now? The answer, I believe, is that you cannot act contrary to your genetic nature. Our brains are wired (or have modules, in Pinker's terms) for various forms of altruism, probably for group survival reasons. I therefore can't easily commit acts against my genetic nature, even if intellectually I can see they are in my best interests. (As Eliezer has already recognised, this is why AIs or uploaded personalities are so dangerous: they will be able to rewrite the brain code that prevents widespread selfishness. I say dangerous, of course, because the first uploaded person or AI will most likely not be me, so they will be a threat to me.)
More simply, the reason I don't steal from people is not that stealing is wrong, but that my genetic programming (perhaps with an element of social conditioning) is such that I don't want to steal, or rather that I have an active, non-intellectual aversion to stealing.
Why do I try to convince you of this point of view if I am intellectually convinced that I should be selfish? I agree with Robin: it is because I am genetically programmed to do so, probably related to status seeking. Also, I genuinely would like to hear arguments against this point of view, in case I am wrong.
Eliezer, if genetics is the source of our ethical actions, it is unlikely we can ever develop a consistent ethical theory. If you accept this, does it not present a big problem for your attempt to create an ethical AI? Is it possible that your rejection of this approach to ethics, and your attempt to prove a standalone moral system, is subconsciously driven by the impact this would have on your work?
Nick
My response is: evolution! Let's say a genuinely (whatever that means) altruistic entity exists. He is then uploaded. He then observes that not all entities are fully altruistic; in other words, they will want to take resources from others. In any contest over resources this puts the altruistic entity at a disadvantage (he is spending resources helping others that he could use to defend himself). With potentially mega-intelligent entities, any weakness is serious. He realises that very quickly he will be eliminated if he doesn't fix this weakness. He either fixes the weakness (becomes selfish) or he accepts his elimination. Note that uploaded entities are likely to be very paranoid: after all, when one is eliminated, a potentially immortal life is eliminated, so they should have very low discount rates. You might be a threat to me in a million years, so if I get the chance I should eliminate you now.
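To put rough numbers on the resource point: the sketch below is a toy illustration only, with a growth rate and "altruism tax" I have simply made up, but it shows how even a small ongoing diversion of resources compounds into a decisive disadvantage over enough rounds.

```python
# Toy illustration (all numbers invented): an altruist diverts a small
# fixed fraction of its resources to helping others each round, while a
# selfish agent keeps everything. Both grow, but the ratio collapses.

GROWTH = 1.05        # assumed per-round resource growth for both agents
ALTRUISM_TAX = 0.02  # assumed fraction the altruist gives away each round

altruist, selfish = 1.0, 1.0
for round_ in range(1, 501):
    altruist *= GROWTH * (1 - ALTRUISM_TAX)
    selfish *= GROWTH
    if round_ in (100, 500):
        # ratio of the altruist's resources to the selfish agent's
        print(f"round {round_}: altruist holds {altruist / selfish:.1%} "
              f"of what the selfish agent holds")
```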
If your answer is that the altruistic entities will be able to use cooperation to defend themselves against the selfish ones, you must realise there is nothing to stop a genuinely selfish entity from pretending to be altruistic. And the altruistic entities will know this.
I don't think most people realise that the reason we can function as a society is that we have hardwired cooperation genes in us, and that we know it. We are not altruistic through choice. Allow us to make the decision on whether to be altruistic, and the game theory becomes very different.
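To spell out that last point, here is a minimal prisoner's-dilemma sketch (the payoff numbers are the standard textbook ones, chosen by me for illustration, not anything from this discussion). When cooperation is hardwired, both sides get the good outcome; the moment each agent is genuinely free to choose, defection is the best response to everything and both end up worse off.

```python
# Minimal sketch: one-shot prisoner's dilemma with textbook payoffs.
# (my move, your move) -> my payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    # What a purely selfish agent plays once the choice is genuinely open.
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

# Hardwired agents: cooperating is not up for decision, so both get 3.
print("hardwired cooperators:", PAYOFF[("C", "C")], "each")

# Free choosers: defecting is the best response to either move, so both
# defect and end up with 1 each, a much worse outcome for everyone.
for their_move in "CD":
    print("best response to", their_move, "is", best_response(their_move))
print("free choosers:", PAYOFF[("D", "D")], "each")
```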