part of his tendency to gloss over ethical and philosophical underpinnings.
All right, it wasn’t really fair of me to say this. I do think that Eliezer is not as careful in such matters as he is in most others.
Nick:
- Explain how desiring to save humans does not conflict with envisioning a world with no humans. Do not say that these non-humans will be humanity extrapolated, since they must be subject to CEV. Remember that everything more intelligent than a present-day human must be controlled by CEV. If this is not so, explain the processes that gradually increase the amount of intelligence allowable to a free entity. Then explain why these processes cannot be used in place of CEV.
- Mike’s answer “RPOP slaves” is based on saying that all of these AIs are going to be things not worthy of ethical consideration. That is throwing the possibility that humans will become AIs right out the window.
- Eliezer’s “beyond the adversarial attitude”, besides being a bit new-agey, boils down to pretending that CEV is just a variant on the golden rule, and that we’re just trying to give our AIs the same moral guidance we should give ourselves. It is not compatible with his longer exposition on CEV, which makes it clear that CEV places bounds on what a friendly AI can do, and in fact seems to require that an AI be a rather useless referee-slave-god, who can observe, but not participate in, most of the human competition that makes the world go round. It also suggests that Eliezer’s program will eventually require forcing everyone, extrapolated humans included, to be bound by CEV. (“We had to assimilate the village to save it, sir.”)
- Regarding the sysop thing:
You are saying that we can be allowed to become superintelligent under a sysop, while simultaneously saying that we can’t be allowed to become superintelligent without a sysop (because then we would be unfriendly AIs). While this may be correct, accepting it should lead you to ask how this transition takes place: how the level of superintelligence you are allowed is computed as a function of the sysop’s own intelligence, whether you are allowed to be a sysop to those below you, and so on, until you develop a concept of an ecosystem of AIs, with system dynamics that can be managed in more sophisticated, efficient, and moral ways than merely having a sysop Big Brother.