Richard Hollerith: “It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion.”
“Renormalized” humanism, perhaps; the outcome of which need not be anthropocentric in any way. You are a human being, and you have come up with some non-anthropocentric value system for yourself. This more or less demonstrates that you can start with a human utility function and still produce such an outcome. But there is no point in trying to completely ditch human-specific preferences before doing anything else; if you did that, you wouldn’t even be able to reject paperclip maximization.