I think that “human pleasure” is such a complicated idea that trying to program it in formally is asking for disaster. That’s one of the things that you should definitely let the AI figure out for itself.
[...]
Eliezer is aware of this problem, but hopes to avoid disaster by being especially smart and careful. I think that approach has a poor expected outcome.
Huh, I thought he wanted to use CEV?
You are right. I think PhilGoetz must be confused. In any case, EY has certainly never suggested programming an AI to maximise human pleasure.