Eliezer [in response to me]: This just amounts to defining should as an abstract computation, and then excluding all minds that calculate a different rule-of-action as “choosing based on something other than morality”. In what sense is the morality objective, besides the several senses I’ve already defined, if it doesn’t persuade a paperclip maximizer?
I think my position is this:
If there really were such a thing as an objective morality, only a subset of possible minds would be able to discover it or be persuaded of it.
Presumably, for any objective fact, there are possible minds who could never be convinced of that fact.