The problem with “Just decide for yourself” as an approach to moral decisions in novel contexts (like what to do with the whole galaxy) is that, while it may help you choose actions rather than agonizing over what’s right, it’s not much help in building an AI.
It’s not much help with morality, either, since it doesn’t tell you anything at all about how to balance your values against those of others. In the absence of God, there is still a space for “we” to solve problems, not just “I”.
Sure. Where it helps is with personal moral indeterminacy: when I want to make a decision but am aware that, strictly speaking, my values are undefined, I should still do what seems right. A more direct approach to the problem would be Eliezer’s point about type 1 and type 2 calculators.
That seems to imply that you should set aside doing what is right in favour of selfish preferences whenever your personal preferences are clear. Surely that is the wrong way round: if there is an objective morality, then you morally should follow it unless it fails to specify an action.