Gary’s preference is not itself a justification; rather, it recognizes moral arguments, and not because it’s Gary’s preference, but for their own specific reasons. Saying “Gary’s preference states that X is Gary_right” is roughly the same as saying “Gary should_Gary X”.
(This should_T terminology was discouraged by Eliezer in the sequences, perhaps because it invites incorrect moral-relativistic thinking, as if any decision problem could be adopted as one’s own by any other agent, and because it makes you think of ways of referring to morality while treating it as a black box, instead of looking inside morality. And you have to look inside even to refer to it, but you won’t notice that until you stop referring and try looking.)
By saying “Gary should_Gary X”, do you mean that “Gary would X if Gary were fully informed and had reached a state of reflective equilibrium with regard to terminal values, moral arguments, and what Gary considers to be a moral argument”?
To a first approximation, but not quite, since it might be impossible to know what is right (for any computation, not to speak of a mere human), and only possible to make right guesses.
This makes should-statements “subjectively objective”
Every well-defined question has, in a sense, a “subjectively objective” answer: there is “subjectivity” in the way the question has to be interpreted by an agent that takes on the task of answering it, and “objectivity” in the rules of reasoning established by that interpretation, which make some possible answers incorrect with respect to that abstract standard.
Or, perhaps you are saying that one cannot give a concise definition of “should,”
I don’t quite see how this is opposed to the other points of your comment. If you actually start unpacking the notion, you’ll find that it’s a very long list. Alternatively, you might try referring to that list by mentioning it, but that’s a tricky task for various reasons, including the need to use morality to locate (and precisely describe the location of) the list. Perhaps we can refer to morality concisely, but it’s not clear how.
I had no idea what Eliezer was talking about originally until I started thinking in terms of should_T. Based on that, and on the general level of confusion among people trying to understand his metaethics, I concluded that EY was wrong: more people would understand if he talked in terms of should_T. Based on some of the back and forth here, I’m revising that opinion somewhat. Apparently this stuff is just confusing, and I may simply be atypical in finding it easier to understand initially in those terms.