By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that; whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn't count for anything, because the shape of the world is not something about which Mary has the last word.
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn't, so she is wrong. In the case of thinking vanilla is nice, there is no sensible agent-independent niceness predicate, so we take her to be using shorthand for nice_mary, which does exist, and she is correct. She might, however, get confused and think that nice_mary being true means that nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
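To make the indexed-predicate point concrete, here is a toy sketch; the agents and taste judgements are invented for illustration, and nice_for stands in for the nice_mary-style predicates above:

```python
# Toy model of indexed niceness predicates (all names and judgements are illustrative).
tastes = {
    ("mary", "vanilla"): True,   # vanilla is nice-for-Mary
    ("bob", "vanilla"): False,   # vanilla is not nice-for-Bob
}

def nice_for(agent, flavour):
    """The indexed predicate nice_agent(flavour)."""
    return tastes.get((agent, flavour), False)

# "Vanilla is nice", read as shorthand for nice_mary(vanilla): true.
assert nice_for("mary", "vanilla")

# The confused reading: from nice_mary(vanilla), infer nice_x(vanilla) for all x.
universally_nice = all(nice_for(a, "vanilla") for a in {"mary", "bob"})
assert not universally_nice  # the inference fails; that reading is wrong
```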
Okay then. An agent who thinks the morality-standard says something other than what it does say is wrong, since statements about the judgements of the morality-standard are tautologically true.
And there is one such standard in the universe, not one per agent?
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
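To make the pointer picture concrete, here is a toy sketch that assumes a standard can be modelled as a utility function over outcomes; the agents, standards, and values named here are invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Callable

# A standard, modelled (as an assumption for this sketch) as a utility function over outcomes.
Standard = Callable[[str], float]

def morality_standard(outcome: str) -> float:
    # Stand-in for the one morality-standard; its real content is not specified here.
    return {"flourishing": 1.0, "suffering": -1.0}.get(outcome, 0.0)

def paperclip_standard(outcome: str) -> float:
    # A different standard that some distant agent might point to instead.
    return 1.0 if outcome == "paperclips" else 0.0

@dataclass
class Agent:
    name: str
    standard: Standard  # the pointer: the utility function the agent (ideally) maximises

agents = [
    Agent("mary", morality_standard),
    Agent("bob", morality_standard),      # most nearby pointers target the same standard
    Agent("clippy", paperclip_standard),  # a few do not
]

# Each agent carries its own pointer, but two agents share the morality-standard
# exactly when their pointers pick out the same standard.
assert agents[0].standard is agents[1].standard
assert agents[0].standard is not agents[2].standard
```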