Yep. Subjective statements about X can be phrased in objectivese. But that doesn’t make them objective statements about X.
I don’t know what you mean, if anything, by “subjective” and “objective” here, and what they are for.
By other standards do you mean other people’s moral standards, or non-moral (eg aesthetic standards)?
Okay… I think I’ll have to be more concrete. I’m going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only “moral” standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.
A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the “values” of the alien.
Moral praise and blame are not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increase the output of the moral utility function, and that if we failed to apply them, we would therefore fail to do the most moral thing.
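The VNM framing above can be sketched in code. This is a toy illustration only: the names (`WorldState`, `morality_standard`, `alien_standard`) and the numbers are my own assumptions, standing in for "the state of the universe" and the standards being discussed.

```python
from typing import Callable, Dict

# A "state of the universe", toy version: a few named quantities.
WorldState = Dict[str, float]

# A standard is a utility function: state of the universe -> number.
Standard = Callable[[WorldState], float]

def morality_standard(state: WorldState) -> float:
    # Toy placeholder for the single morality-standard.
    return state.get("wellbeing", 0.0)

def alien_standard(state: WorldState) -> float:
    # A different standard some alien mind might be built to maximise.
    return state.get("paperclips", 0.0)

def choose_action(actions: Dict[str, WorldState], standard: Standard) -> str:
    """Return the action whose outcome the given standard rates highest."""
    return max(actions, key=lambda a: standard(actions[a]))

# Two available actions and the states they would produce.
outcomes = {
    "help":  {"wellbeing": 3.0, "paperclips": 0.0},
    "hoard": {"wellbeing": 1.0, "paperclips": 5.0},
}

print(choose_action(outcomes, morality_standard))  # help
print(choose_action(outcomes, alien_standard))     # hoard
```

The same machinery runs in both cases; only the pointer to a standard differs, which is why the alien's behaviour is maximisation of its "values" rather than a mistake about morality.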
By “subjective” I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that—whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doesn’t count for anything, because the shape of the world is not something about which Mary has the last word.
And there is one such standard in the universe, not one per agent?
If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn’t, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she’s using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.
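The indexed-predicate reading can be made concrete with a small sketch. Everything here is an illustrative assumption: there is no bare `nice(x)`, only `nice_for(person, x)`, and Mary's possible confusion is reading her own indexed predicate as the universal one.

```python
# Taste facts, indexed to individuals (hypothetical data).
nice_for = {
    ("mary", "vanilla"): True,
    ("john", "vanilla"): False,
}

def nice_indexed(person: str, flavour: str) -> bool:
    """The sensible predicate: nice_<person>, which does exist."""
    return nice_for.get((person, flavour), False)

def nice_universal(flavour: str) -> bool:
    """The confused reading: 'nice' as nice_x for all x."""
    people = {p for (p, _) in nice_for}
    return all(nice_indexed(p, flavour) for p in people)

print(nice_indexed("mary", "vanilla"))  # True  -- nice_mary holds, Mary is correct
print(nice_universal("vanilla"))        # False -- the universal claim fails
```

Mary's shorthand is true under the first reading and false under the second, which is the whole difference between being right about her own index and wrong about everyone's.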
Okay then. An agent who thinks the morality-standard says something other than it does is wrong, since statements about the judgements of the morality-standard are tautologically true.
There is precisely one morality-standard.
Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard—namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.
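The "pointer" picture above can be sketched directly: each agent holds a reference to some standard, and reference identity plays the role of "pointing to the morality-standard". The agent names and standards below are hypothetical.

```python
from typing import Callable, Dict, List

Standard = Callable[[Dict[str, float]], float]

def morality_standard(state: Dict[str, float]) -> float:
    # The one morality-standard (toy placeholder).
    return state.get("wellbeing", 0.0)

def paperclip_standard(state: Dict[str, float]) -> float:
    # Some other standard, with no special name.
    return state.get("paperclips", 0.0)

class Agent:
    """An agent is, for these purposes, a name plus a pointer to a standard."""
    def __init__(self, name: str, standard: Standard):
        self.name = name
        self.standard = standard  # the pointer

agents: List[Agent] = [
    Agent("human_1", morality_standard),
    Agent("human_2", morality_standard),
    Agent("alien_1", paperclip_standard),
]

# Agents whose pointer targets the morality-standard itself.
moral_agents = [a.name for a in agents if a.standard is morality_standard]
print(moral_agents)  # ['human_1', 'human_2']
```

There is one standard object and many pointers to it; the claim "most pointers near here target the morality-standard" is a fact about the pointers, not about there being many standards.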