I think this is an excellent summary. I would make the following comments:
Confusions arise when people mistakenly read this metasemantic subjectivism into the first-order semantics or meaning of ‘right’.
Yes, but I think Eliezer was mistaken in identifying this kind of confusion as the fundamental source of the objections to his theory (as in the Löb’s theorem discussion). Sophisticated readers of LW (or OB, at the time) are surely capable of distinguishing between logical levels. At least, I am; nevertheless, I didn’t feel that his theory was adequately “non-relativist” to satisfy the kinds of people who worry about “relativism”. What I had in mind, in other words, was your objections (2) and (3).
The answer to those objections, by the way, is that an “adequately objective” metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information. This directly answers (3), anyway; as for (2), “fallibility” is rescued (on the object level) by means of imperfect introspective knowledge: an agent could be mistaken about what its own terminal values are.
Note that your answer to (2) also answers (1): value uncertainty makes it seem as if there is substantive, fundamental normative disagreement even if there isn’t. (Or maybe there is, if you don’t buy that particular element of EY’s theory.)
The answer to those objections, by the way, is that an “adequately objective” metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information.
Eliezer attempted to deal with that problem by defining a certain set of things as “h-right”, that is, morally right from the frame of reference of the human mind. He made it clear that alien entities probably would not care about what is h-right, but that humans do, and that’s good enough.
The answer to those objections, by the way, is that an “adequately objective” metaethics is impossible
That’s not a reason to prefer EY’s theory to an error theory (according to which properly normative properties would have to be irreducibly normative, but no such properties actually exist).
Richard,

Until persuaded otherwise, I agree with you on this point. (These days, I take Richard Joyce to have the clearest defense of error theory, and I just subtract his confusing-to-me defense of fictionalism.) Besides, I think there are better ways of getting something like an ‘objective’ ethical theory (in something like a ‘realist’ sense) while still holding that reasons for action arise only from desires, or from relations between desires and states of affairs. In fact, that’s the kind of theory I defend: desirism. Though I’m not too interested anymore in whether desirism is to be called ‘objective’ or ‘realist’, even though I think a good case can be made for both.