IE does not make use of those words. But this is intuitively implausible.
...
...
My initial reaction is that 1, while initially implausible, gains plausibility from the rejection of 2 and 3. So your rebuttal of Eliezer’s metaethics needs to take 1 more seriously to be complete.
Ok, let’s take 1 more seriously. In order for Eliezer’s metaethics to qualify as metaethics, he has to at least roughly specify what IE is. But how do you specify an idealized version of yourself that reasons about morality without using words like “moral”, “right” and “should”? If Eliezer takes Base Eliezer and just deletes the parts of his mind that are related to these words, he’s almost certainly not going to like the results. What else could he do?
You don’t use those words; you refer to your brain as a whole, which already happens to contain those things, and specify extrapolation operations, such as the passage of time, that it might go through. (Note that no one has nailed down exactly what the ideal extrapolation procedure would be, although there is some intuition about what is and isn’t allowed. There is an implicit claim that different extrapolation procedures will tend to converge on similar results, although this is unlikely to hold for every moral question, or for quantitative moral questions at high precision.)
I meant:
how do you specify an (idealized version of yourself that reasons about morality without using words like “moral”, “right” and “should”)?
But I think you interpreted me as:
how do you specify an (idealized version of yourself that reasons about morality) without using words like “moral”, “right” and “should”?
Indeed I did misinterpret it that way. To answer the other interpretation of that question: I don’t think there’s any problem with your idealized self using those words. Sure, it’s self-referential, but self-referential in a way that makes stating that X is moral equivalent to returning, and asking whether Y is moral equivalent to recursing on Y. This is no different from an ordinary person thinking about a decision they’re going to make; the statements “I decide X” and “I decide not-X” are both tautologically true, but this is not a contradiction, because these are performatives, not declaratives.
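The returning/recursing analogy above can be sketched in code. This is only an illustrative toy under stated assumptions (the function name, the dictionary of base judgments, and the reduction step are all hypothetical, not any actual extrapolation proposal): the point is that the judging procedure can ask itself whether a sub-question Y is moral without paradox, because doing so is just a self-call.

```python
def is_moral(act, base_judgments):
    """Judge `act`: answer directly when a base judgment exists
    ("stating that X is moral" plays the role of returning),
    otherwise reduce the question and ask ourselves again
    ("asking whether Y is moral" plays the role of recursing on Y)."""
    if not act:
        # Guard: the toy reduction must bottom out somewhere.
        raise ValueError("question did not bottom out in a base judgment")
    if act in base_judgments:
        return base_judgments[act]  # returning: a direct verdict
    # Hypothetical reduction step: strip the question down to a
    # sub-question Y. A real proposal would specify this carefully.
    sub_question = act[:-1]
    return is_moral(sub_question, base_judgments)  # recursing on Y

# Usage: judgments bottom out in the reasoner's own base dispositions.
verdict = is_moral("abc", {"a": True})  # -> True, via two self-calls
```

The self-reference here is benign in exactly the way the comment describes: `is_moral` mentions itself, but each mention is an operation the procedure performs, not an external fact it must already know.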