Amoral Approaches to Morality
Consider three cases in which someone asks you about morality: a clever child, your guru (or Socrates, if you’re more comfortable with that tradition), or an about-to-FOOM AI of indeterminate friendliness. In each case you want your thoughts to be as clear as possible: the other party is clever enough to point out flaws (or powerful enough that your flaws might be deadly), and in no case can you assume that their prior or posterior morality will be much like your own. (As Thomas Sowell puts it, children are barbarians who need to be civilized before it is too late; your guru will seem willing to lead you anywhere; and the AI probably doesn’t think the way you do.)
I suggest that all three can be approached in the same way: by attempting to construct an amoral approach to morality. At first glance, this offers a significant benefit: circular reasoning is headed off at the pass, because you must explain morality (as best you can) to someone who does not already understand or feel it.
Interested in what comes next?
My main concern is that there is already a rather extensive Metaethics sequence, and this seems very similar to The Moral Void and The Meaning of Right. The benefit of this post, if there is one, lies in a different approach to the issue (I think I can give a useful sketch of it in one post) and probably a different conclusion. At the moment, I don’t buy Eliezer’s approach to the Is-Ought gap (Right is a 1-place function… why?), and I think redefining the question may make for somewhat better answers.
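(For readers who haven’t hit that part of the sequence: here is a minimal, purely illustrative sketch of the 1-place/2-place distinction in Python. The names and the toy scoring scheme are mine, invented for exposition; they are not Eliezer’s formalism or anyone’s actual proposal.)

```python
# Hypothetical sketch of the 1-place vs. 2-place "right" distinction.
# All names and the scoring scheme here are illustrative only.

class Values:
    """An agent's evaluative standard: assigns scores to actions."""
    def __init__(self, scores):
        self.scores = scores  # e.g. {"lie": -1, "help": +2}

    def score(self, action):
        return self.scores.get(action, 0)

# 2-place: rightness depends on whose standard you evaluate against.
def right(values, action):
    return values.score(action) > 0

# 1-place: curry in one fixed standard, so every evaluator gets the
# same answer. The sequence treats "right" this way; my question is
# why that particular fixing of the standard is licensed.
human_standard = Values({"lie": -1, "help": +2})

def right_fixed(action):
    return right(human_standard, action)

# A paperclipper and a human disagree under the 2-place version...
clippy = Values({"make_paperclips": +10, "help": -1})
print(right(human_standard, "help"))  # True
print(right(clippy, "help"))          # False
# ...but the 1-place version simply returns the human answer:
print(right_fixed("help"))            # True
```

The disagreement between the two agents is visible at 2 places and invisible at 1; whether baking in the human standard is a discovery or a stipulation is exactly the point I want to press on.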
(The inspirations for this post, if you’d rather I tackle them directly instead, are the criticisms of utilitarianism obliquely raised in a huge tree in the Luminosity discussion thread (the two interesting dimensions there are questioning assumptions and talking about scope errors; I suspect scope errors is the more profitable) and the discussion around what shokwave calls the Really Scary Idea.)