I think @sunwillrise got it basically correct, so I’m going to link to the comment and expand on its implications below:
https://www.lesswrong.com/posts/SqgRtCwueovvwxpDQ/valence-series-2-valence-and-normativity#d54v5ThrDtt8Lmaer
I’d put somewhat less weight on the innateness claim than the comment does, but it is still very valuable here.
I’d especially signal-boost the section below, which argues for being more specific. I basically agree with that recommendation, but I also think this is why we need to be able to decouple moral/valence assignments from positive (factual) claims: debates over whether things are good or bad should not be mixed up with debates over matters of fact:
https://www.lesswrong.com/posts/SqgRtCwueovvwxpDQ/valence-series-2-valence-and-normativity#2_4_5_Should_we_be__anti___normative_heuristics_in_general_
+9 for deconfusing people on fundamental matters related to morality.
I think sections 2.2, 2.4.5, 2.7.1, 2.7.2 and 2.7.3 should be part of the LW canon on how to deal with moral/value questions, including why CEV doesn’t really work as an AI alignment strategy.