“Handled as conflicting” seems to mean either “all-out war” or, at best, “temporarily putting off all-out war until we’ve used up all the atoms on our side of the universe”.
If the two sides shared your desire to be symmetrically peaceful with other sides whose only point of similarity with them was the desire to be symmetrically peaceful with other sides whose… then Universalism isn’t false. That’s its minimal case.
And if it does fail, it seems counterproductive for you to point that out to us, because while we’re happily and deludedly trying to apply it, we’re not genociding each other all over your lawn.
Sorry, when I said “False Universalism”, I meant things like “one group wants to have kings, and another wants parliamentary democracy”, or “one group wants chocolate, and the other wants vanilla”. Common moral algorithms seem to simply assume that the majority wins: if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying: values may not be universal, but we can come to game-theoretically sound agreements (even if they’re only Timelessly sound, like Rawls’ Theory of Justice) on how to handle the disagreements productively, thus wasting fewer resources on fighting each other when we could be spending them on Fun.
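To make the resource-wasting point concrete, here’s a toy calculation for the flavor example (all numbers are invented; this is just a sketch, not a claim about real utilities):

```python
# Toy comparison (invented numbers): total utility under "majority wins"
# versus a negotiated, constructionist-style agreement.

# 60 people prefer chocolate, 40 prefer vanilla; each person gets utility 1
# when served their preferred flavor and 0 otherwise.
groups = {"chocolate": 60, "vanilla": 40}

# Majority rule: everyone gets the majority's flavor.
winner = max(groups, key=groups.get)
majority_utility = groups[winner]          # 60: the minority's preferences are wasted

# Negotiated agreement: each group is served its own flavor.
negotiated_utility = sum(groups.values())  # 100: nobody's preference is wasted

print(f"majority rule: {majority_utility}, negotiated agreement: {negotiated_utility}")
```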
Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups, which can then apply realist universalisms internally.
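Here’s a minimal sketch of what that clustering step might look like (the value axes, similarity threshold, and greedy method are all my own invented stand-ins, not a worked-out proposal):

```python
# Sketch: greedily cluster agents whose value vectors are similar enough,
# then let each cluster act on its internally shared ("universal") values.
from math import dist

AGREEMENT_RADIUS = 0.5  # hypothetical cutoff for "close enough to share a universalism"

def cluster_by_values(agents):
    """Greedy clustering: each agent joins the first cluster whose centroid
    is within AGREEMENT_RADIUS of its value vector, else founds a new one."""
    clusters = []  # each cluster: list of (name, values)
    for name, values in agents:
        for members in clusters:
            centroid = [sum(axis) / len(members)
                        for axis in zip(*(vals for _, vals in members))]
            if dist(values, centroid) <= AGREEMENT_RADIUS:
                members.append((name, values))
                break
        else:
            clusters.append([(name, values)])
    return clusters

# Invented value vectors over two axes: (monarchy..democracy, chocolate..vanilla)
agents = [
    ("alice", (0.9, 0.1)), ("bob", (0.8, 0.2)),   # one cluster: a shared internal universalism
    ("carol", (0.1, 0.9)), ("dave", (0.2, 0.8)),  # another cluster with different values
]
for members in cluster_by_values(agents):
    print([name for name, _ in members])
# Disagreements *between* clusters would then be handled by negotiated,
# game-theoretically sound agreements rather than by majority vote or conflict.
```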