Two things come to mind after reading this post:
Your point probably depends on the thinker’s take on moral realism. In a world where some things are inherently more moral, there is a way to choose between competing intuitions.
I guess moral uncertainty could be a way to deal with the problem you point to. Perhaps by applying it to intuitions, to get a kind of criterion that mixes several of them.
I really need to get around to writing more anti-moral-uncertainty posts :P
What moral uncertainty functionally amounts to here is aggregating different people’s preferences via a linear combination. And this is fine! But it’s not totally unobjectionable—some humans may in fact object to it (not just to the particular weights, which are of course subjective). So moral uncertainty isn’t a solution to meta-moral disagreement, it’s a framework you can use only after you’ve already resolved it to your own satisfaction and decided that you want to aggregate linearly.
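To spell the aggregation out (a sketch, and only under the linear assumption being discussed): if person $i$'s preferences are represented by a utility function $U_i$ and given a subjective weight $w_i$, the combined view ranks options $x$ by $V(x) = \sum_i w_i \, U_i(x)$. The meta-moral objection is to this functional form itself, not merely to how the $w_i$ are chosen.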
I think that personal choices about morality are unaffected by the fact that significantly different cultures exist. Perhaps they call for a soupçon more humility, but your moral intuitions remain axiomatic for you.
Rather, I think the adjustment needed in some cases is to give greater weight to the idea that your moral intuitions are significantly shaped by the culture you found yourself in, and that the scope of possibilities is wide.
Perhaps this has little practical impact because, though your axioms might be more arbitrary than supposed, you have little choice but to use them. But there will exist people shaped by very different cultures, who formed different rules, and it’s not clear that there’s necessarily any ground for debate; the desire for universal morality might be hopeless.
(Or perhaps communications technology will cause Earth to tend towards a singular culture, giving grounds for a morality universal to all, at least until aliens or disconnected space colonies.)