The root problem here is that the category “moral” lumps together
(a) intuitions about what’s intrinsically valuable,
(b) intuitions about what the correct coordination protocols are, and
(c) intuitions about what’s healthy for a human.
Kantian morality, like the property intuitions you’ve identified, is about (b) (“don’t lie” doesn’t fail gracefully in a mixed world, but it makes sense and is coherent as a proposed operating protocol), while Rawlsian morality and the sort of utilitarian calculus people are trying to derive from weird thought experiments about trolleys are about (a) (questions about things like distribution presuppose that we already have decent operating protocols to enable a shared deliberative mechanism, rather than a state of constant epistemic war).
I mean, yes, but I’m not sure how much this impacts Katja’s analysis, which is mostly about moral intuitions that are in conflict with moral reasoning. That the category of things we consider when talking about morals, ethics, and axiology is not clean-cut (other than perhaps along the lines of being about “things we care about/value”) doesn’t really change the dissonance between intuition and reasoning in particular instances.
I think that the sort of division I’m proposing offers a way to decompose apparently incoherent “moral intuitions” into better-defined, more coherent subcategories. I think that if someone practiced making this sort of distinction, they’d find this type of dissonance substantially reduced.
In other words, I’m interpreting the dissonance as evidence that we’re missing an important distinction, and then proposing one. In particular, I think this is a good alternative to Katja’s proposed write-off of intuitions that can be explained away by e.g. property rights.
That’s flattering to Rawls, but is it actually what he meant?
Or did he just assume that you don’t need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?