I’m super skeptical of the moral uncertainty framing. Maybe I should go into more detail elsewhere...
Basically, it puts too much focus on “what theory is Right” and not enough focus on “what are my moral beliefs right now?”
First, this can get you into trouble with essentialism. I’m worried that this whole framing plays straight into a further fact argument: something like “Ah yes, you’ve told me what your moral beliefs are right now, but there’s also some further fact about whether the theory you endorse is actually Right. That is, after all, the thing that you must be morally uncertain about. QED, morality is a non-physical essence.”
Second, in practice I think it enables some bad habits. I’m thinking of the sort of person who is looking for “the right moral theory” and is convinced that this theory will be neat and elegant, taking only a few sentences to write down. Moral uncertainty can then become a crutch: such people spread their credence across different moral theories, but still only the theories that are neat and elegant. This is the sort of person who will be tempted to put, say, 30% on total utilitarianism, 20% on negative utilitarianism, and so on, while putting 0.0% on their own revealed preferences about population ethics.
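(For concreteness: the usual decision rule in this literature is “maximize expected choiceworthiness”, so the person above would be scoring each action $a$ as

$$EC(a) = \sum_i C(T_i)\, CW_i(a) = 0.3 \cdot CW_{\text{total}}(a) + 0.2 \cdot CW_{\text{negative}}(a) + \dots + 0.0 \cdot CW_{\text{revealed}}(a),$$

where $C(T_i)$ is their credence in theory $T_i$ and $CW_i(a)$ is how choiceworthy $T_i$ rates $a$. The credences are just my toy numbers from above; the $0.0$ weight on revealed preferences is exactly the bad habit I mean.)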
Anyhow, thank you for the interesting and not really all that monstrously long summary :)
Hmm. Moral uncertainty definitely doesn’t assume moral realism. You could just have some credence in the possibility that there are no moral facts.
If instead by ‘essentialism’ you mean moral cognitivism (the view that moral beliefs can take truth values), then you’re right that moral uncertainty makes most sense under cognitivism. But non-cognitivist versions (where your moral beliefs are just expressions of preference, approval, or desire) also seem workable. I’m not sure what any of this has to do with ‘non-physical essences’, though. I think I know what you mean by that, but maybe you could clarify?
Interesting point about moral uncertainty favouring elegant theories. I’m not sure it’s necessarily true, however; again, I could just have some credence in the possibility that a messy version of folk morality is true.
Yeah, I agree that it’s compatible with non-realism / non-cognitivism. But the framework doesn’t make it convenient for you :) For example, your first thought for how to include non-realism was probably to add a possibility in which no moral theory is Right. But this doesn’t get you out of trouble, because that possibility contributes nothing to your decisions: it’s still equivalent to saying that realist moral theories are all that we should base decisions on.
Instead, compatibility has to come from interpreting the whole moral uncertainty framework in a different light. This brings us back to “non-physical essences”: things about which there are facts independent of physical reality. The most obvious interpretation of moral uncertainty is as uncertainty over the state of this Rightness essence, even given full facts about the physical world. Re-interpreting moral uncertainty in a naturalistic way (in terms of, e.g., a human’s model of how their feelings might change in the future) seems interesting, but also seems to require swimming against the current of the framing.
Got it, thanks. I think the phrase ‘non-physical essences’ makes moral realism sound way spookier than necessary. I don’t think it needs to involve ‘essences’, in the same way that one decision could be objectively more rational than another without there being any rationality ‘essences’. But what you’re saying sounds basically right. Makes me wonder: it’s super unclear what to do if you’re also just uncertain between cognitivism and non-cognitivism. Would you need some extra layer of uncertainty and a corresponding decision procedure? I’m really not sure.
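Naively you’d end up with a two-level expectation, something like

$$V(a) = C(\text{cognitivism}) \cdot EC(a) + C(\text{non-cognitivism}) \cdot V_{\text{nc}}(a),$$

where $EC$ is the expected choiceworthiness from your example. But that’s just my own sketch of the regress, not a proposal from the literature, and it’s unclear what $V_{\text{nc}}(a)$, the ‘value’ of $a$ given non-cognitivism, could even mean, or why an expectation is the right way to combine the levels.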
Yeah, that’s a good point. With rationality, though, we can usually agree that half of the heavy lifting is done inside the definition of the word “rational.” If I say something controversial, like that preferring chocolate ice cream to vanilla is rational, then you might suppose that we’re using the word “rational” in different ways, not that we disagree about some unique and impersonal standard of rationality.
Not to say that you can’t do the same thing with morality. But when I mention a morality-essence, I mean to imply the other treatment, where there’s something outside of ourselves and our definitions that does most of the heavy lifting, so that when we disagree it’s probably not that we define morality differently, it’s that we disagree about the state of this external factor.
Understood. I’m not so sure there is such a big difference between uses of ‘rational’ and ‘moral’ in terms of implying the existence of norms ‘outside of ourselves’. In any case, it sounds to me now like you’re saying that everyday moral language assumes cognitivism + realism. Maybe so, but I don’t see what this has to do with moral uncertainty specifically.