Do we even need the concept “moral uncertainty”? Would the more complete phrase “uncertainty of moral importance” be better, to distinguish it from “uncertainty about the effects of an action”, which is just plain old rational uncertainty?
I’m not sure I understand what you mean there. The term “moral uncertainty” is (I believe) meant to be analogous to the already established term “empirical uncertainty”, and I think it covers what you mean by “uncertainty of moral importance”, so I’m not sure why we’d come up with another, different-sounding, longer term.
Also, “uncertainty of moral importance” might make it sound like we want to separately consider how morally important each given act may be. But it could be far more efficient to say we’re “morally uncertain” about things like the moral status of animals, or whether to accept utilitarianism or virtue ethics, and then have our judgement of the “moral importance” of many different actions informed by that more general moral uncertainty. So I think “moral uncertainty” is also clearer and less misleading.
This is again analogous to empirical uncertainty, I believe. We don’t want to just track our uncertainty about the effects of each given action; it’s more natural and efficient to also track our uncertainty about certain states of the world (e.g., how many people are working on AGI and how many are working on AI safety), and have that feed into our uncertainty about the effects of specific actions (e.g., funding a certain AI safety project).
I also don’t believe I’ve come across the term “rational uncertainty” before. It seems to me that we’d have empirical and moral uncertainty (as well as perhaps some other types, like metaethical uncertainty), combine those with a decision theory (which we may also be uncertain about), and get out what we rationally should do. See my two prior posts. I suppose being uncertain about rationality might be like being uncertain about which decision theory to use to translate preferences and probability distributions into actions, but then we should call that decision-theoretic uncertainty. Or perhaps you mean “cases in which it is rational to be uncertain”, in which case that would seem to be a subset of all the other types of uncertainty.
Let me know if I’m misunderstanding you, though.