I believe such a sentence is indeed lacking. One reason is that, as far as I can tell, there isn’t really a crisp definition of moral uncertainty in terms of a small set of necessary and sufficient criteria. Instead, it’s basically “Moral uncertainty is uncertainty about moral matters”, which then has to be accompanied by a range of examples and counterexamples of the sort of thing we mean by that.
That’s part of why I’m writing a series of posts on the various aspects of what we mean by moral uncertainty, rather than just putting a quick definition at the start of one post and then moving on to how to make decisions when morally uncertain. (Which is what I originally did for the earlier version of this other post, before receiving a comment there making a similar point to yours here! I think with such fuzzy terms it’s somewhat hard to avoid such issues, though I do appreciate the feedback pushing me to keep trying harder :) )
Another reason such a sentence is lacking is that this post is intended to follow on from my prior one, where I open with a quote listing examples of moral uncertainties, and then write:
I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.
That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.
So this post is meant to follow one in which many examples of moral uncertainty are given, and moral uncertainty is contrasted against various related concepts. Those together provide a better starting point than a “Moral uncertainty is defined as...” sentence can, given how fuzzy the concept of “moral uncertainty” is and how its definition would rely on other terms that do a lot of work (like what “moral matters” are).
But it’s true that many people may read this post without having read that one, and without having a background familiarity with the term. So it may well be good to add near the start even just a sentence like “Moral uncertainty is uncertainty about moral matters”, and perhaps an explicit note that I partly intend the meaning to become increasingly clear through the provision of various examples. I plan to touch up these posts once I’m done with the sequence of them, and I’ve made a note to maybe add something like that then.
It’s also possible changing the title could help with that, but I didn’t manage to think of anything that wasn’t overly long or obscure and that better captured the content. (I did explicitly decide to avoid “What is moral uncertainty?”, as that felt like even more of an oversell—one reasonably sized post can only tackle part of that question, not all of it.)
And if anyone has any particularly good ideas for snappy definitions or fitting titles, I’d be happy to hear them :)
Update: I’m now considering changing the title to “What kind of ‘should’ is involved in moral uncertainty?” It seems to me that’s a bit of a weird title and it’s less immediately apparent what it’d mean, but it might more accurately capture what’s in this post. Open to people’s thoughts on that.
I’ve just changed the title along those lines.
Just to give context for people reading the comments later, the original title was “What do we mean by ‘moral uncertainty’?”, which I now realise poorly captured the contents of the post.
What need is there for a definition of “moral uncertainty”? Empirical uncertainty is uncertainty about empirical matters. Logical uncertainty is uncertainty about logical matters. Moral uncertainty is uncertainty about moral matters. These phrases mean these things in the same way that “red car” means a car that is red; none of them needs a definition.
If one does not believe there are objective moral truths, then “Moral uncertainty is uncertainty about moral matters” might feel problematic. The problem lies not in “uncertainty” but in “moral matters”. But that is an issue you have postponed.
In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is whether it would improve the text for other readers.
I agree to an extent. I do think, in practice, “It’s like empirical uncertainty, but for moral stuff” really is sufficient for many purposes, for most non-philosophers. But, as commenters on a prior post of mine said, there are some issues not explained by that, which are potentially worth unpacking and which some people would like unpacked. For example...
You note the ambiguity in the term “moral matters”, but there’s also ambiguity in the term “uncertainty” (e.g., the risk-uncertainty distinction people sometimes make, or the different types of probabilities that might feed into uncertainties), which will be the subject of my next post. And when we talk about moral uncertainty, we very likely want to know what we “should” do given that uncertainty, so what we mean by “should” there also matters, and, as covered in this post, it is debated in multiple ways. And then, as you say, there’s also the question of what moral uncertainty can mean for antirealists.
And as I covered in an earlier post, there are many other concepts which are somewhat similar to moral uncertainty, so it seems worth pulling those concepts apart (or showing where the lines really are just unclear/arbitrary). E.g., some philosophers seem fairly adamant that moral uncertainty must be treated totally differently from empirical uncertainty (e.g., arguing we basically just have to “Do what’s actually right”, even if we have no idea what that is, and can’t meaningfully take into account our current best guesses as to moral matters). I’d argue (as would people like MacAskill and Tarsney) that realising how hard it is to separate moral and empirical uncertainty helps highlight why that view is flawed.
Do we even need the concept “moral uncertainty”? Would the more complete phrase “uncertainty of moral importance” be better, to distinguish it from “uncertainty of effects of an action”, which is just plain old rational uncertainty?
Not sure I understand what you mean there. The term “moral uncertainty” is (I believe) meant to be analogous to the already established term “empirical uncertainty”, and I think it covers what you mean by “uncertainty of moral importance”, so I’m not sure why we’d come up with another, different-sounding, longer term.
Also, “uncertainty of moral importance” might make it sound like we want to just separately consider how morally important each given act may be. But it could be far more efficient to think that we’re “morally uncertain” about things like the moral status of animals or whether to believe utilitarianism or virtue ethics, and then have our judgement of the “moral importance” of many different actions informed by that more general moral uncertainty. So I think “moral uncertainty” is also clearer/less misleading.
This is again analogous to empirical uncertainty, I believe. We don’t want to just track our uncertainty about the effects of each given action. It’s more natural and efficient to also track our uncertainty about certain states of the world (e.g., how many people are working on AGI and how many are working on AI safety), and have that feed into our uncertainty about the effects of specific actions (e.g. funding a certain AI safety project).
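To make that structure concrete, here’s a minimal sketch in Python; the states, credences, and effect sizes are all made up for illustration:

```python
# Toy model: credences over states of the world feed into uncertainty
# about the effect of a specific action. All numbers are hypothetical.

# Credence in each state (how much AI safety work is already happening).
p_state = {"little_safety_work": 0.7, "lots_of_safety_work": 0.3}

# Hypothetical value of funding one more safety project, given each state.
effect_given_state = {"little_safety_work": 10.0, "lots_of_safety_work": 2.0}

# The action-level uncertainty is derived from the state-level uncertainty.
expected_effect = sum(p * effect_given_state[s] for s, p in p_state.items())
print(expected_effect)  # 0.7 * 10 + 0.3 * 2 = 7.6
```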
I also don’t believe I’ve come across the term “rational uncertainty” before. It seems to me that we’d have empirical and moral uncertainty (as well as perhaps some other types of uncertainty, like meta-ethical uncertainty), and then put that together with a decision theory (which we may also have some uncertainty about), and get out what we rationally should do. See my two prior posts. I guess being uncertain about rationality might be like being uncertain about what decision theory to use to translate preferences and probability distributions into actions, but then we should call that decision-theoretic uncertainty. Or perhaps you mean “cases in which it is rational to be uncertain”, in which case it seems that would be a subset of all other types of uncertainty.
Let me know if I’m misunderstanding you, though.
30 seconds of googling gave me this link, which might not be anything exceptional but at least it offers a couple of relevant definitions:
and
and later a more focused question
At least they define what they are working on...
Those questions all help point to the concept at hand, but they’re actually all about decision-making under moral uncertainty, rather than moral uncertainty itself. In the same way, empirical uncertainty is uncertainty about things like whether a stock will increase in price tomorrow, which can then be blended with other things (like decision theory and your preferences) to answer questions like “What should I do, given that I don’t know whether this stock will increase in price tomorrow?”
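As a minimal sketch of that blending, assuming expected utility theory as the decision theory and entirely made-up credences and payoffs:

```python
# Toy decision under empirical uncertainty: buy or hold a stock, given a
# credence that its price rises tomorrow. All numbers are hypothetical.

p_rise = 0.6  # credence that the stock rises
payoffs = {   # utility of each action under each outcome
    "buy":  {"rise": 100.0, "fall": -80.0},
    "hold": {"rise": 0.0, "fall": 0.0},
}

def expected_utility(action: str) -> float:
    return p_rise * payoffs[action]["rise"] + (1 - p_rise) * payoffs[action]["fall"]

print(max(payoffs, key=expected_utility))  # "buy": 0.6 * 100 + 0.4 * (-80) = 28 > 0
```

Approaches like MacAskill’s “maximising expected choiceworthiness” apply roughly the same shape of calculation to moral uncertainty, with credences over moral theories in place of credences over outcomes.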
I did start with a post on decision-making under moral uncertainty, but then got the feedback (which I’ve now realised was very much on point) that it would be worth stepping back quite a bit to discuss what moral uncertainty itself actually is.
Additionally, I’d say that none of those quoted questions at all disentangle moral from empirical uncertainty. For example, I could be 100% certain in some moral theory where infringing people’s rights is bad but everything else is fine, but still not know what I should do, because I don’t know which of a set of actions is least likely to end up infringing rights (an empirical uncertainty). So it’d be necessary to modify those questions to something like “What should I do, given that I don’t know what’s morally right, despite knowing the relevant empirical facts?” …which now involves two other terms worth defining/distinguishing, and so here we’re getting into the complexities I mentioned :) (And back into the sort of stuff that my post prior to this one unpacked.)
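For concreteness, here’s that example with made-up numbers (the actions and probabilities are purely hypothetical):

```python
# Full moral certainty (only rights infringements are bad) combined with
# empirical uncertainty about which action infringes rights.
# All numbers are hypothetical.

# Credence that each action ends up infringing someone's rights.
p_infringe = {"action_A": 0.30, "action_B": 0.05}

# Under the fully certain theory, the choice reduces to an empirical
# question: which action is least likely to infringe rights?
print(min(p_infringe, key=p_infringe.get))  # "action_B"
```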
But all that said, I think it probably is a good idea to open this post with something to point at the concept at hand, for those readers who didn’t read the prior post and are relatively unfamiliar with the term “moral uncertainty”. So I’ve added two short sentences at the start to accomplish that objective.
(For anyone who’s for some reason interested, the original version of this post is here.)