I agree to an extent. I do think that, in practice, “It’s like empirical uncertainty, but for moral stuff” really is sufficient for many purposes, for most non-philosophers. But, as commenters on a prior post of mine said, there are some issues that framing doesn’t capture, which are potentially worth unpacking and which some people would like unpacked. For example...
You note the ambiguity with the term “moral matters”, but there’s also the ambiguity in the term “uncertainty” (e.g., the risk-uncertainty distinction people sometimes make, or different types of probabilities that might feed into uncertainties), which will be the subject of my next post. And when we talk about moral uncertainty, we very likely want to know what we “should” do given uncertainty, so what we mean by “should” there is also important and relevant, and, as covered in this post, is debated in multiple ways. And then, as you say, there’s also the question of what moral uncertainty can mean for antirealists.
And as I covered in an earlier post, there are many other concepts which are somewhat similar to moral uncertainty, so it seems worth pulling those concepts apart (or showing where the lines really are just unclear/arbitrary). E.g., some philosophers seem fairly adamant that moral uncertainty must be treated totally differently from empirical uncertainty (e.g., arguing that we basically just have to “Do what’s actually right”, even if we have no idea what that is, and can’t meaningfully take into account our current best guesses on moral matters). I’d argue (as would people like MacAskill and Tarsney) that realising how hard it is to separate moral and empirical uncertainty helps highlight why that view is flawed.