Though I’d also emphasise that that’s most clearly true for, as you say, resolving the uncertainties. I currently think that recognising what type of uncertainty one is dealing with may not matter, or may matter less or less often, for making decisions given uncertainty (ignoring the possibility of deciding to gather more info or otherwise work towards resolving the uncertainties).
This is for the reasons I discuss in this comment. In particular (and to obnoxiously quote myself):
Analogy to empirical uncertainty: There are a huge number of different reasons I might be empirically uncertain—e.g., I might not have enough data on a known issue, I might have bad data on the issue, I might have the wrong model of a situation, I might not be aware of a relevant concept, I might have all the right data and model but limited processing/computation ability/effort. And this is certainly relevant to the matter of how to resolve uncertainty. But as far as I’m aware, expected value reasoning/expected utility theory is seen as the “rational” response in any case of empirical uncertainty. (Possibly excluding edge cases like Pascal’s wagers, which in any case seem to be issues of size of probability rather than of type of uncertainty.) It seems that, likewise, the “right” approach to making decisions under moral uncertainty may apply regardless of the type/source of that uncertainty (especially because MEC was developed by conscious analogy to approaches for handling empirical uncertainty).
(This isn’t disagreeing with you at all, as you only mentioned attempts to resolve rather than act under uncertainty, but I just thought it was worth noting.)
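To make the analogy from the quote concrete, here’s a minimal sketch (with entirely made-up credences and values) of the point that the aggregation step is the same probability-weighted average whether the “possibilities” are states of the world or moral theories; nothing in the calculation depends on why one is uncertain:

```python
# Minimal sketch (hypothetical numbers): the same probability-weighted-average
# procedure handles empirical uncertainty (credences over world-states) and
# moral uncertainty (credences over theories), without needing to know *why*
# we're uncertain.

def expected_value(credences, values):
    """Probability-weighted average of values across possibilities."""
    return sum(p * v for p, v in zip(credences, values))

# Empirical uncertainty: credences over two states of the world, and the
# value of some option A in each state.
p_states = [0.7, 0.3]
value_of_A = [10.0, -5.0]
print(expected_value(p_states, value_of_A))   # 5.5

# Moral uncertainty (MEC-style): credences over two moral theories, and the
# choiceworthiness of option A according to each theory.
p_theories = [0.6, 0.4]
choiceworthiness_of_A = [10.0, -5.0]
print(expected_value(p_theories, choiceworthiness_of_A))   # 4.0

# The aggregation step is identical; only the interpretation of the
# probabilities and the values differs.
```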
That said, it is possible there are cases in which even procedures for decision-making under uncertainty would still require knowing what kind of uncertainty one is facing. Both Tarsney and MacAskill argue that, for decision-making under moral uncertainty, different types of moral theories may need to be aggregated in different ways. (E.g., you can do expected-value-style reasoning with theories that say “how much” better one option is than another, but perhaps not with those that only say which option is better; I discuss this here.)
It seems plausible that similar things could occur for different types of uncertainty, such that, for example, realising that you’re actually uncertain about decision theories rather than moral theories changes how you should aggregate the “views” of the different theories. But this is currently my own speculation; I haven’t tried to work through the details, and haven’t seen prior work explicitly discussing matters like this (beyond the semi-related or in-passing stuff covered in this post).
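As a rough illustration of that aggregation point (again with hypothetical numbers and illustrative option names): cardinal theories supply scores that can be probability-weighted directly, whereas merely ordinal theories only supply rankings, for which something like a credence-weighted Borda count has been proposed instead. The two rules can pick different options:

```python
# Rough sketch (hypothetical numbers): why the aggregation rule can depend on
# the kind of theory one is uncertain over.

options = ["A", "B", "C"]

# Cardinal case: each theory assigns a numerical choiceworthiness to each
# option, so we can take a credence-weighted average (MEC-style).
cardinal_theories = [
    {"credence": 0.6, "scores": {"A": 10.0, "B": 4.0, "C": 0.0}},
    {"credence": 0.4, "scores": {"A": -2.0, "B": 6.0, "C": 1.0}},
]
mec = {o: sum(t["credence"] * t["scores"][o] for t in cardinal_theories)
       for o in options}
print(max(mec, key=mec.get))   # "A": highest expected choiceworthiness

# Merely ordinal case: each theory only ranks the options (best first), so we
# use a credence-weighted Borda-style count instead.
ordinal_theories = [
    {"credence": 0.6, "ranking": ["A", "B", "C"]},
    {"credence": 0.4, "ranking": ["B", "C", "A"]},
]
borda = {o: 0.0 for o in options}
for t in ordinal_theories:
    for position, o in enumerate(t["ranking"]):
        # Best option gets (n - 1) points, next gets (n - 2), and so on.
        borda[o] += t["credence"] * (len(options) - 1 - position)
print(max(borda, key=borda.get))   # "B": highest weighted Borda score
```

In this toy case the expected-choiceworthiness rule picks A while the Borda-style rule picks B, which is the sense in which the choice of aggregation rule, and not just one’s credences, can matter.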
I think it’s valuable because of value of information considerations. Some types of uncertainty are dramatically more reducible than others. Some will be more prone to gotchas (sign flips).
Yes, that’s part of what I mean by the “resolving uncertainties” side. Value of information has to do with the chance new information would change one’s current views, which is a matter of (partially) resolving uncertainty, rather than a matter of making decisions given current uncertainties (if we ignore for a moment the possibility of making decisions about whether to gain more info).
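For concreteness, here’s a toy sketch of that value-of-information point (hypothetical numbers and option names): the value of resolving an uncertainty before deciding is the gap between the expected value of choosing once you know the answer and the expected value of choosing now, and it’s positive only when the answer might change which option you’d pick:

```python
# Toy value-of-(perfect-)information sketch, with made-up numbers.

credences = [0.5, 0.5]            # current credence in two possibilities
payoffs = {                       # value of each option under each possibility
    "option_X": [10.0, -8.0],
    "option_Y": [2.0, 2.0],
}

def expected_value(option):
    return sum(p * v for p, v in zip(credences, payoffs[option]))

# Decide now: pick the option with the highest expected value.
value_decide_now = max(expected_value(o) for o in payoffs)   # 2.0 (option_Y)

# Decide after resolving the uncertainty: under each possibility, pick the
# best option for that possibility, then weight by current credences.
value_decide_later = sum(
    p * max(payoffs[o][i] for o in payoffs)
    for i, p in enumerate(credences)
)                                                            # 6.0

print(value_decide_later - value_decide_now)   # 4.0: value of resolving it
```

In this toy case resolving the uncertainty is worth 4 units, precisely because the answer would flip the choice from option_Y to option_X in one of the two possibilities.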
I’ll be writing a post that has to do with resolving uncertainties soon, and then another applying VoI to moral uncertainty. I wasn’t planning to discuss the different types of uncertainty there (I was planning to instead focus just on different subtypes of moral uncertainty). But your comments have made me think maybe it’d be worth doing so (if I can think of something useful to say, and if saying it doesn’t add more length/complexity than it’s worth).
Thanks! And yes, I’d agree with that.