Yes, that’s part of what I mean by the “resolving uncertainties” side. Value of information has to do with the chance new information would change one’s current views, which is a matter of (partially) resolving uncertainty, rather than a matter of making decisions given current uncertainties (if we ignore for a moment the possibility of making decisions about whether to gain more info).
I’ll be writing a post that has to do with resolving uncertainties soon, and then another applying VoI to moral uncertainty. I wasn’t planning to discuss the different types of uncertainty there (I was planning to instead focus just on different subtypes of moral uncertainty). But your comments have made me think maybe it’d be worth doing so (if I can think of something useful to say, and if saying it doesn’t add more length/complexity than it’s worth).
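(To make the VoI point concrete, here's a toy calculation with made-up numbers of my own, not from any post: information is valuable exactly to the extent that it might change which action we'd pick.)

```python
# Toy value-of-perfect-information sketch. All numbers are illustrative
# assumptions: two theories T1/T2, two actions, current credence P(T1) = 0.6.

def expected_value(payoffs, p_t1):
    """Expected payoff of an action given credence p_t1 in theory T1."""
    payoff_t1, payoff_t2 = payoffs
    return p_t1 * payoff_t1 + (1 - p_t1) * payoff_t2

p_t1 = 0.6
actions = {"A": (10, 0),   # great if T1 is true, worthless if T2
           "B": (4, 4)}    # safe middle option either way

# Best we can do deciding under current uncertainty:
ev_now = max(expected_value(pay, p_t1) for pay in actions.values())  # ~6.0

# If we could first learn which theory is true, we'd pick the best action
# in each world, weighted by current credences:
ev_informed = (p_t1 * max(pay[0] for pay in actions.values())
               + (1 - p_t1) * max(pay[1] for pay in actions.values()))  # ~7.6

# Positive only because learning "T2" would switch our choice from A to B:
voi = ev_informed - ev_now  # ~1.6
print(ev_now, ev_informed, voi)
```

If no possible answer would change our choice (say, A dominated B in both worlds), the VoI here would be zero, which is the sense in which VoI is about resolving uncertainty rather than acting under it.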