Part of me wants to say I don’t think there’s really that much to say about moral uncertainty itself, before getting into how to handle it.
I’m confused by you saying this, given that you indicate having read my post on types of moral uncertainty. To me, the different types warrant different ways of dealing with them. For example, intrinsic moral uncertainty was defined as different parts of your brain having fundamental disagreements about what kind of a value system to endorse. That kind of situation would require entirely different kinds of approaches, ones better described as psychological than decision-theoretic.
It seems to me that before outlining any method for dealing with moral uncertainty, one would need to outline what type of MU it was applicable for and why.
I’m now definitely planning to write the above-mentioned post, discussing various definitions, types, and sources of moral uncertainty. As this will require thinking more deeply about that topic, I’ll be able to answer more properly once I’ve done so. (And I’ll also comment a link to that post in this thread.)
For now, some thoughts in response to your comment, which are not fully formed and not meant to be convincing; they’re just meant to indicate my current thinking and maybe help me formulate that other post through discussion:
Well, I did say “part of me”… :)
I think the last two sentences of your comment raise an interesting point worth taking seriously.
I do think there are meaningfully different ways we can think about what moral uncertainty is, and that a categorisation/analysis of the different types and “sources” (i.e., why one is morally uncertain) could advance one’s thinking. (That’s what that other post I’ll write will aim to do.)
I think the main way it would help advance one’s thinking is by giving clues as to what one should do to resolve one’s uncertainty.
This seems to me to be what your post focuses on, for each of the three types you mention. E.g., to paraphrase (to relate this more to humans than AI, though both applications seem important), you seem to suggest that if the source of the uncertainty is our limited self-knowledge, we should engage in processes along the lines of introspection. In contrast, if the source of our uncertainty is that we don’t know what we’d enjoy/value because we haven’t tried it, we should engage with the world to learn more. This all seems right to me.
But what I cover in this post and the following one is how to make decisions when one is morally uncertain. I.e., imagining that you are stuck with uncertainty, what do you do? This is a different question to “how do I get rid of this uncertainty and find the best answer” (resolving it).
(Though in reality the best move under uncertainty may actually often be to gather more info—which I’ll discuss somewhat in my upcoming post on applying value of information analysis to this topic, and which the sketch after the list below briefly quantifies—in which case the matter of how to resolve the uncertainty becomes relevant again.)
I’d currently guess (though I’m definitely open to being convinced otherwise) that the different types and sources of moral uncertainty don’t have substantial bearing on how to make decisions under moral uncertainty. This is for three main reasons:
Analogy to empirical uncertainty: There are a huge number of different reasons I might be empirically uncertain—e.g., I might not have enough data on a known issue; I might have bad data on the issue; I might have the wrong model of a situation; I might not be aware of a relevant concept; I might have all the right data and the right model but limited processing/computation ability/effort. And this is certainly relevant to the matter of how to resolve uncertainty. But as far as I’m aware, expected value reasoning/expected utility theory is seen as the “rational” response in any case of empirical uncertainty. (Possibly excluding edge cases like Pascal’s wagers, which in any case seem to be issues related to the size of the probability rather than to the type of uncertainty.) It seems that, likewise, the “right” approach to making decisions under moral uncertainty may apply regardless of the type/source of that uncertainty (especially because MEC was developed by conscious analogy to approaches for handling empirical uncertainty). The sketch after this list illustrates how the two calculations share a single formula.
The fact that most academic sources I’ve seen about moral uncertainty seem to just very briefly discuss what moral uncertainty is, largely through analogy to empirical uncertainty and an example, and then launch into how to make decisions when morally uncertain. (Which is probably part of why I did it that way too.) It’s certainly possible that there are other sources I haven’t seen which discuss how different types/sources might suggest different approaches would be best. It’s also certainly possible the academics all just haven’t thought of this issue, or haven’t taken it seriously enough. But to me this is at least evidence that the matter of types/sources of moral uncertainty shouldn’t affect how one makes decisions under moral uncertainty.
(That said, I’m a little surprised I haven’t yet seen academic sources analysing different types/sources of moral uncertainty in order to discuss the best approaches for resolving it. Maybe they feel that’s effectively covered by more regular moral philosophy work. Or maybe it’s the sort of thing that’s better as a blog post than an academic article.)
I can’t presently see a reason why one should have a different decision-making procedure/aggregation procedure/approach under moral uncertainty depending on the type/source of uncertainty. (This is partly double-counting the points about empirical uncertainty and academic sources, but here I’m also indicating I’ve tried to think this through myself at least a bit.)
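Since both the analogy in the first reason and the value-of-information parenthetical above are easier to see with numbers, here’s a minimal sketch in Python. All the credences, actions, theory names, and choiceworthiness scores are made up purely for illustration, and it assumes (as MEC does) that the theories’ choiceworthiness scales are intertheoretically comparable:

```python
# Toy sketch of (1) maximising expected choiceworthiness (MEC) and
# (2) the value of resolving moral uncertainty before acting.
# All numbers and names below are hypothetical.

# Credences over two moral theories, and each theory's choiceworthiness
# score for two candidate actions (assumed to be on comparable scales).
credences = {"theory_A": 0.6, "theory_B": 0.4}
cw = {("x", "theory_A"): 5, ("x", "theory_B"): -10,
      ("y", "theory_A"): 3, ("y", "theory_B"): 8}
actions = ["x", "y"]

def expected_choiceworthiness(action):
    # Identical in form to an expected-value calculation under empirical
    # uncertainty: only the credences and scores enter the formula, never
    # the type or source of the uncertainty.
    return sum(p * cw[(action, t)] for t, p in credences.items())

# (1) The MEC recommendation while still uncertain:
# EC(x) = 0.6*5 + 0.4*(-10) = -1.0; EC(y) = 0.6*3 + 0.4*8 = 5.0, so pick y.
best_now = max(expected_choiceworthiness(a) for a in actions)

# (2) Expected result if we could first learn which theory is right and
# then pick that theory's best action: 0.6*5 + 0.4*8 = 6.2.
with_info = sum(p * max(cw[(a, t)] for a in actions)
                for t, p in credences.items())

# The difference (1.2 here) bounds what info-gathering is worth.
print(round(with_info - best_now, 2))
```

Nothing in either calculation references why the credences are what they are; only their values enter. That is the numerical intuition behind the first reason above.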
Thanks! This bit in particular:
I think the main way it would help advance one’s thinking is by giving clues as to what one should do to resolve one’s uncertainty. [...] But what I cover in this post and the following one is how to make decisions when one is morally uncertain. I.e., imagining that you are stuck with uncertainty, what do you do? This is a different question to “how do I get rid of this uncertainty and find the best answer” (resolving it).
Makes sense to me, and clarified your approach. I think I agree with it.
So the post I decided to write based on Said Achmiz and Kaj_Sotala’s feedback will now be at least three posts. Turns out you two were definitely right that there’s a lot worth saying about what moral uncertainty actually is!
The first post, which takes an even further step back and compares “morality” to related concepts, is here. I hope to publish the next one, half of a discussion of what moral uncertainty is, in the next couple of days.
I’ve just finished the next post too—this one comparing moral uncertainty itself (rather than morality) to related concepts.
I’ve finally gotten around to the post you two would probably be most interested in, on (roughly speaking) moral uncertainty for antirealists/subjectivists (as well as for AI alignment, and for moral realists in some ways). That also touches on how to “resolve” the various types of uncertainty I propose.
Yes, I think thinking that through for that comment clarified things for me as well! Once I’m further through this series I’ll edit the earlier posts, and I’ve made a note to mention something like that in the first two.
(Also, should’ve mentioned more explicitly—I’d be interested in hearing people’s thoughts on that “current thinking” of mine, to inform the “What is moral uncertainty?” post I’m working on.)