Good point. I wondered if maybe this should instead be called “Overview of moral uncertainty” or “Making decisions under moral uncertainty” (edit: I’ve now changed it to the latter title, both here and on the EA forum, partly due to your feedback).
Do you mean adding a paragraph or two at the start, or a whole other post?
Part of me wants to say I don’t think there’s really that much to say about moral uncertainty itself, before getting into how to handle it. I also think it’s probably best explained through examples, so it seems efficient to combine those with examples of how to handle it (e.g., the Devon example illustrates both an instance of moral uncertainty and a way of handling it, saving the reader time). But maybe in that case I should explicitly note early in the post that I’ll mostly illustrate moral uncertainty through the examples to come, rather than explaining it abstractly up front.
But I am also considering writing a whole post on different types/sources of moral uncertainty (particularly integrating ideas from posts by Justin Shovelain, Kaj_Sotala, Stuart_Armstrong, an anonymous poster, and a few other places). This would, for example, discuss how it can be conceptualised under moral realism vs under antirealism. So maybe I’ll try to write that soon, and then provide near the start of this post a very brief summary of (and link to) that.
Do you mean adding a paragraph or two at the start, or a whole other post?
I would think an entire post would be needed, yes. (At least!)
But I am also considering writing a whole post on different types/sources of moral uncertainty (particularly integrating ideas from posts by Justin Shovelain, Kaj_Sotala, Stuart_Armstrong, an anonymous poster, and a few other places). This would, for example, discuss how it can be conceptualised under moral realism vs under antirealism. So maybe I’ll try to write that soon, and then provide near the start of this post a very brief summary of (and link to) that.
This sounds promising.
Basically, I’m wondering the following (this is an incomplete list):
What is this ‘moral uncertainty’ business?
Where did this idea come from; what is its history?
What does it mean to be uncertain about morality?
Is ‘moral uncertainty’ like uncertainty about facts? How so? Or is it different? How is it different?
Is moral uncertainty like physical, computational, or indexical uncertainty? Or all of the above? Or none of the above?
How would one construe increasing or decreasing moral uncertainty?
… etc., etc. To put it another way—Eliezer spends a big part of the Sequences discussing probability and uncertainty about facts, conceptually and practically and mathematically, etc. It seems like ‘moral uncertainty’ deserves some of the same sort of treatment.
Ok, this has increased the likelihood I’ll commit the time to writing that other post. I think it’ll address some of the sorts of questions you list, but not all of them.
One reason is that I’m not a proper expert on this.
Another reason is that I think that, very roughly speaking, answers to a lot of questions like that would be “Basically import what we already know about regular/factual/empirical uncertainty.” For moral realists, the basis for the analogy seems clear. For moral antirealists, one can roughly imagine dealing with moral uncertainty as something like trying to work out the fact of the matter about one’s own preferences, or one’s idealised preferences (something like CEV). But that other post I’ll likely write should flesh this out a bit more.
Part of me wants to say I don’t think there’s really that much to say about moral uncertainty itself, before getting into how to handle it.
I’m confused by you saying this, given that you indicate having read my post on types of moral uncertainty. To me the different types warrant different ways of dealing with them. For example, intrinsic moral uncertainty was defined as different parts of your brain having fundamental disagreements about what kind of a value system to endorse. That kind of situation would require entirely different kinds of approaches, ones that would be better described as psychological than decision-theoretical.
It seems to me that before outlining any method for dealing with moral uncertainty, one would need to outline what type of MU it was applicable for and why.
I’m now definitely planning to write the above-mentioned post, discussing various definitions, types, and sources of moral uncertainty. As this will require thinking more deeply about that topic, I’ll be able to answer more properly once I have. (And I’ll also comment a link to that post in this thread.)
For now, some thoughts in response to your comment, which are not fully-formed and not meant to be convincing; just meant to indicate my current thinking and maybe help me formulate that other post through discussion:
Well, I did say “part of me”… :)
I think the last two sentences of your comment raise an interesting point worth taking seriously.
I do think there are meaningfully different ways we can think about what moral uncertainty is, and that a categorisation/analysis of the different types and “sources” (i.e., why is one morally uncertain) could advance one’s thinking (that’s what that other post I’ll write will aim to do).
I think the main way it would help advance one’s thinking is by giving clues as to what one should do to resolve one’s uncertainty.
This seems to me to be what your post focuses on, for each of the three types you mention. E.g., to paraphrase (to relate this more to humans than AI, though both applications seem important), you seem to suggest that if the source of the uncertainty is our limited self-knowledge, we should engage in processes along the lines of introspection. In contrast, if the source of our uncertainty is that we don’t know what we’d enjoy/value because we haven’t tried it, we should engage with the world to learn more. This all seems right to me.
But what I cover in this post and the following one is how to make decisions when one is morally uncertain. I.e., imagining that you are stuck with uncertainty, what do you do? This is a different question to “how do I get rid of this uncertainty and find the best answer” (resolving it).
(Though in reality the best move under uncertainty may actually often be to gather more info—which I’ll discuss somewhat in my upcoming post on applying value of information analysis to this topic, and which the toy sketch after this list also touches on—in which case the matter of how to resolve the uncertainty becomes relevant again.)
I’d currently guess (though I’m definitely open to being convinced otherwise) that the different types and sources of moral uncertainty don’t have substantial bearing on how to make decisions under moral uncertainty. This is for three main reasons:
Analogy to empirical uncertainty: There are a huge number of different reasons I might be empirically uncertain—e.g., I might not have enough data on a known issue, I might have bad data on the issue, I might have the wrong model of a situation, I might not be aware of a relevant concept, or I might have all the right data and model but limited processing/computation ability/effort. And this is certainly relevant to the matter of how to resolve uncertainty. But as far as I’m aware, expected value reasoning/expected utility theory is seen as the “rational” response in any case of empirical uncertainty. (Possibly excluding edge cases like Pascal’s wagers, which in any case seem to be issues related to the size of the probability rather than to the type of uncertainty.) It seems that, likewise, the “right” approach to making decisions under moral uncertainty may apply regardless of the type/source of that uncertainty (especially because MEC was developed by conscious analogy to approaches for handling empirical uncertainty; see the toy sketch after this list).
The fact that most academic sources I’ve seen about moral uncertainty seem to just very briefly discuss what moral uncertainty is, largely through analogy to empirical uncertainty and an example, and then launch into how to make decisions when morally uncertain. (Which is probably part of why I did it that way too.) It’s certainly possible that there are other sources I haven’t seen which discuss how different types/sources might suggest different approaches would be best. It’s also certainly possible the academics all just haven’t thought of this issue, or haven’t taken it seriously enough. But to me this is at least evidence that the matter of types/sources of moral uncertainty shouldn’t affect how one makes decisions under moral uncertainty.
(That said, I’m a little surprised I haven’t yet seen academic sources analysing different types/sources of moral uncertainty in order to discuss the best approaches for resolving it. Maybe they feel that’s effectively covered by more regular moral philosophy work. Or maybe it’s the sort of thing that’s better as a blog post than an academic article.)
I can’t presently see a reason why one should have a different decision-making procedure/aggregation procedure/approach under moral uncertainty depending on the type/source of uncertainty. (This is partly double-counting the points about empirical uncertainty and academic sources, but here I’m also indicating I’ve tried to think this through myself at least a bit.)
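To make the analogy in the first reason (and the value-of-information point above) concrete, here’s a minimal toy sketch in Python. It’s purely my own illustration rather than anything from the academic literature: the theories, credences, choiceworthiness numbers, and the “donate”/“keep_money” actions are all invented, and it ignores real complications like intertheoretic comparisons of value.

```python
# Toy sketch of Maximising Expected Choiceworthiness (MEC) and of a
# simple value-of-information calculation under moral uncertainty.
# All theories, credences, and choiceworthiness numbers are invented
# for illustration; intertheoretic comparison problems are ignored.

# Credence in each moral theory (analogous to probabilities over
# empirical hypotheses; should sum to 1).
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choiceworthiness of each action under each theory (analogous to the
# value of an outcome under each empirical hypothesis).
choiceworthiness = {
    "donate":     {"utilitarianism": 10, "deontology": 2},
    "keep_money": {"utilitarianism": 1,  "deontology": 3},
}

def expected_choiceworthiness(action):
    """Credence-weighted choiceworthiness, mirroring expected value."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

# MEC: choose the action with the highest expected choiceworthiness,
# just as expected utility theory chooses the highest expected value.
best = max(choiceworthiness, key=expected_choiceworthiness)
# donate: 0.6*10 + 0.4*2 = 6.8;  keep_money: 0.6*1 + 0.4*3 = 1.8

# Value of (perfect) information: how much better we'd expect to do if
# we could fully resolve the moral uncertainty before acting.
with_info = sum(
    credences[t] * max(choiceworthiness[a][t] for a in choiceworthiness)
    for t in credences
)  # 0.6*10 + 0.4*3 = 7.2
without_info = expected_choiceworthiness(best)  # 6.8
print(best, round(with_info - without_info, 2))  # donate 0.4
```

Note how the structure is identical to expected value reasoning under empirical uncertainty—only the interpretation of the “hypotheses” changes—which is the main thing the first reason above is pointing at.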
Thanks! This bit in particular:
I think the main way it would help advance one’s thinking is by giving clues as to what one should do to resolve one’s uncertainty. [...] But what I cover in this post and the following one is how to make decisions when one is morally uncertain. I.e., imagining that you are stuck with uncertainty, what do you do? This is a different question to “how do I get rid of this uncertainty and find the best answer” (resolving it).
Makes sense to me, and clarified your approach. I think I agree with it.
So the post I decided to write based on Said Achmiz and Kaj_Sotala’s feedback will now be at least three posts. Turns out you two were definitely right that there’s a lot worth saying about what moral uncertainty actually is!
The first post, which takes an even further step back and compares “morality” to related concepts, is here. I hope to publish the next one, half of a discussion of what moral uncertainty is, in the next couple days.
I’ve just finished the next post too—this one comparing moral uncertainty itself (rather than morality) to related concepts.
I’ve finally gotten around to the post you two would probably be most interested in, on (roughly speaking) moral uncertainty for antirealists/subjectivists (as well as for AI alignment, and for moral realists in some ways). That also touches on how to “resolve” the various types of uncertainty I propose.
Yes, I think thinking that through for that comment clarified things for me as well! Once I’m further through this series, I’ll edit the first posts, and I’ve made a note to mention something like that in the first two posts.
(Also, should’ve mentioned more explicitly—I’d be interested in hearing people’s thoughts on that “current thinking” of mine, to inform the “What is moral uncertainty?” post I’m working on.)