Do you mean adding a paragraph or two at the start, or a whole other post?
I would think an entire post would be needed, yes. (At least!)
But I am also considering writing a whole post on different types/sources of moral uncertainty (particularly integrating ideas from posts by Justin Shovelain, Kaj_Sotala, Stuart_Armstrong, an anonymous poster, and a few other places). This would, for example, discuss how moral uncertainty can be conceptualised under moral realism vs under antirealism. So maybe I’ll try to write that soon, and then provide near the start of this post a very brief summary of (and link to) it.
This sounds promising.
Basically, I’m wondering the following (this is an incomplete list):
What is this ‘moral uncertainty’ business?
Where did this idea come from; what is its history?
What does it mean to be uncertain about morality?
Is ‘moral uncertainty’ like uncertainty about facts? How so? Or is it different? How is it different?
Is moral uncertainty like physical, computational, or indexical uncertainty? Or all of the above? Or none of the above?
How would one construe increasing or decreasing moral uncertainty?
… etc. To put it another way: Eliezer spends a big part of the Sequences discussing probability and uncertainty about facts — conceptually, practically, and mathematically. It seems like ‘moral uncertainty’ deserves some of the same sort of treatment.
Ok, this has increased the likelihood that I’ll commit the time to writing that other post. I think it’ll address some of the sorts of questions you list, but not all of them.
One reason is that I’m not a proper expert on this.
Another reason is that I think that, very roughly speaking, answers to a lot of questions like that would be “Basically import what we already know about regular/factual/empirical uncertainty.” For moral realists, the basis for the analogy seems clear. For moral antirealists, one can roughly imagine dealing with moral uncertainty as something like trying to work out the fact of the matter about one’s own preferences, or one’s idealised preferences (something like CEV). But that other post I’ll likely write should flesh this out a bit more.