Ok, this has increased the likelihood I’ll commit the time to writing that other post. I think it’ll address some of the sorts of questions you list, but not all of them.
One reason is that I’m not a proper expert on this.
Another reason is that, very roughly speaking, I think the answer to a lot of questions like that would be "Basically, import what we already know about regular/factual/empirical uncertainty." For moral realists, the basis for that analogy seems clear. For moral antirealists, one can roughly imagine handling moral uncertainty as something like trying to work out the fact of the matter about one's own preferences, or one's idealised preferences (something like coherent extrapolated volition, i.e., CEV). But the other post I'll likely write should flesh this out a bit more.