I think the main way it would help advance one’s thinking is by giving clues as to what one should do to resolve one’s uncertainty. [...] But what I cover in this post and the following one is how to make decisions when one is morally uncertain. That is, imagining that you are stuck with uncertainty, what do you do? This is a different question from “how do I get rid of this uncertainty and find the best answer” (resolving it).
Makes sense to me, and clarified your approach. I think I agree with it.
So the post I decided to write based on Said Achmiz and Kaj_Sotala’s feedback will now be at least three posts. Turns out you two were definitely right that there’s a lot worth saying about what moral uncertainty actually is!
The first post, which takes an even further step back and compares “morality” to related concepts, is here. I hope to publish the next one, half of a discussion of what moral uncertainty is, in the next couple days.
I’ve finally gotten around to the post you two would probably be most interested in, on (roughly speaking) moral uncertainty for antirealists/subjectivists (as well as for AI alignment, and for moral realists in some ways). That also touches on how to “resolve” the various types of uncertainty I propose.
Yes, I think thinking through that for that comment clarified things for myself as well! Once I’m further through this series, I’ll edit the first posts, and I’ve made a note to mention something like that in the first two posts.
Thanks! This bit in particular
I’ve just finished the next post too—this one comparing moral uncertainty itself (rather than morality) to related concepts.