I’ve finally gotten around to the post you two would probably be most interested in, on (roughly speaking) moral uncertainty for antirealists/subjectivists (as well as, in some ways, for AI alignment and for moral realists). It also touches on how to “resolve” the various types of uncertainty I propose.