This was really great! I’ve been trying to get a handle on a bunch of ideas in this direction for a while, and recently had a long conversation about moral uncertainty and uncertainty about decision theory (as proposed in this paper by Will MacAskill), in which I expressed a lot of confusion and discomfort with commonly suggested solutions, for reasons similar to what you outlined in this post. This has clarified a good amount of what I was trying to say, though I have many more thoughts.
However, my thoughts are still all super vague and I have some urgent deadlines coming up, so I probably won’t have time to make them coherent in the next few days. If anyone reads this a week from now or later, you’re welcome to ping me to write down my thoughts on this.
Have you happened to write down your thoughts on this in the meantime?