If I’m 50% sure that the asymmetry between suffering and happiness exists just because it’s very difficult to make humans happy (so that, in principle, achieving great happiness is about as important as avoiding great suffering), and 50% sure that the asymmetry comes from something intrinsic to how these things work (so that avoiding great suffering is maybe a hundred times as important), how should I act in the meantime? As if avoiding great suffering is slightly over 50 times as important as achieving great happiness, slightly under 2 times as important, or something in between? This is where I think you need the sort of moral uncertainty theory that Nick Bostrom has been working on.
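To make the two candidate answers concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the two hypotheses can be put on a common numeric scale, which is itself part of what a theory of moral uncertainty would have to justify:

```python
# Credence split evenly between the two hypotheses.
p = 0.5

# Hypothesis A: suffering and suffering-avoidance matter equally (ratio 1:1).
# Hypothesis B: avoiding suffering is 100x as important (ratio 100:1).

# Rule 1: fix the value of happiness at 1 under both hypotheses,
# then average the suffering weights.
suffering_weight = p * 1 + p * 100
print(suffering_weight)      # 50.5 -> "slightly over 50 times as important"

# Rule 2: fix the value of suffering at 1 under both hypotheses,
# then average the happiness weights.
happiness_weight = p * 1 + p * (1 / 100)
print(1 / happiness_weight)  # ~1.98 -> "slightly under 2 times as important"
```

The gap between 50.5 and roughly 1.98 comes entirely from which quantity is held fixed when averaging across the hypotheses; that normalization choice is exactly the kind of question such a theory needs to answer.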