Under moral uncertainty, rational EV maximization will look a lot like preserving attainable utility / choiceworthiness across your different moral theories / utility functions while you resolve that uncertainty.
This seems right to me, and I think it’s essentially the rationale for the idea of the Long Reflection.
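A toy numerical sketch of the point (the theories, credences, and utilities below are entirely my own made-up illustration, not from the original claim): with nontrivial credence split across two moral theories, an option-preserving "reflect first" action that keeps each theory's attainable utility nearly intact can beat committing to either theory on expected choiceworthiness.

```python
# Two hypothetical moral theories with assumed credences.
credences = {"T1": 0.6, "T2": 0.4}

# Hypothetical choiceworthiness each theory assigns to each action.
# Committing now is great if that theory is correct and bad otherwise;
# reflecting first preserves most of each theory's attainable utility,
# minus a small delay cost, until the uncertainty is resolved.
utilities = {
    "commit_T1": {"T1": 10, "T2": -5},
    "commit_T2": {"T1": -5, "T2": 10},
    "reflect":   {"T1": 9,  "T2": 9},
}

def expected_choiceworthiness(action):
    return sum(credences[t] * utilities[action][t] for t in credences)

for action in utilities:
    print(action, expected_choiceworthiness(action))
# commit_T1 -> 4.0, commit_T2 -> 1.0, reflect -> 9.0:
# under these assumed numbers, EV maximization favors the
# option-preserving action, i.e. it looks like attainable-utility
# preservation while moral uncertainty gets resolved.
```

Under these (invented) numbers the reflect action wins precisely because it keeps high attainable utility on every theory you still assign credence to, which is the flavor of the Long Reflection rationale.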