Paul, since my background is in AI, it is natural for me to ask how a “duty” gets cashed out computationally, if not as a contribution to expected utility. If I’m not using some kind of moral points, how do I calculate what my “duty” is?
How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?
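As a worked version of the arithmetic I have in mind (treating "lives saved" as the unit of value, which is itself an assumption for illustration, not a settled answer):

```python
# Expected lives saved for each gamble, treating lives saved as the unit of
# value -- an assumption for illustration, not a settled moral claim.
p_a, lives_a = 0.10, 20   # 10% chance of saving 20 lives
p_b, lives_b = 0.90, 1    # 90% chance of saving 1 life

print(p_a * lives_a)  # 2.0 expected lives saved
print(p_b * lives_b)  # 0.9 expected lives saved
```

On that accounting the long shot wins, despite failing nine times out of ten; a duty-based rule has to say something about whether that is the right answer.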
If saving a life takes lexical priority, should I weigh a 1/googolplex (or 1/Graham's Number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?
Such questions form the basis of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
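For concreteness, here is a minimal sketch of what "cashing out as expected utility maximization" looks like, using made-up utility numbers of my own; it also shows how the lexical-priority question above becomes a straight numerical comparison once probabilities enter:

```python
from typing import Callable, List, Tuple

# A gamble is a list of (probability, outcome) pairs; probabilities sum to 1.
Gamble = List[Tuple[float, str]]

def expected_utility(gamble: Gamble, u: Callable[[str], float]) -> float:
    """Score a gamble by the probability-weighted sum of its outcome utilities."""
    return sum(p * u(outcome) for p, outcome in gamble)

# Made-up utility numbers, purely for illustration; a lexical-priority view
# would refuse to put lives and unhappiness on the same finite scale at all.
UTILITIES = {
    "save one life": 1.0,
    "a billion people very unhappy for fifty years": -1_000_000.0,
    "status quo": 0.0,
}

def u(outcome: str) -> float:
    return UTILITIES[outcome]

TINY = 1e-100  # stand-in for 1/googolplex, which would underflow a float to 0

long_shot_rescue: Gamble = [(TINY, "save one life"), (1 - TINY, "status quo")]
certain_misery: Gamble = [(1.0, "a billion people very unhappy for fifty years")]

# With any finite utility assigned to a life, the long-shot term is swamped by
# the certain misery; only an infinite (lexical) weight makes them comparable.
print(expected_utility(long_shot_rescue, u))   # ~1e-100
print(expected_utility(certain_misery, u))     # -1000000.0
```

The point is not these particular numbers; it is that any agent making consistent tradeoffs among such gambles is behaving as if it had some such utility function and were maximizing its expectation.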