Asking people to trade off various goods against risk of death allows you to elicit a utility function with a zero point, where death has zero utility. But such a utility function is only determined up to multiplication by a positive constant. With just this information, we can’t even decide how to distribute goods among a population consisting of two people. Depending on how we scale their utility functions, one of them could be a utility monster. If you choose two calibration points for utility functions (say, death and some other outcome O), then you can make interpersonal comparisons of utility — although this comes at the cost of deciding a priori that one person’s death is as good as another’s, and one person’s outcome O is as good as another’s, ceteris paribus, independently of their preferences.
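To make the scaling point concrete, here is a toy example with numbers invented purely for illustration. Say Alice and Bob both report u(death) = 0, u(one apple) = 1, u(two apples) = 1.5, as elicited from their risk tradeoffs. A planner maximizing the sum of utilities splits two apples evenly, since 1 + 1 = 2 beats 1.5 + 0. But multiplying Alice's utility by 100 leaves her individual preferences (and hence everything elicitable from her choices) unchanged, and now 100(1.5) + 0 = 150 beats 100(1) + 1 = 101, so the planner hands both apples to Alice. Nothing in the elicited data tells us which scaling is the "right" one; that is the utility-monster problem the second calibration point is meant to remove.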
Yes, thank you for taking the time to explain.