Well, you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else's position and asking how good or bad that would be, relative to some other position). Also, the fact that human behavior doesn't perfectly fit a utility function isn't in itself a huge issue: just fit a utility function to the observed behavior as best you can (this is the "revealed preference" approach to utility).
Ken Binmore has a rather good paper on this topic, see here.
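To make the "best fit" idea concrete, here is a minimal sketch of one standard way to operationalize it: treat each observed binary choice as revealing a noisy utility comparison, and fit a linear utility by logistic regression on utility differences (a random-utility / logit model). The attributes, data, and function names are all made up for illustration; this is just one simple fitting method, not a claim about how Binmore or anyone else does it.

```python
import math

# Hypothetical data: each record is a binary choice between option A and
# option B, each described by two made-up attributes (money, leisure
# hours), plus 1 if A was chosen and 0 if B was chosen.
choices = [
    ((10.0, 2.0), (8.0, 4.0), 1),
    ((5.0, 8.0), (9.0, 1.0), 0),
    ((6.0, 5.0), (6.0, 2.0), 1),
    ((7.0, 3.0), (4.0, 6.0), 1),
    ((3.0, 7.0), (6.0, 2.0), 0),
]

def utility(w, x):
    """Linear utility: a weighted sum of the option's attributes."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_utility(choices, steps=20000, lr=0.1):
    """Fit utility weights by gradient descent on the logistic loss of
    utility differences, so the fitted utility function best explains
    the observed choices (the 'revealed preference' best fit)."""
    w = [0.0, 0.0]
    n = len(choices)
    for _ in range(steps):
        grad = [0.0, 0.0]
        for a, b, chose_a in choices:
            diff = utility(w, a) - utility(w, b)
            p = 1.0 / (1.0 + math.exp(-diff))  # modelled P(A is chosen)
            for i in range(2):
                grad[i] += (p - chose_a) * (a[i] - b[i])
        for i in range(2):
            w[i] -= lr * grad[i] / n
    return w

w = fit_utility(choices)
# The fitted weights define a utility function that rationalizes the
# observed choices as well as a linear model can.
```

The point of the exercise: even though no real person's choices will fit such a model exactly, the fitted function still gives you a usable utility representation of their behavior, which is what the revealed-preference approach asks for.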