People don’t have utilities; we have desires, preferences, moral sentiments, and so on, and we want to (or have to) translate them into utility-equivalent formats. We also have meta-preferences that we want to respect, such as “treat the desires/happiness/value of similar beings similarly”. That leads straight to unbounded utility as the first candidate.
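(A minimal sketch of that step, assuming the meta-preference is cashed out as additivity across beings: if each of $N$ relevantly similar beings contributes the same value $v > 0$, then

$$U(N) = N \cdot v,$$

which grows without bound as $N \to \infty$, so no bounded utility function can treat the $N$-th being exactly like the first.)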
So I’m looking at “what utility should we choose” rather than “what utility do we have” (because we don’t have any currently).
I agree that we do not objectively have a utility function, but rather the kinds of things that you say we have. I am simply saying that the utility function those things most resemble is a bounded one, and people’s absolute refusal to do anything for the sake of an extremely small probability proves that.
I am not sure that the meta-preference you mention “leads straight to unbounded utility.” I agree that, understood in a certain way, it might. But if so, it would also lead straight to accepting extremely small probabilities of extremely large rewards. I think that people’s desire to avoid the latter is stronger than their desire for the former, if they have the former at all.
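(To make the trade-off concrete, a minimal sketch with $B$ standing in for an assumed bound: with an unbounded utility $U$, for any probability $p > 0$ there is a reward $R$ large enough that

$$p \cdot U(R) > c$$

for any sure benefit worth $c$, so the agent must take the gamble. With a bounded utility $U \le B$, the expected gain from a probability-$p$ offer is at most

$$p \cdot B \to 0 \quad \text{as } p \to 0,$$

so sufficiently small probabilities can be ignored no matter how large the promised reward.)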
I do not have that particular meta-preference, because I think it is a mistaken result of a true meta-preference for being logical and reasonable. One can be logical and reasonable while preferring nearer benefits over more distant ones, even when the benefits are similar in themselves.
“I think that people’s desire to avoid the latter is stronger than their desire for the former, if they have the former at all.”
Yes, which is what my system is set up for. It allows people to respect their meta-preference, up to the point where mugging and similar issues would become possible.