Utility and probability functions are not perfect or neatly walled off. But that doesn’t mean you should change them to fix a problem with your expected utility function. The goal of a probability function is to represent the actual probability of an event happening as closely as possible. And the goal of a utility function is to represent which states you would prefer the universe to be in. The latter shouldn’t change unless you’ve actually changed your preferences.
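To make the division of labor explicit, here is the textbook expected-utility formula (nothing specific to this exchange, just the standard form), which keeps the two pieces separate:

\[ EU(a) = \sum_{o} P(o \mid a)\, U(o). \]

The probability term is supposed to track how the world actually is, and the utility term is supposed to track what you want, so a disappointing expected-utility number should be fixed by correcting whichever piece is actually wrong, not by fiddling with the other one.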
There’s plenty of evidence of people changing their preferences over significant periods of time: it would be weird not to. And I am well aware that the theory of stable utility functions is standardly patched up with a further theory of terminal values, for which there is also no direct evidence.
Of course people can change their preferences. But if your preferences are not consistent, you will likely end up in situations that are less preferable than if you had kept the same preferences the entire time. It also makes you a potential money pump.
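To make the money-pump point concrete, here is a rough Python sketch; the goods, the fee, and the particular preference cycle are invented for illustration, not taken from anything above. An agent whose preferences run in a circle (A over B, B over C, C over A) will pay for each “upgrade” and end up right back where it started, strictly poorer:

```python
# A minimal money-pump sketch (hypothetical goods, fee, and preference cycle).
# The agent prefers A to B, B to C, and C to A -- a circular preference.
CYCLIC_PREFERENCES = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, over)

def prefers(x: str, y: str) -> bool:
    """True if the agent would pay a small fee to swap y for x."""
    return (x, y) in CYCLIC_PREFERENCES

def money_pump(start: str, fee: float, rounds: int) -> float:
    """Offer the agent its preferred swap once per round; return its net wealth."""
    holding, wealth = start, 0.0
    for _ in range(rounds):
        for offer in ("A", "B", "C"):
            if prefers(offer, holding):
                holding = offer   # the agent happily accepts the upgrade...
                wealth -= fee     # ...and pays the fee for it, every time
                break
    return wealth

# After nine rounds the agent is holding A again, exactly where it started,
# but it has paid the fee nine times.
print(money_pump(start="A", fee=1.0, rounds=9))  # -9.0
```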
What? Terminal values are not a patch for utility functions. It’s basically another word for the same thing: which states you would prefer the world to end up in. And how can there be evidence for a decision theory?
Well, I’ve certainly seen discussions here in which the observed inconsistency among our professed values is treated as a non-problem on the grounds that those are mere instrumental values, and our terminal values are presumed to be more consistent than that.
Insofar as stable utility functions depend on consistent values, it’s not unreasonable to describe such discussions as positing consistent terminal values in order to support a belief in stable utility functions.
Well, how is this different from changing our preferences into utility functions to fix problems with our naive preferences?
I don’t know what you mean. All I’m saying is that you shouldn’t change your preferences because of a problem with your expected utility function. Your preferences are just what you want. Utility functions are just a mathematical way of expressing that.
Human preferences don’t naturally satisfy the VNM axioms, so by expressing them as a utility function you’ve already changed them.
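For reference, the VNM theorem asks four things of a preference relation ≽ over lotteries (this is just the standard informal statement):

- Completeness: for any lotteries L and M, either L ≽ M or M ≽ L.
- Transitivity: if L ≽ M and M ≽ N, then L ≽ N.
- Continuity: if L ≽ M ≽ N, there is some p in [0, 1] with pL + (1 − p)N ∼ M.
- Independence: if L ≽ M, then pL + (1 − p)N ≽ pM + (1 − p)N for every N and every p in (0, 1].

Only preferences satisfying all four are guaranteed an expected-utility representation, and ordinary human preferences plausibly fail transitivity or independence here and there.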
I don’t see why our preferences can’t be expressed by a utility function even as they are. The only reason it wouldn’t work out is if there were circular preferences, and I don’t think most people’s preferences would turn out to be truly circular if they were to think about the specific occurrence and decide what they really preferred.
Mapping out which outcomes are preferred over others is not enough to assign them an actual utility, though; you’d also have to judge quantitatively how much more preferable one outcome is than another. But even then, I think most people could do it if they thought about it enough. The problem is that our utility functions are complex and we don’t really know what they are, not that they don’t exist.
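For what it’s worth, the VNM construction does give a standard recipe for the quantitative step (sketched here in its textbook form, not something anyone above has committed to): fix a best outcome B and a worst outcome W, set u(B) = 1 and u(W) = 0, and for any other outcome X define

\[ u(X) = p \quad \text{where} \quad X \sim p \cdot B + (1 - p) \cdot W, \]

that is, p is the probability at which you are indifferent between getting X for certain and a gamble that gives B with probability p and W otherwise. The hard part is exactly the one conceded above: introspecting those indifference points reliably.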
Or they might violate the independence axiom. But in any case, what do you mean by “think about the specific occurrence and decide what they really preferred”? The result of such thinking is likely to depend on the exact order they thought about things in.
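As a concrete illustration of that kind of independence violation, the classic Allais-style numbers (included here only as an example) go like this: many people prefer a sure $1M over a gamble giving $5M with probability 0.10, $1M with probability 0.89, and nothing with probability 0.01, yet also prefer $5M with probability 0.10 (nothing otherwise) over $1M with probability 0.11 (nothing otherwise). No single utility function u fits both choices:

\[ u(1M) > 0.10\,u(5M) + 0.89\,u(1M) + 0.01\,u(0) \iff 0.11\,u(1M) > 0.10\,u(5M) + 0.01\,u(0), \]
\[ 0.10\,u(5M) + 0.90\,u(0) > 0.11\,u(1M) + 0.89\,u(0) \iff 0.10\,u(5M) + 0.01\,u(0) > 0.11\,u(1M), \]

and the two derived inequalities contradict each other.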