And the goal of a utility function is to represent what states you would prefer the universe to be in. This also shouldn’t change unless you’ve actually changed your preferences.
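As a rough illustration (the states and numbers here are made up, not anything from the discussion): a utility function is just a map from world states to numbers, where preferring state A to state B corresponds to assigning A the larger number.

```python
# Minimal sketch with made-up states and values: a utility function assigns
# a number to each world state, and "prefer A to B" just means u(A) > u(B).
# Changing the numbers is what "changing your preferences" would amount to.
utility = {
    "world_where_I_wrote_the_book": 2.0,
    "world_where_I_watched_TV": 1.0,
}

def prefers(state_a: str, state_b: str) -> bool:
    """True if state_a is strictly preferred to state_b under `utility`."""
    return utility[state_a] > utility[state_b]

assert prefers("world_where_I_wrote_the_book", "world_where_I_watched_TV")
```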
There’s plenty of evidence of people changing their preferences over significant periods of time: it would be weird not to.
Of course people can change their preferences. But if your preferences are not consistent, you will likely end up in situations that are less preferable than if you had held the same preferences the entire time. It also makes you a potential money pump.
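To make the money-pump point concrete, here is a small sketch (the items, fee, and amounts are all hypothetical) of an agent with cyclic preferences being traded around the cycle and charged a small fee for each swap it is happy to make:

```python
FEE = 1  # hypothetical fee (in dollars) charged per swap

# Cyclic (intransitive) preferences: each key is strictly preferred to its
# value, so apple > banana > cherry > apple.
prefers_over = {"apple": "banana", "banana": "cherry", "cherry": "apple"}

def run_money_pump(start_item: str, wealth: int, laps: int) -> int:
    """Trade the agent around its preference cycle, charging FEE per trade."""
    item = start_item
    for _ in range(3 * laps):  # three trades complete one lap of the cycle
        # Find the item the agent strictly prefers to what it currently holds...
        better = next(k for k, v in prefers_over.items() if v == item)
        wealth -= FEE          # ...which it is happy to pay a small fee to get.
        item = better
    return wealth              # back to the same item, strictly poorer

print(run_money_pump("apple", wealth=100, laps=10))  # 70: thirty dollars gone, nothing gained
```

Each lap the agent ends up holding the very item it started with, minus three fees; an agent with a consistent (transitive) preference ordering would simply refuse one of the trades.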
And I am well aware that the theory of stable utility functions is standardly patched up with a further theory of terminal values, for which there is also no direct evidence.
What? Terminal values are not a patch for utility functions. It’s basically another word for the same thing: the states you would prefer the world to end up in. And how can there be evidence for a decision theory?
Terminal values are not a patch for utility functions.
Well, I’ve certainly seen discussions here in which the observed inconsistency among our professed values is treated as a non-problem on the grounds that those are mere instrumental values, and our terminal values are presumed to be more consistent than that.
Insofar as stable utility functions depend on consistent values, it’s not unreasonable to describe such discussions as positing consistent terminal values in order to support a belief in stable utility functions.