Well, among economists it is accepted as rational for your preferences to change with context, including time. As you probably know, there are EU equivalence theorems showing that for any pair (p0, U0) there are many other pairs (p1, U1), (p2, U2), etc. that produce all the same choices. I break this symmetry by saying the p is about the world while the U is about you. The patterns of choice that are explained by changes in you should go in U, and the patterns of choice that are explained by changes in what you believe about the world should go in p.
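To make the equivalence claim concrete, here is one standard construction of such an equivalent pair (a sketch only; the scaling function $f$ below is an illustrative device, not anything specified in the comment). For a finite state space $\Omega$, the expected utility of an act $a$ under $(p_0, U_0)$ is
$$\mathrm{EU}_0(a) = \sum_{\omega \in \Omega} p_0(\omega)\, U_0(a, \omega).$$
Pick any function $f(\omega) > 0$, let $Z = \sum_{\omega} p_0(\omega) f(\omega)$, and define
$$p_1(\omega) = \frac{p_0(\omega)\, f(\omega)}{Z}, \qquad U_1(a, \omega) = \frac{Z\, U_0(a, \omega)}{f(\omega)}.$$
Then for every act $a$,
$$\mathrm{EU}_1(a) = \sum_{\omega} \frac{p_0(\omega)\, f(\omega)}{Z} \cdot \frac{Z\, U_0(a, \omega)}{f(\omega)} = \mathrm{EU}_0(a),$$
so $(p_1, U_1)$ ranks all acts exactly as $(p_0, U_0)$ does, even though the beliefs and the utilities each differ. The symmetry-breaking proposal above then says that only one of these observationally equivalent pairs correctly separates facts about the world from facts about you.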
That’s surprising for me to hear, and seems to contradict the information given at http://en.wikipedia.org/wiki/Time_inconsistency#In_behavioral_economics

Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker’s selves will agree with the choices made by each self.
Later on it says:
This would imply disagreement by people’s different selves on decisions made and a rejection of the time consistency aspect of rational choice theory.
But I thought this rejection means rejection as a positive/descriptive theory of how humans actually behave, not as a normative theory of what is rational. Are you saying that economists no longer consider time consistency to be normative?
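For reference, the formal property at stake, sketched in the standard textbook way (the parameters $\delta$ and $k$ are just illustrative): with exponential discounting, a reward $x$ received $t$ periods from now is valued at $\delta^t x$ for a fixed $\delta \in (0,1)$. The comparison between $x_1$ at delay $t_1$ and $x_2$ at delay $t_2$ turns on
$$\frac{\delta^{t_1} x_1}{\delta^{t_2} x_2} = \delta^{\,t_1 - t_2}\, \frac{x_1}{x_2},$$
which depends only on the gap $t_1 - t_2$, so pushing both rewards further into the future by the same amount never flips the preference: every self agrees. Under hyperbolic discounting, where the value is $x / (1 + k t)$, the analogous ratio
$$\frac{x_1\,(1 + k t_2)}{x_2\,(1 + k t_1)}$$
does change as both delays are shifted, which is what produces the preference reversals between selves that the Wikipedia passage describes.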
ETA: Whoever is voting Robin down, why are you doing that?
Conflicts are unfortunate, but hardly irrational. If it is not irrational for two different people at the same time to have different preferences, it is not irrational for the same person at different times to have different preferences.
I have to admit, I always thought of time consistency as a standard part of individual rationality, and didn’t consider that anyone might take the position that you’re taking. I’ll have to think about this some more. In the meantime, what about my other question: how to actually become pre-rational? Have you looked at this comment yet?
If people could cheaply bind their future selves, and didn’t directly prefer not to do so, it would be irrational of them to let their future selves have different preferences.
If you owned any slave and could cheaply do so, you’d want to mold it to share exactly your preferences. But should you treat your future selves as your slaves?
Upon further reflection, I think altruism towards one’s future selves can’t justify having different preferences, because there should be a set of compromise preferences such that both your current self and your future selves are better off if you bind yourself (both current and future) to that set.
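A toy version of that compromise argument, with made-up numbers just to show its shape: suppose the current self $A$ and a future self $B$ face choices among options $X$, $Y$, $Z$, with
$$u_A(X) = 1,\quad u_A(Y) = 0.6,\quad u_A(Z) = 0, \qquad u_B(Z) = 1,\quad u_B(Y) = 0.6,\quad u_B(X) = 0.$$
If each self imposes its own favorite whenever it happens to be in control, and control is split evenly, each self expects $0.5 \times 1 + 0.5 \times 0 = 0.5$. If both selves are instead bound in advance to the compromise option $Y$, each gets $0.6$. Both selves prefer the binding, which is the sense in which merely caring about one’s future selves doesn’t by itself justify letting their preferences differ.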
The logical structure of the slave argument above is flawed. Here’s another argument that shares the same structure, but is clearly wrong:
If you owned any slave and could cheaply do so, you’d want to ensure it doesn’t die of neglect. But should you treat your future selves as your slaves?
Here’s another version that makes more sense:
If you had an opportunity to mold a friend to share exactly your preferences, and could do so cheaply, you might still not want to do so, and wouldn’t be considered irrational for it. So why should you be considered irrational for not molding your future selves to share exactly your preferences?
One answer here might be that changing your friend’s preferences is wrong because it hurts him according to his current preferences, while doing the same to your future selves isn’t wrong because they don’t exist yet. But I think Robin’s moral philosophy says that we should respect the preferences of nonexistent people, so his position seems consistent with that.
This seems like the well-worn discussion on whether rational agents should be expected to change their preferences. Here’s Omohundro on the topic:
“Their utility function will be precious to these systems. It encapsulates their values and any changes to it would be disastrous to them. If a malicious external agent were able to make modifications, their future selves would forevermore act in ways contrary to their current values. This could be a fate worse than death! Imagine a book loving agent whose utility function was changed by an arsonist to cause the agent to enjoy burning books. Its future self not only wouldn’t work to collect and preserve books, but would actively go about destroying them. This kind of outcome has such a negative utility that systems will go to great lengths to protect their utility functions.”
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
He goes on to discuss the issue in detail and lists some exceptional cases.