I think it’s a mistake to always treat distinct temporal slices of the same person as different agents, since agency is tied up with decision making and decision making is a temporally extended process. I presume you regard intransitive preferences as irrational, but why? The usual rationale is that it turns you into a money pump, but since any realistic money pumping scenario will be temporally extended, it’s unclear why this is evidence for irrationality on your view. If an arbitrageur can make money by engaging in a sequence of trades, each with a different agent, why should any one of those agents be convicted of irrationality?
Anyway, the problem with hyperbolic discounting is not just that the agent’s utility function changes with time. The preference switches are implicit in the agent’s current utility function; they are predictable. As a self-aware hyperbolic discounter, I know right now that I will be willing to make deals in the future that will undo deals I make now and cost me some additional money, and that this condition will persist unless I self-modify, allowing my adversary to pump an arbitrarily large amount of money out of me (or out of my future selves, if you prefer). I will sign a contract right now pledging to pay you $55 next Friday in return for $100 the following Saturday, even though I know right now that when Friday comes around I will be willing to sign a contract paying you $105 on Saturday in exchange for $50 immediately.
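To make the arithmetic explicit, here is a minimal sketch in Python, assuming the standard hyperbolic discount factor 1/(1 + k·t); the contract amounts are from the example above, while the discount rate k = 1.5/day and the 7-day delay to Friday are illustrative choices of mine:

```python
# A minimal sketch of the pump above, assuming the standard hyperbolic
# discount factor 1/(1 + k*t). The dollar amounts come from the example;
# the rate k = 1.5/day and the 7-day delay to Friday are my assumptions.
def present_value(cash_flows, k=1.5):
    """Discounted value of (delay_in_days, amount) pairs under 1/(1 + k*t)."""
    return sum(amount / (1 + k * t) for t, amount in cash_flows)

# Signed today: pay $55 on Friday (t = 7), receive $100 on Saturday (t = 8).
deal_now = [(7, -55), (8, +100)]
# Signed on Friday itself: receive $50 now (t = 0), pay $105 tomorrow (t = 1).
deal_on_friday = [(0, +50), (1, -105)]

print(present_value(deal_now))        # ~ +2.91: looks good today, so I sign
print(present_value(deal_on_friday))  # ~ +8.00: looks good on Friday, so I sign
# Undiscounted total: -55 + 100 + 50 - 105 = -$10, pumped out of the agent.
```

Both contracts have positive present value at the moment each is signed, yet together they hand the adversary $10.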
since agency is tied up with decision making and decision making is a temporally extended process.
You can make the decision to consider the options and let future!you make a better-informed decision.
I presume you regard intransitive preferences as irrational, but why?
If you prefer paper to rock, scissors to paper, and rock to scissors, that can be taken advantage of on the spot, without any change in your preferences. If your preferences change, you don’t have intransitive preferences. You do have to take into account that an action changes your preferences, and that future!you might not do what you want, as with the murder pill.
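A toy sketch of that money pump in Python (the one-cent fee is my own illustrative assumption):

```python
# A toy money pump for genuinely intransitive preferences (my illustration;
# the fee is an assumption). The agent strictly prefers paper to rock,
# scissors to paper, and rock to scissors, and will pay 1 cent to trade
# what it holds for anything it prefers.
preferred_swap = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

holding, fees_collected = "rock", 0
for _ in range(3):                      # one full trip around the cycle
    holding = preferred_swap[holding]   # each trade is one the agent endorses
    fees_collected += 1                 # ...and each costs the agent a cent

print(holding, fees_collected)  # "rock 3": back where it started, 3 cents poorer
```

The loop can run forever: the agent’s fixed, unchanging preferences endorse every single trade, which is the sense in which intransitivity is exploitable without any temporal preference switch.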
The preference switches are implicit in the agent’s current utility function; they are predictable.
They are predictable, but they are not part of the agent’s current utility function. It’s no more irrational than the idea of agents caring more about themselves than each other. An adversary could take advantage of this by setting up a prisoner’s dilemma, just as past!you and future!you could be taken advantage of with one. You might use a decision theory that avoids that, but that’s not the same as changing the utility function.
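As a concrete sketch of such an intertemporal prisoner’s dilemma (the payoff numbers are my own illustration, not from the discussion):

```python
# A toy intertemporal prisoner's dilemma (payoffs are illustrative).
# Each slice picks "abstain" (cooperate) or "indulge" (defect) and cares
# only about its own entry: (present!you's payoff, future!you's payoff).
payoffs = {
    ("abstain", "abstain"): (3, 3),
    ("abstain", "indulge"): (0, 4),
    ("indulge", "abstain"): (4, 0),
    ("indulge", "indulge"): (1, 1),
}

# Whatever future!you does, present!you scores higher by indulging...
assert payoffs[("indulge", "abstain")][0] > payoffs[("abstain", "abstain")][0]
assert payoffs[("indulge", "indulge")][0] > payoffs[("abstain", "indulge")][0]
# ...and symmetrically for future!you, so both indulge and land on (1, 1),
# even though (3, 3) was available. That gap is what an adversary can exploit.
assert payoffs[("abstain", "indulge")][1] > payoffs[("abstain", "abstain")][1]
assert payoffs[("indulge", "indulge")][1] > payoffs[("indulge", "abstain")][1]
```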
It’s no more irrational than the idea of agents caring more about themselves than each other.
I don’t get the conflation of future selves with other agents. Rationality is about winning, but it’s not about your present self winning; it’s about your future selves winning. When you’re engaged in rational decision-making, you’re playing for your future selves. I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now. It would be irrational for me to say “Who cares about that guy’s utility function?”
I don’t get the conflation of past and future selves with each other.
Rationality is about winning according to some given utility function. Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
Future!you tends to share present!you’s values far more closely than your closest other allies do. As such, an idea of personal identity tends to be useful. It’s not like it’s some fundamental thing that makes you all the same person, though.
I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now.
Present!mangoes are instrumentally useful to make present!you happy. Future!mangoes don’t make present!you or future!you happy, and are therefore not instrumentally helpful. If you thought it was intrinsically valuable that future!you has mangoes, then you would get future!you mangoes regardless of what he thought.
Rationality is about winning according to some given utility function.
Pretty much any sequence of outcomes can be construed as wins according to some utility function. But rationality is not that trivial. If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
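The first sentence has a standard one-line construction behind it (my illustration, not from the thread): for any history of actions h* the agent actually performed, define

```latex
u(h) =
\begin{cases}
  1 & \text{if } h = h^{*},\\
  0 & \text{otherwise,}
\end{cases}
```

and the agent’s actual behavior is trivially utility-maximizing under u. That is exactly why “winning according to some utility function” can’t, by itself, be the standard of rationality.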
Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
There are a number of physical differences between time and space, and these differences are very relevant to the way organisms have evolved. In particular, they are relevant to the evolution of agency and decision-making. Our tendency to regard all spatially separated organisms as others but certain temporally separated organisms as ourselves is not an arbitrary quirk; it is the consequence of important and fundamental differences between space and time, such as the temporal (but not spatial) asymmetry of causal connections. When we’re talking about decision-making, it is not arbitrary to treat space and time differently.
If everyone who happens to be connected to me-now along a world line didn’t exist, I would not be an agent. There is no sense in which a momentary self (if such an entity is even coherent) would be a decision maker, if it merely appeared and then disappeared instantaneously. On the other hand, if everyone else within my state boundaries disappeared, I would still be an agent. So there is a principled distinction here. Agency (and consequently decision-making) is intimately tied up with the existence of future “selves”. It is not similarly dependent on the existence of spatially separated “selves”.
Future!you tends to share present!you’s values far more closely than your closest other allies do.
Talking of different time slices as distinct selves is a useful heuristic for many purposes, but you’re elevating it to something more fundamental, and that’s a mistake. Every single mental process associated with the generation of selfhood is a temporally extended process. There is no such thing as a genuinely instantaneous self. So when you’re talking about future!me and present!me, you’re already talking about extended segments of world-lines (or world-tubes) rather than points. It is not a difference in kind to talk of a slightly longer segment as a single “self”, one that encompasses both future!me and present!me.
If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
No, but you should be able to respond “Well, my actions look irrational according to Steve’s utility function, but you should be evaluating them using my utility function, not his,” or, similarly, “Well, my actions look irrational according to future!me’s utility function, but you should be evaluating them using present!me’s utility function, not his.”
Is it future!me’s or future!my? Somehow, my English classes never went into much depth about characterization tags.
Your decisions aren’t totally instantaneous. You depend on at least a little of future!you and past!you before you can really be thought of as much of a rational agent, but that doesn’t mean you should think of future!you from an hour later as exactly the same person. It especially doesn’t mean that the you who wakes up the next morning is the same as the you who goes to sleep the night before. Those two are only vaguely connected.
Well, “only vaguely” is a massive understatement. There’s a helluva lot of mutual information between me tomorrow and me today, much, much more than between me today and you today.
Yeah, but there’s no continuity.
What do you mean? The differences between me now and me in epsilon seconds are of order epsilon, aren’t they?