It’s no more irrational than the idea of agents caring more about themselves than each other.
I don’t get the equivocation of future selves with other agents. Rationality is about winning, but it’s not about your present self winning, it’s about your future selves winning. When you’re engaged in rational decision-making, you’re playing for your future selves. I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now. It would be irrational for me to say “Who cares about that guy’s utility function?”
I don’t get the equivocation of past and future selves with each other.
Rationality is about winning according to some given utility function. Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
Future!you tends to agree with present!you’s values far more often than your closest other allies. As such, an idea of personal identity tends to be useful. It’s not like it’s some fundamental thing that makes you all the same person, though.
I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now.
Present!mangoes are instrumentally useful to make present!you happy. Future!mangoes don’t make present!you or future!you happy, and are therefore not instrumentally helpful. If you thought it was intrinsically valuable that future!you has mangoes, then you would get future!you mangoes regardless of what he thought.
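For concreteness, a minimal sketch of that accounting, with made-up numbers: value a plan by what the time slice that actually experiences the outcome gets out of it.

```python
# A minimal sketch with made-up numbers: value a plan by what the time
# slice that actually experiences the outcome gets out of it.

MANGO_UTILITY = {
    "present_me": +5.0,  # loves mangoes right now
    "future_me": -5.0,   # develops a deep aversion one minute from now
}

def plan_value(who_eats):
    """Utility of acquiring a mango, given which slice ends up eating it."""
    return MANGO_UTILITY[who_eats]

print(plan_value("present_me"))  # +5.0: instrumentally useful to present!me
print(plan_value("future_me"))   # -5.0: helps neither slice, so don't acquire
```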
Rationality is about winning according to some given utility function.
Pretty much any sequence of outcomes can be construed as wins according to some utility function. But rationality is not that trivial. If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
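To make the triviality concrete, here's a minimal sketch (the names are mine, purely illustrative): given any history of choices, you can always construct a utility function that retroactively scores every one of them as the best available option.

```python
# Toy illustration (all names here are made up): for any observed sequence
# of choices, build a utility function under which each choice was optimal.

def rationalizing_utility(observed_choices):
    """Return a utility function that awards 1 to whatever was chosen."""
    chosen = set(observed_choices)
    return lambda outcome: 1.0 if outcome in chosen else 0.0

# Any behavior whatsoever comes out as "winning" by this function:
history = ["burn the money", "eat gravel", "refuse the mango"]
u = rationalizing_utility(history)
assert all(u(act) == 1.0 for act in history)  # every act maximized u
assert u("keep the money") == 0.0             # every alternative scored worse
```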
Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
There are a number of physical differences between time and space, and these differences are very relevant to the way organisms have evolved. In particular, they are relevant to the evolution of agency and decision-making. Our tendency to regard all spatially separated organisms as others but certain temporally separated organisms as ourselves is not an arbitrary quirk; it is the consequence of important and fundamental differences between space and time, such as the temporal (but not spatial) asymmetry of causal connections. When we’re talking about decision-making, it is not arbitrary to treat space and time differently.
If everyone who happens to be connected to me-now along a world line didn’t exist, I would not be an agent. There is no sense in which a momentary self (if such an entity is even coherent) would be a decision maker, if it merely appeared and then disappeared instantaneously. On the other hand, if everyone else within my state boundaries disappeared, I would still be an agent. So there is a principled distinction here. Agency (and consequently decision-making) is intimately tied up with the existence of future “selves”. It is not similarly dependent on the existence of spatially separated “selves”.
Future!you tends to agree with present!you’s values far more often than your closest other allies.
Talking of different time slices as distinct selves is a useful heuristic for many purposes, but you’re elevating it to something more fundamental, and that’s a mistake. Every single mental process associated with the generation of self-hood is a temporally extended process. There is no such thing as a genuinely instantaneous self. So when you’re talking about future!me and present!me, you’re already talking about extended segments of world-lines (or world-tubes) rather than points. It is not a difference in kind to talk of a slightly longer segment as a single “self”, one that encompasses both future!me and present!me.
If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
No, but you should be able to respond “Well, my actions look irrational according to Steve’s utility function, but you should be evaluating them using my utility function, not his,” or similarly, “Well, my actions look irrational according to future!me’s utility function, but you should be evaluating them using present!me’s utility function, not his.”
Is it future!me’s or future!my? Somehow, my English classes never went into much depth about characterization tags.
Your decisions aren’t totally instantaneous. You depend on at least a little of future!you and past!you before you can really be thought of as much in the way of a rational agent, but that doesn’t mean you should think of future!you from an hour later as exactly the same person. It especially doesn’t mean that the you who wakes up tomorrow morning is the same as the you who goes to sleep tonight. Those two are only vaguely connected.
Well, “only vaguely” is a massive understatement. There’s a helluva lot of mutual information between me tomorrow and me today, much, much more than between me today and you today.
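A toy version of that comparison, with made-up parameters (the 5% “overnight noise” and the binary trait encoding are purely illustrative): model me-tomorrow as a noisy copy of me-today and you-today as an independent draw, then estimate the mutual information of each pair.

```python
import math
import random

random.seed(0)

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for x, y in zip(xs, ys):
        px[x] = px.get(x, 0.0) + 1.0 / n
        py[y] = py.get(y, 0.0) + 1.0 / n
        pxy[(x, y)] = pxy.get((x, y), 0.0) + 1.0 / n
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# "Me today": 1000 binary traits. "Me tomorrow": the same traits, each
# flipped with 5% probability. "You today": an entirely independent draw.
me_today = [random.randint(0, 1) for _ in range(1000)]
me_tomorrow = [b if random.random() > 0.05 else 1 - b for b in me_today]
you_today = [random.randint(0, 1) for _ in range(1000)]

print(mutual_information(me_today, me_tomorrow))  # ~0.7 bits per trait
print(mutual_information(me_today, you_today))    # ~0 bits per trait
```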
Yeah, but there’s no continuity.
What do you mean? The differences between me now and me in epsilon seconds are of order epsilon, aren’t they?
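Formally, that’s just the claim that a person’s world-line is continuous; a sketch, with $S(t)$ standing for the full physical state at time $t$ and $d$ for any reasonable similarity metric on states (my notation, nothing standard):

$$d\bigl(S(t),\, S(t+\epsilon)\bigr) = O(\epsilon) \quad \text{as } \epsilon \to 0.$$

On that reading there is no privileged $\epsilon$ at which one “self” ends and the next begins, only gradual change.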