Of course you can differentiate between them. I can differentiate between me-in-five-years and me-in-twenty-years, as well. There exist differences between these things.
I initially thought you were asking how one could identify with a different-but-similar person, given the absence of feedback (“If they have a shitty day, I feel nothing. If they have a good day, I feel nothing.”).
With respect to that, it seems to me that my ability to identify with myself-in-the-future despite the lack of feedback suggests that the lack of feedback isn’t a showstopper, and more generally that what I identify with is more a function of my capacity for empathy than it is of any “me”-ness in the world.
I’m no longer sure I understood you correctly in the first place, though.
And for that matter, I don’t really have that much feedback from me-in-twenty-seconds or me-twenty-seconds-ago, either. My current remembering self has instantaneous inclinations, some of which are predicated on memories or anticipations, but at no point am I ever really a smearing of multiple time slices of myself. (I am probably a smearing of different quantum branches of myself, though, until those selves decohere and I incrementally discover which branch “I” have been on “all along.”)
For example, what is the difference between what we would commonly call “me” and an entity whose conscious experience switches on like a Heaviside step function, initialized with the entire description of my brain state right as I type this question mark? That version of me just started existing, but luckily had molecules and quarks in all the right places to feel and remember everything as if he’s been alive for 26 years and ate tofu for dinner.
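In symbols, one way to put that (my own gloss, nothing standard):

$$\text{experience}(t) = \Theta(t - t_0)\, B(t), \qquad B(t_0) = B_{\text{me}}(t_0)$$

where $\Theta$ is the Heaviside step function, $t_0$ is the instant the question mark gets typed, and $B_{\text{me}}(t_0)$ is the complete description of my brain state at that instant. The entity’s experience is identically zero before $t_0$ and indistinguishable from mine from $t_0$ onward.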
Well, in practical terms, the anticipations matter. I expect decisions I make now to affect the state of me-in-twenty-seconds; if I jump off a tall building, for example, I expect me-in-twenty-seconds to be a smear on the pavement, so if I value me-in-twenty-seconds not being a smear on the pavement, that inclines me not to jump off a tall building.
But no such relationship exists between me and me-twenty-seconds-ago; I don’t expect decisions I make to affect the state of that entity.
You are of course correct, though, that my anticipations are facts about my mind and not facts about reality-other-than-my-mind, which might have all kinds of properties that make my anticipations (and recollections, and current perceptions and beliefs) simply false.
Yeah, your last paragraph was what I meant. In a given moment, I don’t value something about me 20 seconds from now; I value the current experience of thoughts that involve simulations of an idealization of me extrapolated in time. The things I am valuing are immediate thoughts, though, much as altruistic values are rooted in your own immediate valuation of anticipations. My meat computer will act on observations to induce an anticipation of X in my brain. If I want to anticipate Y, then I should do Z to bring about the actions that lead to the anticipation of Y. And once I am at the precise instant that I’m experiencing Y, I’m no longer valuing Y, because my mind is valuing anticipations post-Y.
It is very interesting, given things like closed timelike curves and so on, that there is no anticipation of past selves. It would be great to see a write-up of why the perceived flow of entropy causes me to only future-value an anticipation like being proud of my former actions or accomplishments. I’m sure the right level of articulation is evolutionary biology; I don’t see how having visceral cognitive anticipations of the past could be adaptive. But it’s still interesting. And it’s even more interesting to think that there is some most-like-me entity within the subspace of entities that do have past-looking anticipations, probably wondering why people don’t have future-looking anticipations right now (in a Big Universe, anyway).
The point I was trying to convey was that there is a huge difference between my dying and a copy of me elsewhere dying.
Hence my use of the programming metaphor. If I am an instance, it matters a hell of a lot whether it is this particular instance that gets scrapped or some other instance of Sly.
My argument is that ageing is more like modifying the instance’s variables, whereas the Big Universe copy of Sly is a separate instance.
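To make the metaphor concrete, here’s a minimal Python sketch (the class and its fields are purely illustrative, obviously not a claim about how minds actually work):

```python
class Person:
    """Illustrative stand-in for a mind; the fields are placeholders."""
    def __init__(self, name, age, memories):
        self.name = name
        self.age = age
        self.memories = list(memories)  # each instance owns its own state

sly = Person("Sly", 26, ["learned to cook"])

# Ageing: the *same* instance, with its variables modified in place.
sly.age += 20
sly.memories.append("twenty more years of stuff")

# A Big Universe copy: a *separate* instance that happens to hold identical state.
copy_of_sly = Person(sly.name, sly.age, sly.memories)

# Same contents, different identity.
assert copy_of_sly.age == sly.age
assert copy_of_sly is not sly

# Scrapping one instance does nothing to the other.
del copy_of_sly
assert sly.age == 46
```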
Therefore it makes a lot of sense that I do not consider the copy of Sly to be me. I do not equate the two, as other people here really want to. I also reject the idea that ageing identity loss is comparable to death identity loss; that seems like complete wishful thinking.
You started out by asking how you were supposed to relate in any way to a copy of you. What I’m gathering from our subsequent discussion is that this was a rhetorical question; what you actually meant to express was that you don’t relate in any way to such a copy, and you don’t feel obligated to.
I accept that you don’t, and I agree that you aren’t obligated to.
Identity in the sense we’re discussing is a fact about the mind, not a fact about the world. If you choose to identify solely with your present self and future selves in the same body, and treat everything else that could conceivably exist as not-you, that’s fine. It’s a perfectly reasonable choice, and I’ve no doubt that you can come up with lots of perfectly valid arguments to support making that choice.
The fact that other people make different choices about their identity than you do doesn’t mean that either of you is wrong about what your identity “really” is, or that either of you is ignoring reality in favor of “wishful thinking”.
There are consequences to those choices, of course: if I choose to identify with me-now but not with me-in-ten-years, for example, I will tend to make decisions such that ten years from now I am worse off. If I choose to identify with me-in-this-body but not copies of me, I will tend to make decisions such that copies of me are worse off. (Obviously, this doesn’t actually create consequences in cases where none of my decisions can affect copies of me that exist.) Etc.
Yes, it was rhetorical.
I see now: I was operating on the fact-about-the-world level.