I do not feel any less confused after reading the post.
This sounds like moral relativism but has nothing to do with it.
Oh, it’s much worse: it is epistemic relativism. You are saying that there is no one true answer to the question and that we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.
Well, what do you anticipate experiencing? Something or nothing?
I don’t know! That’s the whole point. I do not need permission to trust my intuitions; I want to understand which intuitions are trustworthy and which are not. There seems to be a genuine question about what happens and which rules govern it, and you are trying to sidestep it by saying “whatever happens—happens”.
I can imagine a universe whose rules are such that teleportation kills a person, and a universe in which it doesn’t. I’d like to know how our universe works.
Questions 1 and 3 are explicitly about values, so I don’t think they amount to epistemic relativism.
There seems to be a genuine question here, but it is not at all clear that there actually is one. It is pretty hard to characterize what this question amounts to, i.e. what the difference would be between two worlds where the question has different answers. I take OP to be espousing the view that the question isn’t meaningful for this reason (though I do think they could have laid this out more clearly).
They are formulated as such, but the crux is not about values. People tend to agree that one should care about the successor of one’s subjective experience. The question is whether there will be one or not. And this is a question of fact.
Not really? We can easily do so if there exists some kind of “soul”. I can conceptualize a world where a soul always stays tied to the initial body, and as soon as the body is destroyed, the soul is destroyed as well. Or one where the soul always moves to a new body if there is such an opportunity, or where it chooses between the two based on some hidden variable, so that to us it appears to be random.
Say there is a soul. We inspect a teleportation process, and we find that, just like your body and brain, the soul disappears on the transmitter pad, and an identical soul appears on the receiver. What would this tell you that you don’t already know?
What, in principle, could demonstrate that two souls are in fact the same soul across time?
By “soul” here I mean a carrier for identity across time. A unique verification code of some sort. So that after we conduct a cloning experiment we can check and see that one person has the same code, while the other has a new code. Likewise, after the teleportation we can check and see whether the teleported person has the same code as before.
It really doesn’t seem like our universe works like that, but knowing this doesn’t help much in understanding how exactly our reality works.
What observation could demonstrate that this code indeed corresponded to the metaphysically important sense of continuity across time? What would the difference be between a world where it did and a world where it didn’t?
Good question.
Consider a simple cloning experiment. You are put to sleep and a clone of you is created; after awakening, you are not sure whether you are the original or the clone. Now consider this modification: after the original and the clone awaken, each is told their code. Then each of them participates in the next iteration of the same experiment, until there are 2^n people, each of whom has participated in n iterations of the experiment.
Before the whole chain of experiments starts, you know your code and that there are 2^n possible paths your subjective experience can take through this iterated cloning experiment. Only one of these paths will be yours in the end. Suppose you go through the whole chain of experiments and it turns out you have preserved your initial code. Now let’s consider two hypotheses: 1) the code does not correspond to your continuity across time; 2) the code does correspond to your continuity across time. Under 1), you’ve experienced a rare event with probability 1/2^n. Under 2), it was the only possibility. Therefore you update in favor of 2).
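The update in this argument can be made explicit with a small Bayes calculation (a sketch, assuming equal priors over the two hypotheses; the function name is my own):

```python
from fractions import Fraction

def posterior_code_tracks_identity(n, prior=Fraction(1, 2)):
    """Bayes update after ending up with your original code through n rounds.

    H1: the code is unrelated to continuity -> P(data|H1) = (1/2)**n
        (each round, a 50/50 chance your experience follows the
         code-preserving copy)
    H2: the code tracks continuity -> P(data|H2) = 1
    """
    like_h1 = Fraction(1, 2) ** n
    like_h2 = Fraction(1)
    evidence = prior * like_h2 + (1 - prior) * like_h1
    return prior * like_h2 / evidence

# After 10 rounds the posterior for H2 is 1024/1025, i.e. about 0.999.
```

With n = 1 the posterior is only 2/3, so the argument gains its force from iterating the experiment.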
This is the most interesting answer I’ve ever gotten to this line of questioning. I will think it over!
The original mistake is that feeling of a “carrier for identity across time”, for which upon closer inspection we find no evidence, and which we thus have to let go of. Once you realize that you can explain all we observe and all you feel with merely, at any given time, your current mind, including its memories and aspirations for the future, but without any further “carrier for identity”, i.e. without any super-material extra soul, there is resolving peace about this question.
With that outlook, do you still plan for tomorrow? From big things like a career, to small things like getting the groceries in. If you do these things just as assiduously after achieving this “resolving peace” as before, it would not seem to have made any difference and was just a philosophical recreation.
Btw, regarding:
Mind, in this discussion about cloning thought experiments I’d find it natural that there are not many currently tangible consequences, even if we did find a satisfying answer to some of the puzzling questions around that topic.
That said, I guess I’m not the only one here with a keen intrinsic interest in understanding the nature of self even absent tangible & direct implications, or if these implications may remain rather subtle at this very moment.
The answer that satisfies me is that I’ll wonder about cloning machines and teleporters when someone actually makes one. 😌
I obviously still care for tomorrow, as is perfectly in line with the theory.
I take you to be implying that, under the hypothesis emphasized here, that the self is not a unified long-term self the way we tend to imagine, one would have to logically conclude something like: “why care, then, even about ‘my’ own future?!”. This is absolutely not implied:
The questions around which we can get “resolving peace” (see context above!) refer to things like: if someone came along proposing to clone/transmit/… you, what should you do? We may of course find peace about that question (which I’d say I have, for now) without giving up caring about our ‘natural’ successors in standard life.
Note how you can still have particular care for your close kin or so after realizing your preferential care about these is just your personal (or our general cultural) preference w/o meaning you’re “unified” with your close kin in any magical way. It is equally all too natural for me to still keep my specific (and excessive) focus & care on the well-being of my ‘natural’ successors, i.e. on what we traditionally call “my tomorrow’s self”, even if I realize that we have no hint at anything magical (no persistent super-natural self) linking me to it; it’s just my ingrained preference.
The word “just” in the sense used here is always a danger sign. “X is just Y” means “X is Y and is not a certain other thing Z”, but without stating the Z. What is the Z here? What is the thing beyond brute, unanalysed preference, that you are rejecting here? You have to know what it is to be able to reject it with the words “just” and later “magical”, and further on “super-natural”. Why is it your preference? In another comment you express a keen interest in understanding the nature of self, yet there is an aversion here to understanding the sources of your preferences.
Too natural? Excessive focus and care? What we traditionally call? This all sounds to me like you are trying not to know something.
I’m sorry, but I find you’re nitpicking on words out of context rather than engaging with what I mean. Maybe my English is imperfect, but I think it’s not that unreadable:
A)
… ‘just’ might sometimes be used in such an abbreviated way, but here, the second part of my very sentence itself readily says what I mean by the ‘just’ (see “w/o meaning you’re …”).
B)
Recall, as I wrote in my comment, I try to address the worry “why care [under my stated views], even about ‘my’ own future?”. Here I rephrase the sentence you quote in a paragraph that avoids the three elements you criticize. I hope the meaning becomes clear then:
Evolution has ingrained in my mind a very strong preference to care for the next-period inhabitant(s) X of my body. This deeply ingrained preference to preserve the well-being of X tends to override everything else. So, however much my reflections suggest to me that X is not as unquestionably related to me as I instinctively would have thought before closer examination, I will not be able to give up my commonly observed preference for doing (mostly) the best for X in situations where there is no cloning or anything of the like going on.
(You can safely ignore “(and excessive)”. With it, I just meant to mention in passing that we also tend to be too egoistic; our strong specific focus on, and care for, our own body’s future is not good for the world overall. But this is quite a separate matter.)
I didn’t mean to be nitpicking, and I believe your words have well expressed your thoughts. But I found it striking that you treat preference as a brick wall that cannot be further questioned (or if you do, all you find behind it is “evolution”), while professing the virtue of an examined self.
In our present-day world I am as sure as I need to be that (barring having a stroke in the night) I am going to wake up tomorrow as me, little changed from today. I would find speculations about teleporters much more interesting if such machines actually existed. My preferences are not limited to my likely remaining lifespan, and the fact that I will not be around to have them then does not mean that I cannot have them and act on them now.
But the question of “what, if anything, is the successor of your subjective experience” does not obviously have a single factual answer.
If souls are real (and the Hard Problem boils down to “it’s the souls, duh”), then a teleporter that doesn’t reattach/reconstruct your soul seems like it doesn’t fit the hypothetical. If the teleporter perfectly reassembles you, that should apply to all components of you, even extraphysical ones.
Then I’d like to see some explanation of why it doesn’t have an answer, which would be adding back to normality. I understand that I’m confused about the matter in some way. But I also understand that just saying “don’t think about it” doesn’t clear my confusion in the slightest.
Never mind consciousness and the so-called Hard Problem. By “soul” here I simply mean the carrier for identity over time, which may very well be physical. Yes, indeed, it may be the case that a perfect teleporter/cloning machine is simply impossible because of such a soul. That would be an appropriate resolution of these problems.
I’m not saying it doesn’t, I’m saying it’s not obvious that it does. Normalcy requirements don’t mean all our possibly-confused questions have answers; they just put restrictions on what those answers should look like. So, if the idea of successors-of-experience is meaningful at all, our normal intuition gives us desiderata like “chains of successorship are continuous across periods of consciousness” and “chains of successorship do not fork or merge with each other under conditions that we currently observe.”
If you have any particular notion of successorship that meets all the desiderata you think should matter here, whether or not a teleporter creates a successor is a question of fact. But it’s not obvious what the most principled set of desiderata is, and for most sets of desiderata it’s probably not obvious whether there is a unique notion of successorship.
OP is advocating for something along the lines of “There is no uniquely-most-principled notion of successorship; the fact that different people have different desiderata, or that some people arbitrarily choose one idea of successorship over another that’s just as logical, is a result of normal value differences.” There is no epistemic relativism; given any particular person’s most valued notion of successorship, everyone can, in principle, agree on whether any given situation preserves it.
The relativism is in choosing which (whose) notion to use when making any given decision. Even in a world where souls are real and most people agree that continuity-of-consciousness is equivalent to continuity-of-soul-state, which is preserved by those nifty new teleporters, some curmudgeon who thinks that continuity-of-physical-location is also important shouldn’t be forced into a teleporter against their will, since they expect (and all informed observers will agree) that their favored notion of continuity of consciousness will be ended by the teleporter.
Nice challenge! There’s no “epistemic relativism” here, even if I see where you’re coming from.
First recall the broader altruism analogy: would you say it’s epistemic relativism if I tell you that you can simply look inside yourself and freely see how much you care about, and how closely connected you feel to, people in a faraway country? You surely wouldn’t reproach me for that; you surely agree it’s your own ‘decision’ (or intrinsic inclination or so) that determines how much weight or care you personally put on those persons.
Now, remember the core elements I posit. “You” are (i) your mind of right here and now, including (ii) its tendency for deeply felt care for, and connection to, your ‘natural’ successors, and that’s about all there is to be said about you (plus there’s memory). From this everything follows. Evolution has shaped us to shortcut the standard physical ‘continuation’ of you in coming periods as a ‘unique entity’ in our mind, and has made you typically care sort of ‘100%’ about your first few seconds’ worth of forthcoming successors [in analogy: just as nature has shaped you to (usually) care tremendously also for your direct children or siblings]. Now there are (hypothetical) cases so warped and so evolutionarily unusual that you have no clear tastes: that clone or this clone; whether you are or are not destroyed in the process; while asleep or not; and so on, through all the puzzles we can come up with. For all these cases, you have no clear taste as to which of your ‘successors’ you care much about and which you don’t. In our inner mind’s sloppy speak: we don’t know “who we’ll be”. Equally importantly, you may see it one way, and your best friends may see it very differently. And what I’m explaining is that, given the axiom of “you” being you only right here and now, there simply IS no objective truth to be found about who is you later or not, and so there is no objective answer as to how much you ought to care about each of those many clones in all the different situations: it really does boil down only to how much you care about them. As, on the most fundamental level, “you” are only your mind right now.
And if you find you’re still wondering how much to care about which potential clone in which circumstances, it’s not the fault of the theory that it does not answer that for you. You’re asking the outside world a question that can only be answered inside you. The same way that, again, I cannot tell you how much you feel (or should feel) for third person x.
For sure, I can tell you that from a moral perspective you ought to care behaviorally, and there I might use a specific rule that attributes an equal weight or so to each conscious clone; in that domain you could complain if I don’t give you a clear answer. But that’s exactly not what the discussion here is about.
I propose that a specific “self” is a specific mind at a given moment. “Killing” X in the usual speak, and the relevant harm associated with it, means preventing X’s natural successors, about whom X cares so deeply, from coming into existence. If X cares about his direct physical-body successors only, disintegrating and teleporting him means we destroy all he cared for; we prevented all he wanted to happen from happening; we have, so to say, killed him, as we prevented his successors from coming to life. If he looked forward to a nice trip to Mars, where he is to be teleported, there’s no reason to think we ‘killed’ anyone in any meaningful sense, as “he” is a happy space traveller finding ‘himself’ (well, his successors…) doing just the stuff he anticipated them to be doing. There’s nothing more objective to be said about our universe ‘functioning’ this way or that. As any self is only ephemeral, and a person is a succession of instantaneous selves linked to one another by memory and forward-looking preferences, it really is those preferences that matter for the decision, not some outside ‘fact’ about the universe.