Oh, it’s much worse. It is epistemic relativism. You are saying that there is no one true answer to the question and we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.
Nice challenge! There's no "epistemic relativism" here, though I do see where you're coming from.
First recall the broader altruism analogy: Would you say it's epistemic relativism if I tell you that you can simply look inside yourself and freely see how much you care about, how closely connected you feel to, people in a faraway country? You surely wouldn't reproach me for that; you'd surely agree it's your own 'decision' (or intrinsic inclination, or some such) that determines how much weight or care you personally put on those persons.
Now, remember the core elements I posit. "You" are (i) your mind of right here and now, including (ii) its tendency toward deeply felt care & connection to your 'natural' successors, and that's about all there is to be said about you (plus there's memory). From this everything follows.

It is evolution that has shaped us to represent, as a mental shortcut, the standard physical 'continuation' of you in coming periods as a 'unique entity', and has made you typically care something like '100%' about your first few seconds' worth of forthcoming successors [in analogy: just as nature has shaped you to (usually) care tremendously for your direct children or siblings]. Now there are (hypothetical) cases so warped, so evolutionarily unusual, that you have no clear tastes: that clone or this clone; whether you are or aren't destroyed in the process; while asleep or not; and so on through all the puzzles we can come up with. For all these cases, you have no clear taste as to which of your 'successors' you care much about and which you don't. In our inner mind's sloppy speak: we don't know "who we'll be". Equally importantly, you may see it one way, and your best friends may see it very differently.

And what I'm explaining is that, given the axiom of "you" being you only right here and now, there simply IS no objective truth to be found about who is you later and who is not, and so there is no objective answer as to how much you ought to care about each of those many clones in all their different situations: it really does boil down to how much you care about them. Because, on the most fundamental level, "you" are only your mind right now.
And if you find you're still wondering how much to care about which potential clone in which circumstances, it's not the fault of the theory that it doesn't answer that for you. You're asking the outside world a question that can only be answered inside you. In the same way that, again, I cannot tell you how much you feel (or should feel) for a third person x.
I can, for sure, tell you that from a moral perspective you ought to care more, behaviorally speaking, and there I might use a specific rule that attributes equal weight to each conscious clone, or some such; in that domain you could rightly complain if I didn't give you a clear answer. But that is precisely not what the discussion here is about.
I can imagine a universe with rules such that teleportation kills a person, and a universe in which it doesn't. I'd like to know how our universe works.
I propose that a specific "self" is a specific mind at a given moment. The usual-speak "killing" of X, and the relevant harm associated with it, means preventing X's natural successors, about whom X cares so deeply, from coming into existence. If X cares about his physical-direct-body successors only, then disintegrating and teleporting him means we destroy all he cared for; we prevented all he wanted to happen from happening; we have, so to say, killed him, as we prevented his successors from coming into existence. If instead he looked forward to a nice trip to Mars, where he is to be teleported, there's no reason to think we 'killed' anyone in any meaningful sense, as "he" is a happy space traveller finding 'himself' (well, his successors...) doing just the stuff he anticipated for them to be doing. There's nothing more objective to be said about our universe 'functioning' this way or that. Since any self is only ephemeral, and a person is a succession of instantaneous selves linked to one another by memory and forward-looking preferences, it really is those preferences themselves that matter for the decision, not some outside 'fact' about the universe.