So you knowingly execute an inference process that has no correlation with the truth, because the possible-universes where it’s wrong aren’t morally significant, so you don’t care what consequences the incorrect conclusions have here/there? (Is there a demonstrative pronoun that doesn’t specify whether the object is near or far?) (“You” refers to whatever replies to this comment, not any epiphenomenon that may or may not be associated with it.) In the absence of a causal relation, where did you get the idea that your morality has anything to do with STCs?
If you have a single STC, why do you hypothesize that the bridging law associates it with the copy of you that’s going to die rather than one of the ones that survives? Or would you decline in the thought-experiment, not due to certainty of death, but due to uncertainty?
What about consciousness that isn’t arranged in threads? For example, a branching structure, like the causal graph of the physics involved. If you frequently branch and rarely or never merge, then any given instance of you will still remember only a single history, so introspection (even epiphenomenal introspection, if you think there is such a thing, let alone introspection by a biological instantiation) can’t distinguish this from the STC theory.
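To make that last point concrete, here is a toy model of my own (the names Moment, branch, and remembered_history are made up for illustration, not anything the STC view supplies): in a pure-branching tree, each instance’s recalled past is just the path from the root to itself, which is always linear, so memory alone cannot tell branching from a single thread.

```python
# Toy model of branching experience (my construction, not the thread's):
# experience-moments are nodes in a tree, and branching = copying.
# Each node's "memory" is the path from the root to itself.
from dataclasses import dataclass, field

@dataclass
class Moment:
    label: str
    parent: "Moment | None" = None
    children: list = field(default_factory=list)

    def branch(self, label: str) -> "Moment":
        child = Moment(label, parent=self)
        self.children.append(child)
        return child

    def remembered_history(self) -> list[str]:
        # Walk back to the root: however much the tree has branched,
        # every instance recalls exactly one linear history.
        node, history = self, []
        while node is not None:
            history.append(node.label)
            node = node.parent
        return list(reversed(history))

root = Moment("wake up")
a = root.branch("copied: instance A")
b = root.branch("copied: instance B")
a2 = a.branch("A drinks coffee")

# Both leaves report a thread-like past, despite the branching:
print(a2.remembered_history())  # ['wake up', 'copied: instance A', 'A drinks coffee']
print(b.remembered_history())   # ['wake up', 'copied: instance B']
```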
So you knowingly execute an inference process that has no correlation with the truth, because the possible-universes where it’s wrong aren’t morally significant, so you don’t care what consequences the incorrect conclusions have here/there?
I don’t see what morals have to do with it. I didn’t talk about morals.
If you have a single STC, why do you hypothesize that the bridging law associates it with the copy of you that’s going to die rather than one of the ones that survives? Or would you decline in the thought-experiment, not due to certainty of death, but due to uncertainty?
I am indeed uncertain, and so won’t risk death. However, I do hypothesize that it’s much more likely that, if a copy is created, my STC will remain with the original, insofar as an original is identifiable. And when no original is identifiable, I fear that my STC will die entirely and not possess any of the clones.
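As a rough sketch of why uncertainty alone is enough to make me decline (every number and name below is an illustrative placeholder I’m inventing, not anything established about STCs or bridging laws): even a modest credence that the STC stays with the original body makes destructive copying a terrible bet once death is weighted heavily.

```python
# Illustrative decision sketch (my invented numbers and names; nothing
# here is established fact about STCs or bridging laws).

def expected_value_of_copying(p_stc_stays_with_original: float,
                              original_destroyed: bool,
                              value_survive: float = 1.0,
                              value_die: float = -100.0) -> float:
    if not original_destroyed:
        # On my view the STC is safe as long as the original persists.
        return value_survive
    # If the original is destroyed, subjective survival happens only in
    # the worlds where the STC passes to (or branches into) a copy.
    p_survive = 1.0 - p_stc_stays_with_original
    return p_survive * value_survive + (1.0 - p_survive) * value_die

# Even a modest credence that the STC tracks the original body makes
# "destructive copying" a bad bet at these stakes:
print(expected_value_of_copying(0.9, original_destroyed=True))  # ≈ -89.9
print(expected_value_of_copying(0.1, original_destroyed=True))  # ≈ -9.1
```

On that picture, declining isn’t a claim of certainty about the bridging law, just ordinary expected-value caution.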
What about consciousness that isn’t arranged in threads? For example, a branching structure, like the causal graph of the physics involved.
This is perfectly possible; in fact it’s very likely. If we create a clone, and if it necessarily has an STC of its own that is (in terms of memories and personality) a clone of the original’s STC, then it makes at least as much sense to say that the STC branched as to say that we somehow “created” a new STC to specification.
In this case I would anticipate becoming one of the branches (clones). However, I do not yet know the probability weight of becoming each one, and AFAICS this can only be established empirically; to test it at all I would have to risk the aforementioned chance of death.
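To sketch what such an empirical test might even look like (purely hypothetically, since actually running it carries exactly the risk I just described; every name and weight below is my invention):

```python
# Hypothetical estimation procedure (all names and weights invented):
# if repeated branching trials were somehow survivable and recordable,
# the weights could be estimated from observed frequencies.
import random
from collections import Counter

def run_trials(true_weights: list[float], n_trials: int, seed: int = 0) -> Counter:
    rng = random.Random(seed)
    branches = range(len(true_weights))
    # Each trial: the experiencer "becomes" one branch, per the hidden weights.
    return Counter(rng.choices(branches, weights=true_weights)[0]
                   for _ in range(n_trials))

# Suppose (unknown to the experimenter) the original is strongly favored:
observed = run_trials(true_weights=[0.8, 0.1, 0.1], n_trials=1000)
estimates = {branch: count / 1000 for branch, count in sorted(observed.items())}
print(estimates)  # frequencies approximating the hidden weights, e.g. {0: ~0.8, ...}
```

The catch, of course, is that the experimenter only gets data from whichever branch the STC actually takes, and if a destroyed original is among the branches, some trials end the experiment permanently.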
In this scenario I wouldn’t want people to torture clones of me, in case I became them. But as long as a clearly identifiable original body remains intact, I very strongly expect my STC to remain associated with it and not pass to a random clone, so provided the original isn’t threatened I could mistreat my clones. And I certainly would never accept destruction of the original body, no matter how many clones were created in exchange.