If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path that I inhabit; “I” (as defined here) inhabits all paths.
True, good point. That seems to rub salt in the wound, though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people), then I’m not responsible for Bi at all; but then I’m also no longer decoherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai).
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence in the claim “Ai and Bi are different people,” that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and that there’s no fact of the matter.
say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing?
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)