of course not, you’re not destroying the primary copy of me. But that’s changing the case you’re making; you specifically said that killing now is preferable. I would not be ok with that.
Correct, that is different from the initial question, you made your position on that topic clear.
Would the copy on the satellite disagree about the primacy of the copy not in the torture sim? Would a copy have the right to disagree? Is it morally wrong for me to spin up a dozen copies of myself and force them to fight to the death for my amusement?
I’m guessing based on your responses that you would agree with the statement ‘copies of the same root individual are the property of the copy with the earliest creation timestamp, and may be created, destroyed, and abused at the whims of that first copy, and no one else’.
If you copy yourself, and that copy commits a crime, are all copies held responsible, just the ‘root’ copy, or just the ‘leaf’ copy?
Thank you for the challenging responses!
no. copies are all equally me until they diverge greatly; I wouldn’t mind 10 copies existing for 10 minutes and then being deleted any more than I would mind forgetting an hour. the “primary copy” is maybe a bad way to put it; I only meant that colloquially, in the sense that looking at that world from the outside, the structure is obvious.
copy on the satellite would not disagree
yes, a copy would have the right, but as an FDT agent a copy would not disagree except for straight-up noise in the implementation of me (a toy sketch of this is below, after these replies); I might make a mistake if I can’t propagate information between all parts of myself, but that’s different
that sounds kind of disgusting to experience as the remaining agent, but I don’t see an obvious reason it should be a moral issue; if you’re the kind of agent that would do that, I might avoid you
copies are not property, they’re equal
that’s very complicated, depending on what the crime is and on the intent of the punishment/retribution/restorative justice/etc.
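Toy sketch of the FDT point above, assuming copies literally share one deterministic decision procedure (the function, questions, and numbers here are illustrative assumptions, not anything from this thread): identical inputs give identical outputs, and divergence only shows up as injected implementation noise.

```python
# Illustrative only: copies sharing one decision procedure agree on the same input;
# divergence requires injected "implementation noise", not genuine disagreement.
import random

def decide(question: str, noise: float = 0.0) -> str:
    """Decision procedure shared by every copy; `noise` models implementation error."""
    if random.random() < noise:
        return "divergent answer (implementation glitch)"
    return f"shared answer to: {question}"

earth_copy = decide("is the non-simulated copy primary?")
satellite_copy = decide("is the non-simulated copy primary?")
assert earth_copy == satellite_copy  # identical procedure, identical output

glitchy_copy = decide("is the non-simulated copy primary?", noise=0.05)
print(earth_copy, satellite_copy, glitchy_copy, sep="\n")
```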
I read this as assuming that all copies deterministically demonstrate absolute allegiance to the collective self. I question that assertion, but have no clear way of proving the argument one way or another. If ‘re-merging’ is possible, mergeable copies intending to merge should probably be treated as a unitary entity rather than individuals for the sake of this discussion.
Ultimately, I read your position as stating that suicide is a human right; that secure deletion of an individual is not acceptable to prevent ultimate harm to that individual; but that it is acceptable to prevent harm caused by that individual to others.
This is far from a settled issue, and it has an analogue in the question ‘should you terminate an uncomplicated pregnancy with terminal birth defects?’ Anencephaly is a good example of this situation. The argument presented in the OP is consistent with a ‘yes’, and I read your line of argument as consistent with a clear ‘no’.
Thanks again for the food for thought.
I acausally cooperate with agents who I evaluate to be similar to me. That includes most humans, but it includes myself REALLY HARD, and doesn’t include an unborn baby. (because babies are just templates, and the thing that makes them like me is being in the world for a year ish.)
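A minimal sketch of that cooperation rule, with made-up similarity scores and cutoff (the names and numbers are illustrative assumptions, not figures from this thread): an exact copy scores maximally, most humans score high, and a not-yet-born baby falls below the threshold.

```python
# Illustrative only: acausal cooperation keyed to how similar another agent's
# decision procedure is judged to be to mine.
SIMILARITY = {
    "exact copy of me": 1.0,
    "typical human": 0.8,
    "unborn baby": 0.1,   # a "template"; needs roughly a year in the world to become "like me"
}

def cooperate_acausally(agent: str, threshold: float = 0.5) -> bool:
    """Cooperate when the other agent is similar enough to be running my policy."""
    return SIMILARITY.get(agent, 0.0) >= threshold

for agent in SIMILARITY:
    print(agent, "->", cooperate_acausally(agent))
```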
Is your position consistent with effective altruism?
The trap expressed in the OP is essentially a statement that using the framework of effective altruism to drive decision-making on a particular problem involving uploaded consciousness led to a perverse (brains in blenders!) incentive. The options at this point are: a) the perverse act is not perverse; b) effective altruism does not lead to that perverse act; or c) effective altruism is flawed, so try something else (like ‘ideological kin’ selection?).
You are unequivocal about your disinterest in being on the receiving end of this brand of altruism, have also asserted that you cooperate acausally with agents similar to you (based on degree of similarity?), and previously asserted that an agent who shares the sum total of your life experience, less the most recent year, can be cast aside and destroyed without thought or consequence. So...do I mark you down for option c?