morality is about acausal contracts between counterfactual agents, and I do not want my future defended in this way. I don’t care what you think of my suffering; if you try to kill me to prevent my suffering, I’ll try to kill you back.
Presumably someone who accepted the argument would be happy with this deal.
Correct, this is very much an ‘I’ll pray for you’ line of reasoning. To use a religious example, it is better to martyr a true believer (who will escape hell) than to permit a heretic to live, as the heretic may turn others away from truth, and thus curse them to hell. So if you’re only partially sure that someone is a heretic, it is safer for the community to burn them. Anyone who accepts this line of argument would rather be burnt than allowed to fall into heresy.
Unfortunately, mind uploading gives us an actual, honest road to hell, so the argument cannot be dispelled with the statement that the risk of experiencing hell is unquantifiable or potentially zero. As I argue here, it is non-zero and potentially high, so using moral arguments that humans have used previously, it is possible to justify secure deletion in the context of ‘saving souls’. This does not require a blender; a ‘crisis uploading center’ may do the job just as well.
I DON’T CARE about your hell reasoning. I AM ALREADY FIGHTING for my future, don’t you dare decide you know so much better that you won’t accept the risk that I might have some measure that suffers. If you want good things for yourself, update your moral theory to get it out of my face. Again: if you try to kill me, I will try to kill you back, with as much extra pain as I think is necessary to make you-now fear the outcome.
Maybe some people would rather kill themselves than risk this outcome. That’s up to them. But don’t you force it on me, or goddamn else.
I do care about his reasoning, and disagree with it (most notably the “any torture → infinite torture” part, with no counterbalancing “any pleasure → ?” term in the calculation).
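As a minimal sketch of what I mean (the symbols are placeholders, not actual estimates), the calculation as I read it effectively amounts to

E[U] \approx p_{\text{torture}} \cdot U_{\text{torture}}, \qquad U_{\text{torture}} \to -\infty

whereas a balanced version would be

E[U] \approx p_{\text{torture}} \cdot U_{\text{torture}} + p_{\text{pleasure}} \cdot U_{\text{pleasure}},

and the sign of that sum is not settled by the existence of the torture term alone.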
But I’m with lahwran on the conclusion: destroying the last copy of someone is especially heinous, and nowhere near justified by your reasoning. I’ll join his precommitment to punish you if you commit crimes in pursuit of these wrong beliefs (note: plain old retroactive punishment, nothing acausal here).
Per your second paragraph, destroying the last copy is especially heinous. Does that imply that you view replacing the death penalty in US states with ‘death followed by uploading into an indefinite long-term simulation of confinement’ as less heinous? The status quo is to destroy the only copy of the mind in question.
Would it be justifiable to simulate prisoners who are expected to die before completing their sentences, so that they can live out their entire punitive terms and rejoin society as ems?
Thank you for the challenging responses!
Clearly it’s less harsh, and most convicts would prefer to experience incarceration for an indefinite time over a simple final death. This might change after a few hundred or million subjective years, but I don’t know—it probably depends on what activities the em has access to.
Whether it’s “heinous” is harder to say. Incarceration is a long way from torture, and I don’t know what the equilibrium effect on other criminals will be if it’s known that a formerly-capital offense now enables a massively extended lifespan, albeit in jail.
The suicide rate for incarcerated Americans is three times that of the general population; anecdotally, many death row inmates have expressed the desire to ‘hurry up with it’. Werner Herzog’s interviews with George Rivas and his co-conspirators are good examples of the sentiment. There is still debate about the effectiveness of the death penalty as a deterrent to crime.
I suspect that some of these people may prefer the uncertain possibility of confinement to hell by the divine over the certain continuation of their sentences at the hands of the state.
Furthermore, an altruist working to further the cause of secure deletion may be preventing literal centuries of human misery. Why is this any less important than feeding the hungry, who at most will suffer for a proportion of a single lifetime?
You’re still looking only at the negative side of the equation. My goals are not solely to reduce suffering, but also to increase joy. Incarceration is not joy-free, and not (I think) even net negative for most inmates. Likewise your fears of an em future. It’s not joy-free, and while it may actually be negative for some ems, the probability space for ems in general is positive.
I therefore support suicide and secure erasure for any individual who reasonably believes themselves to be a significant outlier in terms of negative potential future outcomes, but strongly oppose the imposition of it on those who haven’t so chosen.
I think I address most of your position in my response to HungryHobo here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/dqfi The ‘overall probability space’ point was also raised by RobinHanson, and I addressed that in a comment as well: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/dq6x
Thank you for the thoughtful responses!
An effective altruist could probably very efficiently go about increasing the joy in the probability space for all humans by offering wireheading to a random human as resources permit, but it doesn’t do much for people who are proximately experiencing suffering for other reasons. I instinctively think that this wireheading example is an incorrect application of effective altruism, but I do think it is analogous to the ‘overall space is good’ argument.
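As a toy illustration of that mismatch (numbers invented purely for the arithmetic): if 99% of ems get a future worth +10 and 1% get one worth -500, the aggregate is

0.99 \cdot (+10) + 0.01 \cdot (-500) = +4.9,

which is positive even though one em in a hundred is in a situation most of us would call hellish. A positive ‘overall space’ says nothing about the people at the bottom of the distribution.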
Do you support assisted suicide for individuals incarcerated in hell simulations, or with a high probability of being placed into one subsequent to upload? For example, if a government develops a practice of execution followed by torment-simulation, would you support delivering the gift of secure deletion to the condemned?
(I’m confused about who “his” refers to in the first paragraph—I predict 90% redman and 9% me)
edit: figured it out on third reread. the first paragraph responds to me, the second paragraph responds to redman.
I discover evidence that some sadistic jerk has stolen copies of both our minds, uploaded them to a torture simulation, and placed the torture simulation on a satellite orbiting the sun with no external communication inputs and a command to run for as long as possible at maximum speed. Rescue via spaceship is challenging and would involve tremendous resources that we do not have available to us.
I have a laser I can use to destroy the satellite, but a limited window in which to do it (would have to wait for orbits to realign to shoot again).
Would you be upset if I took the shot without consulting you?
of course not, you’re not destroying the primary copy of me. But that’s changing the case you’re making; you specifically said that killing now is preferable. I would not be ok with that.
Correct, that is different from the initial question, you made your position on that topic clear.
Would the copy on the satellite disagree about the primacy of the copy not in the torture sim? Would a copy have the right to disagree? Is it morally wrong for me to spin up a dozen copies of myself and force them to fight to the death for my amusement?
I’m guessing based on your responses that you would agree with the statement ‘copies of the same root individual are property of the copy with the oldest creation timestamp, and may be created, destroyed, and abused at the whims of that first copy, and no one else’.
If you copy yourself, and that copy commits a crime, are all copies held responsible, just the ‘root’ copy, or just the ‘leaf’ copy?
Thank you for the challenging responses!
no. copies are all equally me until they diverge greatly; I wouldn’t mind 10 copies existing for 10 minutes and then being deleted any more than I would mind forgetting an hour. the “primary copy” is maybe a bad way to put it; I only meant that colloquially, in the sense that looking at that world from the outside, the structure is obvious.
copy on the satellite would not disagree
yes would have the right, but as an FDT agent a copy would not disagree except for straight up noise in the implementation of me; I might make a mistake if I can’t propagate information between all parts of myself but that’s different
that sounds kind of disgusting to experience as the remaining agent, but I don’t see an obvious reason it should be a moral thing. if you’re the kind of agent that would do that, I might avoid you
copies are not property, they’re equal
that’s very complicated based on what the crime is and the intent of the punishment/retribution/restorative justice/etc
I read this as assuming that all copies deterministically demonstrate absolute allegiance to the collective self. I question that assertion, but have no clear way of proving the argument one way or another. If ‘re-merging’ is possible, mergeable copies intending to merge should probably be treated as a unitary entity rather than individuals for the sake of this discussion.
Ultimately, I read your position as stating that suicide is a human right, and that secure deletion of an individual is not acceptable to prevent ultimate harm to that individual, but is acceptable to prevent harm caused by that individual to others.
This is far from a settled issue, and has an analogue in the question ‘should you terminate an uncomplicated pregnancy with terminal birth defects?’ Anencephaly is a good example of this situation. The argument presented in the OP is consistent with a ‘yes’, and I read your line of argument as consistent with a clear ‘no’.
Thanks again for the food for thought.
I acausally cooperate with agents who I evaluate to be similar to me. That includes most humans, but it includes myself REALLY HARD, and doesn’t include an unborn baby. (because babies are just templates, and the thing that makes them like me is being in the world for a year ish.)
Is your position consistent with effective altruism?
The trap described in the OP is essentially a statement that approaching a particular problem involving uploaded consciousness with effective altruism as the decision-making framework led to a perverse (brains in blenders!) incentive. The options at this point are: a) the perverse act is not perverse; b) effective altruism does not lead to that perverse act; c) effective altruism is flawed, so try something else (like ‘ideological kin’ selection?).
You are unequivocal about your disinterest in being on the receiving end of this brand of altruism, have asserted that you cooperate acausally with agents similar to you (based on degree of similarity?), and have previously asserted that an agent who shares the sum total of your life experience, less the most recent year, can be cast aside and destroyed without thought or consequence. So... do I mark you down for option c?