So, I agree that mind uploads being tortured indefinitely is a very scary possibility. And it seems very plausible that some of it is going to happen in a world with mind uploads, especially since it would be impossible to detect from the outside unless someone checked all the computations that anyone is running.
On the other hand, we don’t know for sure what that world is going to be like. Maybe there will be some kind of AI in charge that does check everyone’s computations, or maybe all the hardware that gets sold is equipped with built-in suffering-detectors that prevent people from running torture simulations, or something. I’ll admit that both of these seem somewhat unlikely or even far-fetched, but then again, someone might come up with a really clever solution that I just haven’t thought of.
Your argument also seemed to me to have some flaws:
Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be ‘free’,
You can certainly argue that any event with non-zero probability will, over a sufficiently long lifetime, happen at some point. But if you are using that to argue that an upload will eventually be captured by someone sadistic, shouldn’t you also hold that it will eventually escape?
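To make that symmetry concrete, here is a toy sketch (my own framing and made-up numbers, not anything from your argument): if we model an upload as moving between a ‘free’ and a ‘captured’ state, with some per-period probability of capture and of escape, then the long-run fraction of time spent captured depends on both rates, rather than going to one just because capture eventually happens.

```python
def long_run_fraction_captured(p_capture: float, p_escape: float) -> float:
    """Stationary probability of the 'captured' state in a two-state Markov
    chain: a free upload is captured with probability p_capture per period,
    and a captured upload escapes with probability p_escape per period."""
    return p_capture / (p_capture + p_escape)

# Hypothetical per-century rates, purely for illustration:
print(long_run_fraction_captured(p_capture=1e-6, p_escape=1e-2))  # ~1e-4
```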
This argument also doesn’t seem to be unique to mind uploading. Suppose that we achieved biological immortality and never uploaded. You could also argue that, now that people can live until the heat-death of the universe (or at least until our sun goes out), their lifetimes are sufficiently long that at some point they are going to be kidnapped and tortured indefinitely by someone sadistic, and that therefore we should kill everyone before we get radical life extension.
But for biological people, this argument doesn’t feel anywhere near as compelling. In particular, it highlights that even though there might be a non-zero probability of any given person being kidnapped and tortured during their lifetime, that probability can be low enough that it’s still unlikely to happen even during a very long lifetime.
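As a back-of-the-envelope illustration (the numbers here are invented, just to show the shape of the argument): if the per-year probability of being kidnapped is p, then the probability of it happening at least once over T years is 1 - (1 - p)^T, so whether a “very long lifetime” makes capture near-certain depends entirely on how small p can be kept.

```python
def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one capture over `years`, assuming independent
    years with a constant per-year capture probability `p_per_year`."""
    return 1 - (1 - p_per_year) ** years

for p in (1e-6, 1e-9, 1e-12):
    print(p, cumulative_risk(p, years=1_000_000))
# 1e-6  -> ~0.63 over a million years
# 1e-9  -> ~0.001
# 1e-12 -> ~1e-6
```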
You could reasonably argue that for uploads it’s different, since it’s easier to make a copy of an upload undetected and so on, so the probability of being captured during one’s lifetime is larger. But note that there have been times in history when there actually was a reasonable chance of a biological human being captured and enslaved during their lifetime! Back during the era of tribal warfare, for example. But we’ve come a long way from those times, and in large parts of the world, society has developed in a way that almost eliminates that risk.
That, in turn, highlights the point that it’s too simple to just look at whether we are biological or uploads. It all depends on how exactly society is set up, and on how strong the defenses and protections are that society provides to the common person. Given that biological persons now have defenses against kidnapping and enslavement that are good enough that we don’t expect even a very long lifetime to lead to such a fate, shouldn’t we also assume that upload societies could develop similar defenses and make the risk similarly small?
I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: ‘effort should be made to expend resources on preventing suffering, maximizing the ratio of suffering avoided to cost expended’.
I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, among all future variants of you, the copies enjoying ‘heaven’ vastly outnumber the copies suffering ‘hell’, then uploading is on balance a good. Based on your paper’s citation of Omelas, I assert that you would weight ‘all future heaven copies’ in aggregate, and all future hell copies individually.
So if it is more probable that one or more hell copies of an upload will exist for as long as any heaven copy than that a single heaven copy will exist long enough to outlast all the hell copies, then that person’s future suffering will eventually exceed all the suffering previously experienced by biological humans. Under the EA philosophy described above, this creates a moral imperative to prevent that scenario, possibly with a blender.
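To make the weighting I have in mind explicit, here is a rough toy formalization (my own sketch, with made-up utility numbers, not anything from your paper): a classical-utilitarian sum across copies, versus a crude Omelas-style rule under which any single hell copy outweighs any number of heaven copies.

```python
def classical_sum(heaven_copies: int, hell_copies: int,
                  heaven_value: float = 1.0, hell_value: float = -100.0) -> float:
    """Simple additive aggregation across all copies (illustrative weights)."""
    return heaven_copies * heaven_value + hell_copies * hell_value

def omelas_style_verdict(heaven_copies: int, hell_copies: int) -> str:
    """Lexical rule: the existence of any hell copy at all makes the outcome unacceptable."""
    return "unacceptable" if hell_copies > 0 else "acceptable"

print(classical_sum(10**9, 1))         # hugely positive: uploading looks good on balance
print(omelas_style_verdict(10**9, 1))  # 'unacceptable': one hell copy vetoes it
```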
If uploading tech takes the form of a common connection and upload into an ‘overmind’, this can go away: if everyone is Borg, there’s no way for a non-Borg to put Borg into a hell copy; only Borg can do that to itself, which is, at least from an EA standpoint, probably an acceptable risk.
At the end of the day, I was hoping to adjust my understanding of EA axioms, not be talked down from chasing my friends around with a blender, but that isn’t how things went down.
SF is a tolerant place, and EAs are sincere about having consistent beliefs, but I don’t think my talk title “You helped someone avoid starvation with EA and a large grant. I prevented infinity genocides with a blender” would be accepted at the next convention.
I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: ‘effort should be made to expend resources on preventing suffering, maximizing the ratio of suffering avoided to cost expended’.
I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, among all future variants of you, the copies enjoying ‘heaven’ vastly outnumber the copies suffering ‘hell’, then uploading is on balance a good. Based on your paper’s citation of Omelas, I assert that you would weight ‘all future heaven copies’ in aggregate, and all future hell copies individually.
Well, our paper doesn’t really endorse any particular moral theory: we just mention a number of them, without saying anything about which one is true. As we note, if one is e.g. something like a classical utilitarian, then one would take the view by Hanson that you mention. The only way to really “refute” this is to say that you don’t agree with that view, but that’s an expression of a differing opinion rather than a refutation.
Similarly, some people accept the various suffering-focused intuitions that we mention, while others reject them. For example, Toby Ord rejects the Omelas argument, and gives a pretty strong argument for why in this essay (in the section on “Lexical Threshold NU”, which is his term for it). Personally I find the Omelas argument very intuitively compelling, but at the same time I have to admit that Ord also makes a compelling argument against it.
That said, it’s still possible and reasonable to end up accepting the Omelas argument anyway; as I said, I find it very compelling myself.
(As an aside, I tend to think that personal identity is not ontologically basic, so I don’t think that it matters whose copy ends up getting tortured; but that doesn’t really help with your dilemma.)
If you do end up with that result, my advice would be for you to think a few steps forward from the brain-shredding argument. Suppose that your argument is correct, and that nothing could justify some minds being subjected to torture. Does that imply that you should go around killing people? (The blender thing seems unnecessary; just plain ordinary death already destroys brains quite quickly.)
I really don’t think so. First, I’m pretty sure that your instincts tell you that killing people who don’t want to be killed, when that doesn’t save any other lives, is something you really don’t want to do. That’s something that’s at least worth treating as a strong ethical injunction, to only be overridden if there’s a really really really compelling reason to do so.
And second, even if you didn’t care about ethical injunctions, it looks pretty clear that going around killing people wouldn’t actually serve your goal much. You’d just get thrown in prison pretty quickly, and you’d also cause enormous backlash against the whole movement of suffering-focused ethics: anyone even talking about Omelas arguments would from that moment on get branded as “one of those crazy murderers”, and everyone would try to distance themselves from them. That might well increase the risk of lots of people suffering from torture-like conditions, since a movement that was trying to prevent them would be discredited.
If you take this argument seriously, then what you should be doing instead is trying to minimize s-risks in general: if any given person ending up tortured would be one of the worst things that could happen, then large numbers of people ending up tortured would be even worse. We listed a number of promising-seeming approaches for preventing s-risks in our paper: none of them involve blenders, and several of them, like supporting AI alignment research, are already perfectly reputable within EA circles. :)
You may also want to read Gains from Trade through Compromise, for reasons to try to compromise and find mutually-acceptable solutions with people who don’t buy the Omelas argument.
(Also, I have an older paper which suggests that a borg-like outcome may be relatively plausible, given that linking brains together into a borg could be relatively straightforward once we had uploading, or maybe even before, if an exocortex prosthesis that could be used for mind-melding were also the primary uploading method.)
An ethical injunction doesn’t work for me in this context: killing can be justified by plenty of motives far baser than ‘preventing infinity suffering’.
So, instead of a blender, I could sell hats with tiny brain-pulping shaped charges that will be remotely detonated when mind uploading is proven to be possible, or when the wearer dies of some other cause. As long as my marketing reaches some percentage of the people who might plausibly be interested, I’ve done my part.
I assess that this number is small, and that anyone seriously interested in such a device likely reads LessWrong and may be capable of making some arrangement for brain destruction themselves. So, by making this post and encouraging a potential upload to pulp themselves prior to upload, I have some > 0 probability of preventing infinity suffering.
I’m pretty effectively altruistic, dang. It’s not even February.
I prefer your borg scenarios to individualized uploading. I feel like it’s technically feasible using extant technology, but I’m not sure how much interest there really is in mechanical telepathy.
Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/ Awesome paper.
Thank you very much!