I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: ‘resources should be spent on preventing suffering so as to maximize the ratio of suffering avoided to cost expended.’
I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, among all future variants of you, the copies enjoying ‘heaven’ vastly outnumber the copies suffering ‘hell’, then uploading is on balance a good. Based on your paper’s citation of Omelas, I assert that you would weigh all future ‘heaven’ copies in aggregate, and each future ‘hell’ copy individually.
Well, our paper doesn’t really endorse any particular moral theory: we just mention a number of them, without saying anything about which one is true. As we note, if one is e.g. something like a classical utilitarian, then one would take the view from Hanson that you mention. The only way to really “refute” this is to say that you don’t agree with that view, but that’s an expression of opinion rather than a refutation.
Similarly, some people accept the various suffering-focused intuitions that we mention, while others reject them. For example, Toby Ord rejects the Omelas argument, and gives a pretty strong argument for why, in this essay (under the part about “Lexical Threshold NU”, which is his term for it). Personally I find the Omelas argument very intuitively compelling, but at the same time I have to admit that Ord also makes a compelling argument against it.
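(To make the contrast concrete, here is one crude way to formalize it; the welfare terms and the threshold below are just my own illustration, not notation from our paper or from Ord’s essay, and a truly lexical ordering can’t really be captured by a single real-valued total, but the asymmetry comes through. Let $u_i$ be the welfare of copy $i$ out of $n$ copies, and let $T$ be a suffering threshold.)

\[
\text{Classical utilitarian: } V_{\mathrm{CU}} = \sum_{i=1}^{n} u_i, \qquad \text{uploading is good whenever } V_{\mathrm{CU}} > 0.
\]
\[
\text{Lexical threshold: } V_{\mathrm{LT}} =
\begin{cases}
-\infty, & \text{if } u_i < T \text{ for some copy } i,\\
\sum_{i=1}^{n} u_i, & \text{otherwise,}
\end{cases}
\]

so that on the lexical view, no number of ‘heaven’ copies can outweigh even one copy below the threshold.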
That said, it’s still possible and reasonable to end up accepting the Omelas argument anyway; as I said, I find it very compelling myself.
(As an aside, I tend to think that personal identity is not ontologically basic, so I don’t think that it matters whose copy ends up getting tortured; but that doesn’t really help with your dilemma.)
If you do end up with that result, my advice would be for you to think a few steps forward from the brain-shredding argument. Suppose that your argument is correct, and that nothing could justify some minds being subjected to torture. Does that imply that you should go around killing people? (The blender thing seems unnecessary; just plain ordinary death already destroys brains quite quickly.)
I really don’t think so. First, I’m pretty sure that your instincts tell you that killing people who don’t want to be killed, when that doesn’t save any other lives, is something you really don’t want to do. That’s something that’s at least worth treating as a strong ethical injunction, to only be overridden if there’s a really really really compelling reason to do so.
And second, even if you didn’t care about ethical injunctions, it looks pretty clear that going around killing people wouldn’t actually serve your goal much: you’d just get thrown in prison pretty quickly, and you’d also cause enormous backlash against the whole movement of suffering-focused ethics. Anyone even talking about Omelas arguments would from that moment on get branded as “one of those crazy murderers”, and everyone would try to distance themselves from them. That might just increase the risk of lots of people suffering from torture-like conditions, since a movement that was trying to prevent them would get discredited.
Instead, if you take this argument seriously, then what you should instead be doing is to try to minimize s-risks in general: if any given person ending up tortured would be one of the worst things that could happen, then large numbers of people ending up tortured would be even worse. We listed a number of promising-seeming approaches for preventing s-risks in our paper: none of them involve blenders, and several of them—like supporting AI alignment research—are already perfectly reputable within EA circles. :)
You may also want to read Gains from Trade through Compromise, for reasons to try to compromise and find mutually acceptable solutions with people who don’t buy the Omelas argument.
(Also, I have an older paper which suggests that a borg-like outcome may be relatively plausible, given that it looks like linking brains together into a borg could be relatively straightforward once we actually had uploading, or maybe even before, if an exocortex prosthesis that could be used for mind-melding was also the primary uploading method.)
An ethical injunction doesn’t work for me in this context; killing can be justified by plenty of motives far baser than ‘preventing infinite suffering’.
So, instead of a blender, I could sell hats with tiny brain-pulping shaped charges that would be remotely detonated when mind uploading is proven to be possible, or when the wearer dies of some other cause. As long as my marketing reaches some percentage of the people who might plausibly be interested, I’ve done my part.
I assess that the number of such people is small, and that anyone seriously interested in such a device likely reads LessWrong and may be capable of making some arrangement for brain destruction themselves. So, by making this post and encouraging any potential upload to pulp themselves prior to upload, I have some > 0 probability of preventing infinite suffering.
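To spell out the arithmetic, with $p$ and $D$ as rough stand-ins of my own choosing: if $p > 0$ is the chance that this post leads at least one would-be upload to destroy their brain first, and the disvalue $D$ of that upload’s copies being tortured is treated as unbounded, then the expected disvalue averted is

\[
p \cdot D \longrightarrow \infty \quad \text{as } D \to \infty, \text{ for any fixed } p > 0,
\]

so on that assumption any strictly positive $p$ is enough for the post to look worthwhile.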
I’m pretty effectively altruistic, dang. It’s not even February.
I prefer your borg scenarios to individualized uploading. I feel like it’s technically feasible using extant technology, but I’m not sure how much interest there really is in mechanical telepathy.