Easy—if you believe in MWI, but your utility function assigns value to the amount of measure you exist in, then you don’t believe in quantum suicide. This is the most rational position, IMO.
I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.
(*) If God builds a perfect copy of the whole universe, this will not increase my utility in the slightest.
> I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.
This is a potentially coherent value system, but I note that it contains a distinct hint of arbitrariness. You could, technically, like life, dislike death, like happy relatives, and care about everything in the branches in which you live, but only care about everything except yourself in branches in which you die. But that seems likely to be just a patch job on the intuitions.
Are you sure about this? Isn’t my preference simply a result of a value system that values the happiness of living beings in every branch? (Possibly weighted with how similar / emotionally close they are to me, but that’s not really necessary.) If I kill myself in every branch except in those where I win the lottery, then there will be many branches with (N-1) sad relatives, and a few branches with 1 happy me and (N-1) neutral relatives. So I don’t do that. Is there really anything arbitrary about this?
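The branch-counting argument above can be made concrete with a toy calculation: weight each branch by its measure and sum the utility of the living beings in it. All the numbers here (N, the win probability, the per-person utilities) are illustrative assumptions, not anything stated in the thread.

```python
# Toy model of the measure-weighted value system described above.
# Assumptions (not from the thread): living happy = +1, neutral = 0,
# grieving = -1 per person per branch; the dead contribute nothing.

N = 10          # people: me plus N-1 relatives
p_win = 1e-7    # measure of the branch where I win the lottery

def total_utility(commit_quantum_suicide: bool) -> float:
    if not commit_quantum_suicide:
        # Every branch: me alive and neutral, N-1 neutral relatives.
        return 0.0 * N
    # Win branches: 1 happy me + (N-1) neutral relatives.
    win = p_win * (1.0 + (N - 1) * 0.0)
    # Lose branches: I am dead (no utility term), N-1 grieving relatives.
    lose = (1.0 - p_win) * ((N - 1) * -1.0)
    return win + lose

print(total_utility(False))  # 0.0
print(total_utility(True))   # ≈ -9.0, dominated by the grieving relatives
```

Under these toy numbers the quantum lottery loses badly even though no term in the sum refers to the player's own survival, which is exactly the structure of the argument.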
The part that surprises me is that you do care about all the branches (relatives, etc) yet in those branches you don’t care if you die. You’ll note that I assumed you preferred death to life? In those worlds you seem to have a preference for happy vs sad relatives but have somehow (and here is where I would say ‘arbitrarily’) decided you don’t care whether you live or die.
Say, for example, that you have a moderate aversion to having one of your little toes broken. You set up a quantum lottery where, in the ‘lose’ branches, you have your little toe broken instead of being killed. Does that seem better or worse to you? I mean, there is suffering of someone near and dear to you, so I assume that seems bad to you. Yet it seems to me that if you care about the branch at all, then you would prefer ‘sore toe’ to ‘death’ when you lose!
You are right that my proposed value system does not incorporate survival instinct, and this makes it sound weird, as survival instinct is an important part of every actual human value system, including mine. Your broken toe example shows this nicely.
So why did I get rid of survival instinct? Because you argued that what I wrote “contains a distinct hint of arbitrariness”. I think it doesn’t. I care for everyone’s preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.
When we explicitly add survival instinct, the ingredient you rightfully miss, then yes, the result will indeed become somewhat messy. But the reason for this mess is the added ingredient itself, not the clean part, nor the interaction between the two. I just don’t think survival instinct can be turned into a coherent, formalized value. So the bug is not in my proposed idealized value system; the bug is in my actual, messy human value system.
This approach, by the way, affects my views on cryonics, too.
> I think it doesn’t. I care for everyone’s preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.
This is a handy way to rationalise against quantum suicide. Until you consider quantum suicide on a global level. People who have been vaporised along with their entire planet have no preferences… Would you bite that bullet and commit planetary quantum suicide?
As I already wrote, the above is not my actual value system, but rather a streamlined version of it. My actual value system does incorporate survival instinct. You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results. I don’t really find the results nonsensical. In this sense, I would bite the bullet.
Actually, I wouldn’t, but for a reason not directly related to our current discussion. I don’t have too much faith in the literal truth of the MWI. I am quite confused about quantum mechanics, but I have a gut feeling that single-world is not totally out of the question, and not-every-world is quite likely. This is because, as a compatibilist, I am willing to bite some bullets about free will that most others will not. I believe that the full space-time continuum is very finely tuned in every direction (*), so it is totally plausible to me that some of those many worlds are simply locked away from us by fine-tuning. There are already some crankish attempts in this direction under the name of superdeterminism. I don’t think these have been successful so far, but I surely would not bet my whole planet against the possibility.
(*) This sentence might sound fuzzy or even pseudo-science. All I have is an analogy to make it more concrete: Our world is not a Gold Universe, but I am talking about the sort of fine-tuning found in a Gold Universe.
> You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results.
Not nonsensical, no. It would be not liking the idea of planetary suicide that would be nonsensical, given your other expressed preferences. I can even see a perverse logic behind your way of carving which parts of the universal wavefunction you care about, based on the kind of understanding you express of QM.
Just… if you are ever exposed to accessible quantum randomness, then please stay away from anyone I care about. These values are, by my way of looking at things, exactly as insane as those of parents who kill their children and spouse before offing themselves as well. I’m not saying you are evil or anything. It’s not like you are really going to act on any of this, so you fall under Mostly Harmless. But the step from mostly killing yourself to evaluating it as preferable for other people to be dead too takes things from ‘none of my business’ to ‘threat to human life’.
Strange as it may seem, we are talking about the real world here!
wedrifid, please don’t use me as a straw-man. I already told you that my actual value system does contain survival instinct, and I already told you why I omitted it here anyway. Here it is, spelled out even more clearly:
You wanted a clean value system that decides against quantum suicide. (I use ‘clean’ as a synonym for nonarbitrary, low-complexity, aesthetically pleasing.) I proposed a clean value system that is already strong enough to decide against many forms of quantum suicide. You correctly point out that it is not immune against every form.
Incorporating any version of survival instinct makes the value system immune to quantum suicide by definition. I claimed that any value system incorporating survival instincts is necessarily not clean, at least if it has to consistently deal with issues of quantum lottery, mind uploads and such. I don’t have a problem with that, and I choose survival over cleanness. And don’t worry for my children and spouse. I will spell it out very explicitly, just in case: I don’t value the wishes of dead people, because they don’t have any. I value the wishes of living people, most importantly their wish to stay alive.
You completely ignored the physics angle to concentrate on the ethics angle. I think the former is more interesting, and frankly, I am more interested in your clever insights there. I already mentioned that I don’t have too much faith in MWI. Let me add some more detail to this. I believe that if you want to find out the real reason why quantum suicide is a bad idea, you will have to look at physics rather than values. My common sense tells me that if I put a (quantum or other) gun in my mouth right now and pull the trigger many times, then the next thing I will feel is not that I am very lucky. Rather, I will not feel anything at all, because I will be dead. I am quite sure about this instinct, and let us assume for a minute that it is indeed correct. This can mean two things. One possible conclusion is that MWI must be wrong. Another possible conclusion is that MWI is right but we make some error when we try to apply MWI to this situation. I give high probability to both of these possibilities, and I am very interested in any new insights.
Let me now summarize my position on quantum suicide: I endorse it
IF MWI is literally correct. (I don’t believe so.)
IF the interface between MWI and consciousness works as our naive interpretation suggests. (I don’t believe so.)
IF the quantum suicide is planetary, more exactly, if it affects a system that is value-wise isolated from the rest of the universe. (Very hard or impossible to achieve.)
IF survival instinct as a preference of others is taken into account, more concretely, if your mental image of me, the Mad Scientist with the Doomsday Machine, gets the consent of the whole population of the planet. (Very hard or impossible to achieve.)
End of conversation. I did not read beyond that sentence.
I am sorry to hear this, and I don’t really understand it.