I thought quantum suicide was not controversial, since MWI is obviously correct?
I agree MWI is solid, I’m not suggesting that be flagged. But it does not in any way imply quantum suicide; the latter is somewhere between fringe and crackpot, and a proven memetic hazard with at least one recorded death to its credit.
And the AI section? Well, the list is supposed to reflect the opinions held in the LW community, especially by EY and the SIAI. I’m trying my best to do so, and by that standard, how controversial is AI going FOOM etc.?
Well, AI going FOOM etc. is again somewhere in the area between fringe and crackpot, as judged by people who actually know about the subject. If the list were specifically supposed to represent the opinions of the SIAI, then it would belong on the SIAI website, not on LW.
Eliezer’s Permutation City crossover story? It has been on the list for some time, if you are talking about ‘The Finale of the Ultimate Meta Mega Crossover’.
Not even the most optimistic interpretations of quantum immortality/quantum suicide think it can bring other people back from the dead. Does it count as a memetic hazard if only a very mistaken version of it is hazardous?
Why not? If you kill yourself in any branch that lacks the structure that is your father, then the only copies of you that will be alive are those that don’t care, or those that live in the unlikely universes where your father is alive (even if that means life-extension breakthroughs, or that he applied for cryonics).
ETA: I guess you don’t need life extension. After all, it is physically possible to live to 1000, if unlikely. Have I misunderstood something here?
Why not? If you kill yourself in any branch that lacks the structure that is your father, then the only copies of you that will be alive are those that don’t care, or those that live in the unlikely universes where your father is alive (even if that means life-extension breakthroughs, or that he applied for cryonics).
No, that’s not what would happen. Rather, being faithful to your commitment, you would go on a practically infinite suicide spree (*) searching for your father. A long and melancholic story with a surprise happy ending.
(*) I googled it and was sad to see that the phrase “suicide spree” is already taken for a different concept.
I’m not sure where you think we disagree. Personally, if I were going to take MWI and quantum suicide absolutely seriously, I’d still make the best of every branch. All you do by quantum suicide is cancel out the copies you deem to be having unworthy experiences. But why would I do that, if it does not change anything about the positive branches?
My reply wasn’t meant to be taken seriously, and I don’t take the idea of quantum suicide seriously. But to answer your question, here is the disagreement, or really, me nitpicking for the sake of comedic effect:
In your scenario, most of the copies will NOT be in universes with your father. Most of them will be in the process of committing suicide. This is because—at least the way I interpreted your wording—your scenario differs from the classic quantum lottery scenario in that here it is you who evaluates whether you are in the right universe or not.
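To put a toy number on “most”: here is a minimal sketch, where the fraction of branches containing a living father is a made-up illustrative figure, not anything derived from the scenario.

```python
# Toy sketch only: p is a made-up figure for the fraction of branches
# in which the father happens to be alive at the moment of the check.
p_father_alive = 1e-9

# Immediately after the check, your copies split into two groups:
measure_found_father = p_father_alive        # done searching, happy
measure_mid_suicide = 1 - p_father_alive     # carrying out the commitment

# The mid-suicide copies outnumber the lucky ones by about a billion to one.
print(measure_mid_suicide / measure_found_father)
```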
Yes, we agree. So how seriously do you take MWI? I’m not sure I understand how someone could take MWI seriously but not quantum suicide. I haven’t read the sequence on it yet, though.
Easy—if you believe in MWI, but your utility function assigns value to the amount of measure you exist in, then you don’t believe in quantum suicide. This is the most rational position, IMO.
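A minimal sketch of that position, with made-up numbers (P_WIN and the utilities are illustrative stand-ins, not anyone’s actual utility function):

```python
# Toy comparison of a quantum-suicide lottery vs. doing nothing, under a
# utility function that assigns value to the measure you exist in.
# All constants are made-up illustrative numbers.
P_WIN = 1e-7        # measure of branches where the lottery pays off
U_WIN = 100.0       # alive and rich
U_NORMAL = 1.0      # alive and not rich
U_DEAD = 0.0        # dead; anyone who values their own measure sets this
                    # below U_NORMAL, which is all the argument needs

def expected_utility(u_lose: float) -> float:
    """Utility weighted by the measure of each kind of branch."""
    return P_WIN * U_WIN + (1 - P_WIN) * u_lose

print("do nothing:     ", expected_utility(U_NORMAL))   # ~1.00
print("quantum suicide:", expected_utility(U_DEAD))     # ~0.00001
# As long as U_DEAD < U_NORMAL, the suicide branch-pruning is a net loss.
```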
I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.
(*) If God builds a perfect copy of the whole universe, this will not increase my utility in the slightest.
I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.
This is a potentially coherent value system, but I note that it contains a distinct hint of arbitrariness. You could, technically, like life, dislike death, like happy relatives and care about everything in the branches in which you live, but only care about everything except yourself in branches in which you die. But that seems likely to be just a patch job on the intuitions.
Are you sure about this? Isn’t my preference simply a result of a value system that values the happiness of living beings in every branch? (Possibly weighted by how similar / emotionally close they are to me, but that’s not really necessary.) If I kill myself in every branch except those where I win the lottery, then there will be many branches with (N-1) sad relatives, and a few branches with 1 happy me and (N-1) neutral relatives. So I don’t do that. Is there really anything arbitrary about this?
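The same bookkeeping with toy numbers (N and the utilities are made up for illustration; only their ordering matters for the conclusion):

```python
# Toy version of the argument above. N beings: me plus (N-1) relatives.
# All utilities are made-up; the conclusion only needs their ordering.
N = 6
P_WIN = 1e-7
U_SAD, U_NEUTRAL = -1.0, 0.0   # a grieving vs. an indifferent relative
U_ME_HAPPY, U_ME_NORMAL = 10.0, 1.0

# Quantum suicide: almost all branches hold (N-1) grieving relatives and
# no me; a sliver of branches hold a happy me and (N-1) neutral relatives.
suicide = (1 - P_WIN) * (N - 1) * U_SAD \
        + P_WIN * (U_ME_HAPPY + (N - 1) * U_NEUTRAL)

# No suicide: every branch holds an ordinary me and (N-1) neutral relatives.
no_suicide = U_ME_NORMAL + (N - 1) * U_NEUTRAL

print(suicide, no_suicide)   # ~ -5.0 vs 1.0: so I don't do that
```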
The part that surprises me is that you do care about all the branches (relatives, etc.), yet in those branches you don’t care if you die. You’ll note that I assumed you preferred life to death? In those worlds you seem to have a preference for happy vs sad relatives, but have somehow (and here is where I would say ‘arbitrarily’) decided you don’t care whether you live or die.
Say, for example, that you have a moderate aversion to having one of your little toes broken. You set up a quantum lottery where the ‘lose’ branches have your little toe broken instead of you being killed. Does that seem better or worse to you? I mean, there is suffering of someone near and dear to you, so I assume that seems bad to you. Yet it seems to me that if you care about the branch at all, then you would prefer ‘sore toe’ to ‘death’ when you lose!
You are right that my proposed value system does not incorporate survival instinct, and this makes it sound weird, as survival instinct is an important part of every actual human value system, including mine. Your broken toe example shows this nicely.
So why did I get rid of survival instinct? Because you argued that what I wrote “contains a distinct hint of arbitrariness”. I think it doesn’t. I care about everyone’s preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.
When we explicitly add survival instinct, the ingredient you rightly note is missing, then yes, the result will indeed become somewhat messy. But the reason for this mess is the added ingredient itself, not the other, clean part, nor the interrelation between the two. I just don’t think survival instinct can be turned into a coherent, formalized value. So the bug is not in my proposed idealized value system; the bug is in my actual, messy human value system.
This approach, by the way, affects my views on cryonics, too.
I think it doesn’t. I care about everyone’s preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.
This is a handy way to rationalise against quantum suicide. Until you consider quantum suicide on a global level. People who have been vaporised along with their entire planet have no preferences… Would you bite that bullet and quantum planetary-suicide?
As I already wrote, the above is not my actual value system, but rather a streamlined version of it. My actual value system does incorporate survival instinct. You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results. I don’t really find the results nonsensical. In this sense, I would bite the bullet.
Actually, I wouldn’t, but for a reason not directly related to our current discussion. I don’t have too much faith in the literal truth of the MWI. I am quite confused about quantum mechanics, but I have a gut feeling that single-world is not totally out of the question, and not-every-world is quite likely. This is because as a compatibilist, I am willing to bite some bullets about free will most others will not bite. I believe that the full space-time continuum is very finely tuned in every direction (*), so it is totally plausible to me that some of those many worlds are simply locked from us by fine-tuning. There are already some crankish attempts in this direction under the name superdeterminism. I don’t think these are successful so far, but I surely would not bet my whole planet against the possibility.
(*) This sentence might sound fuzzy or even pseudo-science. All I have is an analogy to make it more concrete: Our world is not a Gold Universe, but I am talking about the sort of fine-tuning found in a Gold Universe.
You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results.
Not nonsensical, no. It would be not liking the idea of planetary suicide that would be nonsensical, given your other expressed preferences. I can even see a perverse logic behind your way of carving which parts of the universal wavefunction you care about, based on the kind of understanding you express of QM.
Just… if you are ever exposed to accessible quantum randomness, then please stay away from anyone I care about. These values are, by my way of looking at things, exactly as insane as those of parents who kill their children and spouse before offing themselves as well. I’m not saying you are evil or anything. It’s not like you are really going to act on any of this, so you fall under Mostly Harmless. But the step from mostly killing yourself to evaluating it as preferable for other people to be dead too takes things from ‘none of my business’ to ‘threat to human life’.
Strange as it may seem, we are talking about the real world here!
wedrifid, please don’t use me as a straw man. I already told you that my actual value system does contain survival instinct, and I already told you why I omitted it here anyway. Here it is, spelled out even more clearly:
You wanted a clean value system that decides against quantum suicide. (I use ‘clean’ as a synonym for non-arbitrary, low-complexity, aesthetically pleasing.) I proposed a clean value system that is already strong enough to decide against many forms of quantum suicide. You correctly point out that it is not immune against every form.
Incorporating any version of survival instinct makes the value system immune to quantum suicide by definition. I claimed that any value system incorporating survival instinct is necessarily not clean, at least if it has to deal consistently with issues of quantum lottery, mind uploads and such. I don’t have a problem with that, and I choose survival over cleanness. And don’t worry about my children and spouse. I will spell it out very explicitly, just in case: I don’t value the wishes of dead people, because they don’t have any. I value the wishes of living people, most importantly their wish to stay alive.
You completely ignored the physics angle to concentrate on the ethics angle. I think the former is more interesting, and frankly, I am more interested in your clever insights there. I already mentioned that I don’t have too much faith in MWI. Let me add some more detail to this. I believe that if you want to find out the real reason why quantum suicide is a bad idea, you will have to look at physics rather than values.

My common sense tells me that if I put a (quantum or other) gun in my mouth right now and pull the trigger many times, then the next thing I will feel is not that I am very lucky. Rather, I will not feel anything at all, because I will be dead. I am quite sure about this instinct, and let us assume for a minute that it is indeed correct. This can mean two things. One possible conclusion is that MWI must be wrong. Another possible conclusion is that MWI is right, but we make some error when we try to apply it to this situation. I give high probability to both of these possibilities, and I am very interested in any new insights.
Let me now summarize my position on quantum suicide: I endorse it
IF MWI is literally correct. (I don’t believe so.)
IF the interface between MWI and consciousness works as our naive interpretation suggests. (I don’t believe so.)
IF the quantum suicide is planetary, more exactly, if it affects a system that is value-wise isolated from the rest of the universe. (Very hard or impossible to achieve.)
IF survival instinct as a preference of others is taken into account, more concretely, if your mental image of me, the Mad Scientist with the Doomsday Machine, gets the consent of the whole population of the planet. (Very hard or impossible to achieve.)
I get the impression that some people consider “take quantum suicide seriously” equivalent to “think doing it is a good idea”. That makes not taking it seriously a good option.
The way I understand quantum suicide, it’s supposed to force your future survival into the relatively scarce branches where an event goes the way you want it by making it dependent on that event. Killing yourself after living in the branch where that event did not go the way you wanted at some time in the past is just ordinary suicide; although there’s certainly room for a new category along the lines of “counterfactual quantum suicide,” or something.
edit: Although, to the extent that counterfactual quantum suicide would only occur to someone who’d heard of traditional, orthodox quantum suicide, the latter would be a memetic hazard.
What difference does it make whether you kill yourself before event X, event X kills you, or you commit suicide after event X? In all cases, the branches in which event X does not take place are selected for. That is, if agent Y always commits suicide when event X occurs, or is killed by event X, then the only branches to include Y are those in which X does not happen.
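A toy sketch of the selection effect (the measure of X and the delays are made-up numbers): the surviving measure is identical in every variant; only the time spent alive in the losing branches differs.

```python
# Sketch of the point above: in every variant Y is dead in all X-branches,
# so the measure of surviving copies is the same. What differs is how long
# Y stays alive (and suffers) inside an X-branch first. Numbers are made up.
P_X = 0.9                     # measure of branches in which event X occurs
YEARS_ALIVE_AFTER_X = {
    "killed by X itself":            0.0,
    "device fires the instant of X": 0.0,
    "suicide a year after X":        1.0,
}

for variant, years in YEARS_ALIVE_AFTER_X.items():
    surviving_measure = 1 - P_X          # identical for all three variants
    lived_in_x_branches = P_X * years    # this is the only real difference
    print(f"{variant:32s} surviving measure {surviving_measure:.2f}, "
          f"{lived_in_x_branches:.2f} branch-years lived after X")
```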
The difference, to me, is how you define the difference between quantum suicide and classical suicide. Everett’s daughter killing herself in all universes where she outlived him only sounds like quantum suicide to me if her death was linked to his in a mechanical and immediate manner; otherwise, with her suffering in the non-preferred universe for a while, it just sounds like plain old suicide.
So it is, cool.
I hadn’t heard of this recorded death—can you give more details?
Everett’s daughter, Elizabeth, suffered from manic depression and committed suicide in 1996 (saying in her suicide note that she was going to a parallel universe to be with her father).
Let me now summarize my position on quantum suicide: I endorse it
End of conversation. I did not read beyond that sentence.
I am sorry to hear this, and I don’t really understand it.
Surely actually performing quantum suicide would be very stupid.
I might try it once I have uploaded a copy to some general purpose quantum substrate. Just to see if it works :-)
The difference, to me, is how you define the difference between quantum suicide and classical suicide. Everett’s daughter killing herself in all universes where she outlived him only sounds like quantum suicide to me if her death was linked to his in a mechanical and immediate manner; otherwise, with her suffering in the non-preferred universe for a while, it just sounds like plain old suicide.
The difference between quantum and classical seems to be distinct from that between painless and painful.
I started a discussion: Help: Which concepts are controversial on LW