Why I think that the MWI is belief in belief: buy a lottery ticket and commit suicide if you lose (a version of the quantum suicide/immortality setup), thereby creating an outcome pump for the subset of branches where you survive (the only ones that matter). Thus, if you subscribe to the MWI, this is one of the most rational ways to make money. So, if you need money and don’t follow this strategy, you are either irrational or don’t really believe what you say you do (most likely both).
(I’m not claiming that this is a novel idea, just bringing it up for discussion.)
Possible cop-out: “Oh, but my family will be so unhappy in all those other branches where I die.” LCPW: say, no one really cares about you all that much, would you do it?
That’s not many worlds, that’s quantum immortality. It’s true that the latter depends on the former (or would if there weren’t other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.
You’re correct that with that outcome pump, some copies of you would win the lottery. However, I disagree that you should kill yourself upon noticing that you’d lost. This has been discussed on LW before.
These seem like more cop-outs rather than LCPWs: the Failure Amplification does not happen in a properly constructed experiment (it is easy to devise a way to die with sufficient reliability and fewer side effects in case of a failure). If you can only find a 99.9%-sure kill, then you can still accept bets at odds of up to 1000:1. The Quantum Sour Grapes argument is a math error: the (implicit) expected utility in the winning case is taken over all branches instead of only over those where you survive, as was pointed out in the comments, though the author refuses to acknowledge it. There are more convenient worlds in some of the comments.
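The disputed accounting in the “Quantum Sour Grapes” point can be made concrete with a toy calculation. All numbers below are hypothetical placeholders I chose to illustrate the two conventions; they are not taken from the linked post:

```python
# Two ways to score the quantum-suicide lottery (illustrative numbers only).
p_win = 1e-6   # assumed probability of winning the lottery
u_win = 1e6    # assumed utility of the winning branch
u_dead = -1e3  # assumed utility of a branch in which you die

# "Sour grapes" accounting: average over ALL branches.
eu_all_branches = p_win * u_win + (1 - p_win) * u_dead

# Quantum-immortality accounting: average only over branches where you
# survive; with a perfectly reliable device, every survivor is a winner.
eu_surviving_branches = u_win

print(eu_all_branches)        # negative: the gamble looks terrible
print(eu_surviving_branches)  # positive: the gamble looks trivially good
```

The disagreement in the thread is precisely over which of these two numbers a rational agent should maximize; the sketch only shows that they can have opposite signs, not which convention is correct.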
Maybe if you’re a particularly silly average utilitarian.
This doesn’t make sense. If I’m copied 5 times (in this Everett branch; nothing about other branches), and one of my copies wins the lottery, I still wouldn’t want to kill myself. This doesn’t mean that I wouldn’t believe my copies existed—it’s just that their existence wouldn’t automatically move me to suicide.
Why, then, would I want to kill myself if the copies happen to be located in different Everett branches? What does their location have to do with anything?
I don’t follow your example...
Here’s an example: let’s assume for a second that there’s no MWI, that there’s only one world. Let’s assume further that you’re copied atom-for-atom 5 times and each copy is placed in a different city. One of your copies is guaranteed to win the lottery, and a copy other than you wins. Once you find out you lost, do you kill yourself in order to be the one who won the lottery? NO! Killing yourself wouldn’t magically transform you into the copy that won the lottery; it would just make you dead.
So why should the logic be different when you apply it to copies in different Everett branches than when you apply it to copies in different cities of the same Everett branch?
I must still be missing your point. 4⁄5 of you would be dead, but only the branches where you survive matter. No “magical transportation” required.
Did you not read the sentence where my hypothetical is placed in a single world, with no “branches”? Can you, for the moment, answer the question for a world in which there are no branches?
In fact forget the multiple copies altogether, think about a pair of twins. Should one twin kill themselves if the other twin won the lottery, just because 1⁄2 of them would be dead but “only the twin which survives” matters?
Ah, now I understand your setup. Thank you for simplifying it for me. So the issue here is whether to count multiple copies as one person or separate ones, and your argument with twins is pretty compelling… as far as it goes. Now consider the following experiment (just going down the LCPW road to isolate the potential belief-in-belief component of the MWI):
The lottery is set up so that you either win big (the odds are small, but finite) or, the rest of the time, you die instantly and painlessly, with very high reliability, to avoid the “live but maimed” cop-out. Would you participate? There is no problem with twins here: no live-but-winless copies ever exist in this scenario.
Same thing in a fantasy-like setting: there are two boxes in front of you, opening one will fulfill your dreams (in the FAI way, no tricks), opening the other will destroy the world. There is no way to tell which one is which. Should you flip a coin and open a box at random?
You value your life (and the world) much more highly than simply fulfilling your dreams, so if you don’t believe in the MWI, you will not go for it. If you believe in the MWI, then the choice is trivial: one regular world before, one happy world after.
What would you do?
Again, there are many standard cop-outs: “but I only believe in the MWI with 99% probability, not enough to bet the world on it”, etc. These can be removed by suitably tweaking the odds or the outcomes. The salient feature is that the multiple-copies argument no longer applies.
I think this is where you’re losing people. Why isn’t it “one regular world before, 999,999 horrifying wastelands and 1 happy world after”? (Or, alternatively, “one horrifying wasteland with .999999 of the reality fluid and one happy world with .000001 of the reality fluid”?)
I’d need to understand how consciousness works in order to understand whether “I” would continue in this sense. Until then I’m playing it cautious, even if the MWI were certain.
That’s not as easy as you seem to think. If I believe in the MWI with my current estimate of about 85%, and you think you can construct an appropriate scenario for me by merely adjusting the odds or outcomes, then do you think you can construct an appropriate scenario even for someone who believes in the MWI with only 10% probability, or 1%, or 0.01%? What’s your estimated probability for the MWI?
Plus, I think you overestimate my capacity to figure out what I would do if I didn’t care whether anyone discovered me dead. There probably were times in my life when I would have killed myself if I hadn’t cared about other people discovering me dead, even without the hope of a lottery-ticket reward.
I agree; certainly 85% is not nearly enough. (1 chance out of 7 that I die forever? No, thanks!) I think this is the main reason no one takes quantum immortality seriously enough to set up an experiment: their (probably implicit) disutility of dying is extremely large, enough to outweigh any kind of monetary payoff. Personally, I give the MWI in some form a 50⁄50 chance (not enough data to argue one way or the other), and a much smaller chance to its literal interpretation of worlds branching every time a quantum measurement happens, which is what makes quantum immortality feasible (probably 1 in a million, but the error bars are too large to make a bet).
Unfortunately, you are apparently the first person to admit that doubt in the MWI is the reason behind their rejection of experimental quantum suicide. Most other responses are still belief-in-belief.
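The point that the disutility of dying outweighs any monetary payoff can be sketched quantitatively. The utilities below are placeholders of my own, not figures from the thread: given a credence p that MWI-plus-quantum-immortality holds, the gamble is worth taking only if p·u_win + (1 − p)·u_dead > 0, and a hugely negative u_dead pushes the break-even credence toward 1:

```python
# Break-even credence for the quantum-suicide gamble (assumed utilities).
u_win = 1e6    # utility of surviving into the winning branch (assumption)
u_dead = -1e9  # utility of permanent death if MWI/QI is false (assumption)

# Solve p * u_win + (1 - p) * u_dead = 0 for p:
p_break_even = -u_dead / (u_win - u_dead)
print(p_break_even)  # ~0.999: an 85% credence falls far short
```

With these numbers, even a 99% credence in the MWI would not clear the bar, which matches the “85% is not nearly enough” reaction above.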
To whom? The branches in which I end up dead one way or another certainly matter to me. (Which is fortunate, since I don’t have any real hope of continuing to live for infinity.)
Why do they matter to you?
I’m supposed to know this? My thought process went: 1. The branch in which I currently live (and all its descendants) matters to me; 2. I assign a very low probability to not dying eventually; 3. Believing 2 does not seem to affect 1 at all.
Why does your life matter to you?
Lol… no?
If you really believe that, there’s no need for a lottery ticket. Just kill yourself in every single world where you’re not the richest person in the world. Thus the only branch where you survive will be the one in which you’re the richest person in the world.
(Assuming an “every conceivable outcome is physically realised” version of MWI; but then, the lottery-ticket gedankenexperiment does that as well.)
Quantum immortality: You kill yourself and die in the vast majority of Everett branches. But you find yourself alive, because you continue to observe only the Everett branches where you survive.
Lottery Ticket Win: You kill yourself if you get a losing ticket. By QI, you find yourself alive… with a losing lottery ticket.
The branch where you won the lottery diverged from your current branch before you killed yourself. There’s no way to transport yourself into that branch.
(For the record, I believe that QI is pure BS.)
If the suicide method fails less often than the lottery pays out, then in most surviving branches you end up rich.
If the experience of the surviving copies is what’s important to you, just do what Aris Katsaris suggests and call it a day. (I.e., upload yourself to a million sims and wait to see if one of the copies wins. If none of them does, delete everything and start over. If any of them does, delete all the other copies and then kill yourself. HAPPII ENDO da ze~)
Just don’t complain if everyone else reacts with “what an idiot”.
ETA: Noticed shminux’s response to Aris in the sibling. Continuing the discussion there.
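The claim above that a kill more reliable than the odds of winning leaves you rich in most surviving branches follows from simple branch counting. A sketch with toy probabilities (my own, purely illustrative):

```python
# Fraction of surviving branches in which you hold a winning ticket.
def rich_fraction(p_win, p_fail):
    """p_win: chance of winning; p_fail: chance the suicide device fails."""
    survive_rich = p_win                 # won, never triggered the device
    survive_poor = (1 - p_win) * p_fail  # lost, but the device failed
    return survive_rich / (survive_rich + survive_poor)

# Device failure rarer than a win: most survivors are rich.
print(rich_fraction(p_win=1e-6, p_fail=1e-7))  # ~0.91
# Device failure more common than a win: most survivors are poor.
print(rich_fraction(p_win=1e-6, p_fail=1e-5))  # ~0.09
```

The crossover is exactly at p_fail = p_win, which is why the reliability of the kill, not the absolute odds of the lottery, is what the argument turns on.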
Yeah, but once you average net worth over reality fluid volume, you end up poorer than before.
No: if I follow that strategy, it makes it more likely that others will follow it; so even if I do successfully end up in a world where I won the lottery, it may also be a world where all my loved ones committed suicide.
Note that I said:
When considering this, I thought of another related question. If MWI/Quantum Immortality insists that you not die, would it also insist that you come into existence earlier? If you can’t die ever (because that keeps you in more branches), then the earlier you’re born, the more branches in which you are alive, therefore MWI/Quantum Immortality indicates that if you exist the most likely explanation is… (I don’t know. I seem to be confused.)
MWI/Quantum Immortality feels like puddle thinking. But I’m not sure I fully understand puddle thinking either, so me saying MWI/Quantum Immortality feels like puddle thinking feels like me explaining a black box with a smaller black box inside.
Given those thoughts, I think my next step is to ask “In what ways is MWI/Quantum Immortality like puddle thinking, and in what ways is it not like puddle thinking?”
… imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!’ This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it’s still frantically hanging on to the notion that everything’s going to be all right, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
Reference to puddle thinking: http://en.wikipedia.org/wiki/Fine-tuned_Universe#In_fiction_and_popular_culture
I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)
Therefore, I believe that the outcome (I win the lottery ticket in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds) which is itself worse than (I don’t waste money on a lottery ticket in any world).
Least Convenient Possible World, I assume, would be believing that my life has negative utility unless I won the lottery, in which case, sure, I’d try quantum suicide.
What? No! All of the worlds matter just as much, assuming your utility function is over outcomes, not experiences.
The LCPW is the one where your argument fails while mine works: suppose only the worlds where you live matter to you, so you happily commit suicide if you lose. So any egoist who believes the MWI should use quantum immortality early and often, if he or she is rational.
An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.
But a rational agent with a utility function only over experiences would commit quantum suicide, if we also assume there’s minimal risk of the suicide attempt failing, of the lottery not really being random, etc.
In short, it’s an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn’t imply MWI is a belief-in-belief.
So far, most replies are of the invisible-dragon-in-your-garage type: multiple reasons why looking for it would never work, so one should not even try. This is a classic signature of belief in belief.
A mildly rational reply from an MWI adept would sound as follows: “While the MWI-based outcome pump has some issues, the concept is interesting enough to try to refine and resolve them.”
Except that you’re the only one who’s postulating the dragon, while everyone else is going “Of course dragons don’t exist, why’d we look for them? We should look for unicorns, dammit, unicorns! Not fire-breathing lizards!”