Applied quantum suicide. If we are in a Big World then all we really care about is probabilities, and we can modify those probabilities by selectively removing ourselves from particular universes.
The philosopher David K. Lewis has already supplied (pdf warning) a reductio of ‘quantum immortality’: the basic problem is that you’re more likely to end up crippled than healthy. (And if you’re crippled then you’re in no position to do lots of snazzy instrumental stuff.) See page 21.
Though Lewis himself doesn’t actually carry his reasoning forwards this far, we can finish off: Since the distinction between a crippled, almost-extinct mind and no mind at all is a blurry continuum, with no non-arbitrary way of measuring places along it, the event “you survive your attempted suicide” is not even well defined, and neither is “the probability that you survive”, nor “the probability that you are crippled, given that you survive”.
The whole notion of ‘quantum immortality’ is monumentally confused. Putting a gun to your head, firing and seeing whether you find yourself in a quantum miracle-world with virtually zero probability is exactly as reasonable a test of the many worlds interpretation as seeing whether a third arm spontaneously erupts from your chest.
Agreed. I’d never go about it that way. If I wanted to test the many worlds interpretation I’d do the following:
Strap a few pounds of high explosives around my head and connect the detonator to a computer. Have the computer select a random number between 1 and 1000 via some unbiased quantum process. Program the computer so that if any number greater than 1 is generated, the detonator is activated; otherwise it does nothing. Run the program. Run it again. And again, until I’m satisfied the many worlds interpretation is correct.
The important things, in my opinion, are:
The method of death should be faster than most thought processes. Detonation velocities of high explosives are typically far greater than the propagation speed of action potentials. This ensures you don’t accidentally observe something that commits you to a world with an almost sure probability of death (in the realm where only ‘magical’ quantum effects could save you).
The probability that the method of death fails to kill you should be thousands of orders of magnitude smaller than the probability that the method is never activated at all. In the case of my above example, the probability of finding yourself in a universe where you survived a high-explosive blast at point-blank range is essentially zero compared to the probability of getting a 1 out of 1000.
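For concreteness, here is a minimal sketch in Python of the control logic described above. It is purely illustrative: `quantum_randint` and `detonate` are made-up placeholder names standing in for an unbiased quantum entropy source and the detonator interface, and nothing here is meant as an actual experiment.

```python
# Hypothetical sketch of the setup described above; the two placeholder
# functions are not real APIs and simply raise if called.

def quantum_randint(low, high):
    """Return an integer in [low, high] drawn from an unbiased quantum
    process, so every outcome occurs in some branch (placeholder)."""
    raise NotImplementedError("requires quantum hardware")

def detonate():
    """Trigger the detonator faster than an action potential can
    propagate (placeholder)."""
    raise NotImplementedError

def run_trial():
    # You survive only in the 1-in-1000 branch where the draw comes up 1.
    if quantum_randint(1, 1000) > 1:
        detonate()

def run_until_satisfied(k):
    # Only a ~(1/1000)**k sliver of branches observes all k trials pass,
    # which is the observation the commenter proposes to treat as evidence.
    for _ in range(k):
        run_trial()
```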
Note, with my above setup, it is very easy to transition from testing the many worlds hypothesis to actually using it to your advantage. Want to factor a large number? Randomly sample the solution space on your computer, detonating only if the random sample isn’t a solution. (Make sure to give the device a small initial fail-safe probability of never arming, in case no solution exists!)
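A similarly hypothetical sketch of the factoring variant, including the fail-safe mentioned above for the case where the number turns out to be prime. The classical `random` module merely stands in for quantum sampling, and `detonate` is the same placeholder as before.

```python
import random  # classical stand-in; the real trick needs quantum sampling

def detonate():
    raise NotImplementedError  # placeholder, as in the sketch above

def quantum_suicide_factor(n, failsafe=1e-6):
    """Guess a factor of n and detonate on failure.  `failsafe` is the
    small initial probability of never arming the device, so that a
    prime n does not detonate in every branch."""
    if random.random() < failsafe:
        return None                    # device never armed in this branch
    guess = random.randint(2, n - 1)
    if n % guess != 0:
        detonate()                     # every non-factor branch ends here
    return guess                       # observed only where guess divides n
```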
There’s an old joke about this related to the problem of sorting lists. A proposed sorting method is to take the list you want sorted and randomly rearrange it. If the result isn’t sorted, destroy the universe.
Your approach seems directly inferior from a utilitarian perspective, because it will lead to many universes where not only have you failed to factor the number, but the rest of us will miss your company (and be stuck cleaning up a large mess).
The solution, of course, is to replace high explosives with the LHC.
This is the most awesome idea I’ve heard all day. But you could do a lot better than factoring large numbers—you could set it up to detonate only if the random number is not the winning lottery pick!
I’ll be in my basement rigging the explosives...
Oh yeah. You can solve any problem in PSPACE. You can basically directly sample the entire space of all programs (with bounded memory).
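The pattern behind this claim, sketched very loosely below, is “sample a candidate, verify it, detonate on failure”. As written it only obviously handles problems whose answers you can actually check, so treat the PSPACE claim as the commenter’s rather than the code’s; all names here are made up for illustration.

```python
import random

def detonate():
    raise NotImplementedError  # placeholder, as before

def anthropic_search(sample_candidate, accept, failsafe=1e-9):
    """Guess-verify-detonate: a candidate is returned only in branches
    where accept() passed (or where the fail-safe left the device unarmed)."""
    if random.random() < failsafe:
        return None
    candidate = sample_candidate()
    if not accept(candidate):
        detonate()
    return candidate

# "Sampling the space of all bounded programs" would look something like:
#   anthropic_search(lambda: bytes(random.getrandbits(8) for _ in range(64)),
#                    runs_and_solves_my_problem)   # hypothetical verifier
```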
Screw the lottery. You could make trillions on the stock market. Afterwards sample the entire space of all love letters, send them off to famous movie stars, then detonate only if you don’t get an eager response back. You might need a delegate to read the letter, as you reading it personally would shunt you into particular universes.
But would you really need the love letters if you had the trillions? I’d think a bank statement would suffice.
ETA: Ok, I’m confused. What’s going on with the downvoting? I’m honestly not concerned at all about the karma, just mystified.
I downvoted because of the cynicism expressed in the idea that money can buy love. It read like a bitter complaint that girls (or guys) just want money.
Upvoted kodos and downvoted you because I don’t see that cynicism in the grandparent.
Downvoted you for downvoting me for explaining why I downvoted kodos.
ETA: The cynicism was in saying that money could replace love letters. Also, the original post was about quantum suicide and using it to find the most effective love letter, and the comment about money sort of missed the point, and read like a cheap shot against love.
Downvoted you for downvoting me for explaining why I downvoted you ;> Are you saying that copy-pasted love letters are an adequate substitute for actual love? That sounds pretty cynical to me. But I still don’t see any inappropriate cynicism coming from kodos.
Upvoted you for being meta.
No, it’s not that the letters are an actual substitute for love, it’s more the cynical attitude, “yeah, anyone will love you if you have enough money.”
Wow... just... wow.
First of all… it was a joke
Second of all, I don’t see the idea of money being able to buy love as being any more or less cynical than randomly generated spam love letters being able to buy love...
Third of all… Blueberry, weren’t you one of the people on the wrong side of the PUA debate? And you don’t see any irony in now acting all holier than thou about cynical attitudes toward mating?
Fourth of all… it was a joke!!!! I mean, seriously people.
Regardless, upvoting both of you back up to 0, cause I don’t think people should be penalized for explaining their downvotes when asked to do so.
ETA: Wow, this is getting ridiculous. I think it’s now safe to say that Human Mating Habits Are The Mind Killer, even more so than politics.
ETA2: LOL@downvoting people in retaliation for explaining why they’re upvoting you ;)
I thought this whole thread was a joke! And I’m sorry for any offense I caused.
Just for clarification: I’m not sure what you mean about the “wrong side” of the debate, but I support PUA and see it as a positive and productive method for helping men and women develop social skills, understand each other, and have better relationships. I see PUA as the opposite of cynical.
More to the point, it’s not well substantiated that the individuals in question would be drawn to riches—there are many people who are, but not nearly 100% of the population. I met a woman who once had a member of The Eagles chatting her up and turned him down.
That’s exactly as reasonable a test of the many worlds interpretation as ‘flipping a coin’ (or a quantum version thereof) lots of times and seeing whether you get all heads.
Oh, and I don’t think you’ve factored in Lewis’ point yet. What he’s saying, in essence, is that you can ‘never really die’. Even when the explosion goes off, the destruction of your head will have to proceed one micro-event after another, and if any one of those micro-events should be one that would finally ‘extinguish’ your consciousness, then your awareness will (by the logic of quantum immortality) ‘jump ship’ to the somewhat-less-likely world where it doesn’t happen.
So you’ll end up ‘finding yourself’ in one of the fantastically unlikely worlds where the explosive only maims you.
This is precisely what my example avoids. There are substantially more worlds where you got a 1 and there was no explosion, than worlds where there was an explosion but you somehow managed to survive.
Hmm. OK, you have a point there.
Still, the mere fact that, if your reasoning is valid, it must also be true (as explained above) that “you can never really die” constitutes a reductio.
Alternatively, if you want to say that your consciousness really can cease as long as it happens gradually, then how can there possibly be a principled boundary between ‘sudden enough that you’ll survive’ and ‘not sudden enough’?
You spoke earlier of making sure that the method of death was faster than most thought processes, so as to avoid ‘committing yourself’ to a world where you die. But where’s the boundary between ‘committing yourself’ and not doing so? Can you “only partially” commit yourself? How would that work?
Doesn’t make sense.
Nope, it doesn’t. Unfortunately, we don’t need the many worlds hypothesis to run into this trouble. The trouble already exists in this single universe, assuming consciousness is computable. Just replace quantum world splitting with mind copying. Check out the Anthropic Trilemma.
If I make an exact copy of you, wait X minutes, and then instantly kill one of you, how big must X be before this is murder? Beats me. I suspect there is no hard line.
I would be willing to undergo such a procedure for 10 dollars if X is a minute or less (and you don’t kill me in front of me, no other adverse effects, etc.). If X is 10 minutes, probably about 100 dollars.
Interesting post!
Personally I think the third option is ‘obviously correct’. There isn’t really such a thing as a ‘thread of persisting subjective identity’. And this undermines the idea that in the quantum suicide scenario you should ‘expect to become’ the miraculous survivor.
All we can say is that the multiverse contains ‘miraculous observers’ with tiny ‘probability weights’ attached to them—and we can even concede that some of them get round to thinking “hang on—surely this means Many Worlds is true?” But whether their less unlikely counterparts live or die doesn’t affect this in any way.
I’ve never taken quantum suicide seriously, even given MWI. You speak of probabilities, but what about measures? How do I gain by ensuring that the vast majority of possible futures do not contain anything resembling me, even if the majority of those that do give me a lottery jackpot? All I’m doing if I blow myself up for failing to win the lottery is erasing the overwhelming majority of my future selves, who if asked would very likely object.
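A toy calculation of the distinction being drawn here, with made-up lottery odds: conditioning on survival makes the win look certain, while the total measure of branches that still contain you collapses to the raw lottery odds.

```python
p_win = 1e-7  # made-up probability of holding the winning ticket

# No quantum suicide: you survive in essentially every branch.
plain = {"survival_measure": 1.0, "p_win_given_survival": p_win}

# Quantum suicide (detonate on every losing ticket): only the winning
# branches still contain you, but within them the win is certain.
suicide = {"survival_measure": p_win, "p_win_given_survival": 1.0}

print("plain:  ", plain)
print("suicide:", suicide)
```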
Certainly, if you choose to care about total measure, by all means do so. Personally, I care about subjective experience, and couldn’t give a blast what my total measure throughout the multiverse is (except insofar as it affects the subjective experience of other people).
We can modify probabilities conditional on the existence of future versions of ourselves, but those aren’t necessarily the only probabilities we care about.
My antidote to this particular variety of universal acid in general and quantum suicide in particular: http://lesswrong.com/lw/208/the_iless_eye/