Quantum suicide seems like a good idea to me if we know that the assumptions behind it (both quantum and identity-related) are true, if we’re purely selfish (e.g., we don’t care about the bereaved left behind), and if we don’t assume our actions are sufficiently correlated with those of others that everyone tries quantum suicide and each of us ends up all alone in our own personal Everett branch.
Fortunately, if you combine the second and third potential problems, you end up with a solution that eliminates both of them. Then you just have the engineering problem of building a bigger death box.
However, I might have the same “It’s a good idea, but I am going to refuse to do this for reasons of personal sanity” reaction as I have with Pascal’s Mugging.
I hope so. Your position is entirely consistent—I cannot fault it on objective grounds and what you say in your post does directly imply what you confirm in your comment. That said, the preferences you declare here are vastly different to those that I consider ‘normal’ and so there remains the sneaking suspicion that you are wrong about what you want. That is, that you incorrectly extrapolate your volition.
On the other hand, the existence of people with the preferences you describe here is a great potential boon to the rest of us. Whenever parties have vastly different values there is the potential for trade between them, and the difference in values between those that care about measure and those that don’t rounds off to absolute. When you act on your preferences we can essentially just inherit all of your stuff in exchange for a (from our perspective) token probabilistic payout. Everybody wins!
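To make the “everybody wins” arithmetic concrete, here is a toy calculation in Python; the survival measure, estate value, and premium are all made-up numbers, not anything taken from the thread.

```python
# Toy model of the proposed trade, with entirely made-up numbers.
p_survive = 1.0 / 16   # assumed measure remaining after the quantum roulette
estate = 1_000_000     # hypothetical value of the suicider's stuff
premium = 10_000       # hypothetical token payout offered by the inheritor

# Ordinary, measure-weighted valuation of the inheritor's side of the deal:
# pay the premium for sure, inherit the estate in the non-surviving measure.
inheritor_ev = (1 - p_survive) * estate - premium

# The suicider, caring only about branches in which they still exist,
# values the deal conditional on survival: they keep the estate and
# pocket the premium in every branch they care about.
suicider_value_if_surviving = estate + premium

print(f"Inheritor's measure-weighted expected gain: {inheritor_ev:,.0f}")
print(f"Suicider's value in surviving branches:     {suicider_value_if_surviving:,.0f}")
```

Both parties evaluate the same deal as a gain, which is the whole point of trading across such different values.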
(Not sure where to put this:) Yvain’s position doesn’t seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you’ve never heard of quantum suicide or where for whatever reason you thought it was a stupid idea. Anticipating ending up in a world with basically no measure just doesn’t make sense: you’re literally making yourself counterfactual. If you decided to carve up experience space into bigger chunks of continuity then this problem goes away, but most people agree that (as Katja put it) “anthropics makes sense with shorter people”. Suicide only makes sense if you want to shift your experience backwards in time or into other branches, not in order to have extremely improbable experiences. I mean, that’s why those branches are extremely improbable: there’s no way you can experience them, quantum suicide or no.
Here is fine.

Yvain’s position doesn’t seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you’ve never heard of quantum suicide or where for whatever reason you thought it was a stupid idea.
This doesn’t seem to be a problem that comes from being quantum suicidal but rather from an entirely different kind of anthropic-based suicidal insanity. That is, I would not predict that experience, as evaluated by Yvain’s model of caring, would be perceived this way. It certainly could be, but that would be an additional insanity on top of the one that makes quantum roulette desirable. (No offense to Yvain and his Quantum Suicidal ilk in referring to this as ‘insanity’. I mean only ‘drastically different preferences to my own in an agent similar enough to me that such comparison is meaningful’. In fact, if you’re going to limit your optimisation to tiny amounts of measure then go ahead and exterminate humanity to maximise paperclips for all I care!)
To expand somewhat: quantum suiciding at (subjective) time t results in you at time t-1 having more measure than you at time t+1, but under default quantum-suicidal preferences these are in no way in competition. Relative measure between past and future selves isn’t any particular issue. There are just various subjective experiences at t-1, t, and t+1, and a desire to have each of them be as positive on average as possible, but no particular inclination to trim measure in one part of a timeline to increase it in another. For example, I wouldn’t expect Yvain to (consider it rational to) commit conventional-and-complete suicide whenever it seemed like all his peak experiences were in the past and all that remained in life was to make the most of the dregs.
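A toy bit of bookkeeping to illustrate that point; the per-round survival fraction and the number of rounds are assumptions chosen only for the example.

```python
# Toy bookkeeping of measure along one subjective timeline, assuming each
# quantum-suicide round keeps a fraction p of the previous round's measure.
p = 0.5        # assumed surviving fraction per round
rounds = 5

measure = 1.0
timeline = [("t-1 (before any QS)", measure)]
for k in range(1, rounds + 1):
    measure *= p
    timeline.append((f"t+{k} (after round {k})", measure))

for label, m in timeline:
    print(f"{label:>22}: measure {m:.4f}")

# The self at t-1 has measure 1.0 no matter what happens later; nothing in
# this table trades measure at one point of the timeline for measure at another.
```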
You’re not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)
(Not Will, but I think I mostly agree with him on this point)
There is no such thing as a uniquely specified “next experience”. There are going to be instances of you that remember being you and consider themselves the same person as you, but there is no meaningful sense in which exactly one of them is right. Granted, all instances of you that remember a particular moment will be in the future of that moment, but it seems silly to care only about the experiences of that subset of instances of you and completely neglect the experiences of instances that only share your memories up to an earlier point. If you weight the experiences more sensibly, then in the case of a rigorously executed quantum suicide the bulk of the weight will be in instances that diverged before the decision to commit quantum suicide. There will be no chain of memory leading from the QS to those instances, but why should that matter?
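A rough sketch of that weighting; how much measure actually commits to the QS and how much of it survives are both made-up numbers.

```python
# Toy weighting of future instances that share your memories up to some
# earlier moment, under made-up assumptions about branching before and
# after the decision to attempt quantum suicide (QS).
commits = 0.01        # assumed measure-fraction that goes through with the QS
survives = 2 ** -20   # assumed surviving fraction within the committed branches

diverged_before = 1.0 - commits     # instances that never pull the trigger
post_qs_survivor = commits * survives

total_live = diverged_before + post_qs_survivor
print(f"weight on instances that diverged earlier: {diverged_before / total_live:.10f}")
print(f"weight on the post-QS survivor:            {post_qs_survivor / total_live:.10f}")
```

Almost all of the weight sits on instances with no memory chain leading back through the QS.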
So, if Omega was willing to put a copy of you in an Everett branch that didn’t already have one, how much money would you be willing to bid for this service?
If Omega was going to charge $100, and the offer remained open for as many Everett branches as you wanted, how many $100s would you give Omega?
So, if Omega was willing to put a copy of you in an Everett branch that didn’t already have one, how much money would you be willing to bid for this service?
I’m not used to evaluating the worth of Everett branches by count. But for the purpose of this question may I assume you mean “another Everett branch of equal measure to this one, as of the time I click ‘comment’”?
As for an answer… um… I’m not sure, a fair bit? Working out my preferences, quantitatively, in situations so far outside the usual realm of operation is tricky.
If Omega was going to charge $100, and the offer remained open for as many Everett branches as you wanted, how many $100s would you give Omega?
After I gave him everything I had, I would get a new job that more closely matched my potential for financial gain.
Two extra considerations:
Even aside from a terminal preference for measure maximisation, I would consider buying more measure purely for the purpose of giving me acausal bargaining power. (I’m even less sure about quantitatively evaluating the usefulness of acausal bargaining power.)
Buying more equal-measure branches is different to trying to preserve measure in the branch we are already in. While I think I have preferences such that I would buy a new one, I’m not sure whether the default behavior of humans would be to do so.
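A minimal sketch of the purchase logic in this exchange, assuming (purely for illustration) that value is roughly linear in total measure and that each extra equal-measure branch is worth some fixed amount more than Omega’s $100 price; the function and numbers are hypothetical.

```python
# Minimal sketch of the Omega offer under an assumed linear-in-measure valuation.
# All names and numbers here are illustrative assumptions.
def branches_to_buy(budget: float, price: float = 100.0,
                    value_per_branch: float = 250.0) -> int:
    """Buy extra equal-measure branches as long as each is worth more than it costs."""
    if value_per_branch <= price:
        return 0                      # not worth buying any at this price
    return int(budget // price)       # otherwise the budget is the only limit

print(branches_to_buy(budget=5_000))  # -> 50: spend everything, then earn more
```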