Why does patternism [the position that you are only a pattern in physics and any continuations of it are you/you’d sign up for cryonics/you’d step into Parfit’s teleporter/you’ve read the QM sequence]
not imply
subjective immortality? [you will see people dying, other people will see you die, but you will never experience it yourself]
(contingent on the universe being big enough for lots of continuations of you to exist physically)
I asked this on the official IRC, but only feep was kind enough to oblige (and had a unique argument that I don’t think everyone is using)
If you have a completely thought out explanation for why it does imply that, you ought never to be worried about what you’re doing leading to your death (maybe painful existence, but never death), because there would be a version of you that would miraculously escape it.
If you bite that bullet as well, then I would like you to formulate your argument cleanly, then answer this (rot13):
jul jrer lbh noyr gb haqretb narfgurfvn? (hayrff lbh pbagraq lbh jrer fgvyy pbafpvbhf rira gura)
ETA: This is slightly different from a Quantum Immortality question (although the resolutions might be similar) - there is no need to involve QM or its interpretations here. Even in a classical universe (as long as it’s large enough), if you’re a patternist, you can expect to “teleport” to another exact clone somewhere that manages to live.
You can fall asleep, so it seems like continuity does get broken sometimes.
I think it does imply subjective immortality. I’ll bite that bullet. Therefore, you should sign up for cryonics.
Consciousness isn’t continuous. There can be interruptions, like falling asleep or undergoing anesthesia. A successor mind/pattern is a conscious pattern that remembers being you. In the multiverse, any given mind has many, many successors. A successor doesn’t have to follow immediately, or even follow at all in time. At the separations implied even for a Tegmark Level I multiverse, past and future are meaningless distinctions, since there can be no interactions.
You are your mind/pattern, not your body. A mind/pattern is independent of substrate. Your unconscious, sleeping self is not your successor mind/pattern. It’s an unconscious object that has a high probability of creating your successor (i.e. it can wake up). Same with your cryonically preserved corpsicle, though the probability is lower.
Any near-death event will cause grievous suffering to any barely-surviving successors, and grief and loss to friends and relatives in branches where you (objectively) don’t survive. I don’t want to suffer grievous injury, because that would hurt. I also don’t want my friends and relatives to suffer my loss. Thus, I’m reluctant to risk anything that may cause objective death.
But, the universe being a dangerous place, I can’t make that risk zero. By signing up for cryonics, I can increase the measure of successors that have a good life, even after barely surviving.
In the Multiverse, death isn’t all-or-none, black or white. A successor is a mind that remembers being you. It does not have to remember everything. If you take a drug that causes you to not form long-term memory of any event today, have you died by the next day? Objectively, no. Your friends and relatives can still talk to “you” the next day. Subjectively, partially. Your successors lack certain memories. But people forget things all the time.
Being mortal in the multiverse, you can expect that your measure of successors will continue to diminish as your branches die, but the measure never reaches absolute zero. Eventually all that remains are Boltzmann brains and the like. The most probable Boltzmann brain successors only live long enough to have a “single” conscious quale of remembering being you. The briefest of conscious thoughts. Their successors remember that thought and may have another random thought. You can eventually expect an eternity of totally random qualia and no control at all over your experience.
This isn’t Hell, but Limbo. Suffering is probably only a small corner of possible qualia-space, but so is eudaimonia. After an eternity you might stumble onto a small Boltzmann world where you have some measure of control over your utility for some brief time, but that world will die, and your successors will again be only Boltzmann brains.
I can’t help that some of my successors from any given moment are Boltzmann brains. But I don’t want my only successors to be Boltzmann brains, because they don’t increase my utility. Therefore, cryonics.
See the Measure Problem of cosmology. I’m not certain of my answer, and I’d prefer not to bet my life on it, but it seems more likely than not. I do not believe that Boltzmann brains can be eliminated from cosmology, only that they have lesser measure than evolved beings like us. This is because of the Trivial Theorem of Arithmetic: almost all natural numbers are really damn huge. The universe doesn’t have to be infinite to get a Tegmark Level I multiverse. It just has to be sufficiently large.
Are people close to you aware that this is a reason that you advocate cryonics?
I’m not sure what you’re implying. Most people close to me are not even aware that I advocate cryonics. I expect this will change once I get my finances sorted out enough to actually sign up for cryonics myself, but for most people, cryonics alone already flunks the Absurdity heuristic. Likewise with many of the perfectly rational ideas here on LW, including the logical implications of quantum mechanics and cosmology, like Subjective Immortality. Linking more “absurdities” seems unlikely to help my case in most instances. One step at a time.
Actually, I’m just interested. I’ve been wondering whether big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts, or c) go nuts because they think the speaker is crazy; and whether or not it’s a bad idea to bring it up.
I’m not willing to decipher your second question because this theme bothers me enough as it is, but I’ll just say that I’m amazed that figuring this stuff out isn’t considered a higher priority by rationalists. If at some point someone can definitively tell me what to think about this, I’d be glad about it.
“you ought never to be worried about what you’re doing leading to your death (maybe painful existence, but never death), because there would be a version of you that would miraculously escape it.”
I am worried about the version that feels the pain but doesn’t die.
It implies big world immortality, but it may not be nice unless we use it correctly. I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/
You should strive to maximize the utility of your pattern, averaged over both subjective probability (uncertainty) and the squared amplitude of the wave-function.
If you include the latter, then it all adds up to normalcy.
If you select a state of the MWI-world according to the Born rule (i.e. using the squared amplitude of the wave-function), then this world-state will, with overwhelming probability, be compatible with causality, entropy increase over time, and a mostly classical history, involving natural selection yielding patterns that are good at maximizing their squared-amplitude-weighted spread, i.e. DNA and brains that care about squared amplitude (even if they don’t know it).
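To make that concrete, here is a minimal sketch of what averaging a pattern’s utility over both subjective probability and squared amplitude could look like; the hypotheses, branch weights, and utilities below are made up purely for illustration:

```python
# Toy expected-utility calculation: weight each hypothesis by subjective
# probability, and each branch within a hypothesis by its Born weight
# (squared amplitude). All numbers are illustrative placeholders.

hypotheses = {
    # name: (subjective probability, [(squared amplitude, utility), ...])
    "world_A": (0.7, [(0.5, 10.0), (0.5, -2.0)]),
    "world_B": (0.3, [(0.9, 4.0), (0.1, 0.0)]),
}

def expected_utility(hypotheses):
    total = 0.0
    for p_subjective, branches in hypotheses.values():
        born_total = sum(w for w, _ in branches)            # normalize the Born weights
        branch_value = sum(w / born_total * u for w, u in branches)
        total += p_subjective * branch_value
    return total

print(expected_utility(hypotheses))  # roughly 3.88 = 0.7*4.0 + 0.3*3.6
```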
Of course this is a non-answer to your question. Also, we have not yet finished the necessary math to prove that this non-answer is internally consistent (we=mankind), but I think this is (a) plausible, (b) the gist of what EY wrote on the topic, and (c) definitely not an original insight by EY / the sequences.
See my reply to Oscar_Cunningham below; I’m not sure if Egan’s law is followed exactly (it never is, otherwise you’ve only managed to make the same predictions as before, with a complexity penalty!)
I have a problem with the definition: patternism doesn’t fall automatically out of reductionism / naturalism, so it’s not automatically accepted by those who accept cryonics.
Can you help me with this?
It seems to me:
‘reductionism/naturalism’ + ‘continuity of consciousness in time’ + ‘no tiny little tags on particles that make up a conscious mind’ = ‘patternism’
Are you saying that there’s something wrong with the latter two summands? Or that it doesn’t quite add up?
I agree that patternism contingently implies subjective immortality, but I agree with Oscar Cunningham that subjective immortality does not imply not-caring about death. I think patternism is stronger than beliefs that cause people to sign up for cryonics or step into the teleporter or read (even agree with) the QM sequence.
(I’m not convinced that the universe is large enough for patternism to actually imply subjective immortality.)
(Fgvchyngvat ynetr-havirefr cnggreavfz.) Gurer’f na vafgnagvngvba bs zl cnggrea gung unf gur fhowrpgvir rkcrevrapr bs orvat tvira narfgurfvn naq erznvavat pbafpvbhf. Gurer’f znal bs gubfr. Ohg vg’f abg gur vafgnagvngvba gung rkvfgf ba rnegu, juvpu unf gur fhowrpgvir rkcrevrapr bs orvat tvira narfgurfvn naq gura jnxvat hc. Nyfb, nyzbfg nyy bs gubfr bgure cnggreaf qrpburer vagb fbzrguvat ragveryl hayvxr gur cnggrea ba rnegu.
Why wouldn’t it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.
Such as? Subjective immortality isn’t implied by MWI without further cosmological assumptions.
What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a “subsequent” one, and the same seems to apply to all levels of a Tegmark multiverse.
I don’t think MWI is sufficiently well defined or understood for it to be known whether or not that is implied. For example it would not be the case in Robin Hanson’s mangled worlds proposal, and no one knows whether that proposal is correct or not.
Fair enough. I have no argument and low confidence, it just seems vaguely implausible.
“subjective immortality does not imply not-caring about death.”
Sure. You can care about whatever you want to care about, no matter what is the case. But even my version mostly prevents me from caring about death except in fairly short term ways; e.g. I don’t bother to do things that would extend my lifespan, even when I know about them. And I definitely would not bother with cryonics.
If someone offered me a bet giving $0 or $100 based on a quantum coin flip, I’d be willing to pay $50 for it. So it’s clear that I’m acting for the sake of my average future self, not just the best or worst outcome. Therefore I also act to avoid outcomes where I die, even if there are still some possibilities where I live. The fact that I won’t experience the “dead” outcomes is irrelevant; I can still act for the sake of things which I won’t experience.
What about the question of whether I anticipate immortality? Well, if I were planning what to do after an event where I might die, I would think to myself “I only need to think about the possibility where I live, since I won’t be able to carry out any actions in the other case”, which is perhaps not the same as “anticipating immortality” but it has the same effect.
I don’t think that follows exactly. Specifically, that “you’re acting for the sake of things which you won’t experience”.
You are correct in your pricing of quantum flips according to payoffs adjusted by the Born rule.
But the payoffs from your dead versions don’t count, assuming you can only find yourself in non-dead continuations. I don’t know if this is an established position (Bostrom or Carroll have almost surely written about it) or just outright stupidity, but it seems to me that this assumption (of only finding yourself alive) shrinks your ensemble of future states, leaving your decision-theoretic judgements to deal only with the alive ones.
If I’m offered a bet of being given $0 or $100 over two flips of a fair quantum coin, with payoffs:
|00> → $0
|11> → $100
|01> → certain immediate death
|10> → certain immediate death
I’d still price it at $50, rather than $25.
You could say, a little vaguely, that the others are physical possibilities, but they’re not anthropic possibilities.
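Here is a minimal sketch of the two pricings being contrasted, using the branch weights and payoffs from the bet above; counting the death branches as a $0 payoff in the first average is itself just a modeling choice:

```python
# Price the two-flip bet in two ways: averaging over all four equally
# weighted branches (counting the death branches as $0), versus averaging
# only over the branches in which you survive (the "anthropic" pricing).

branches = [
    # (Born weight, payoff in dollars, survive?)
    (0.25, 0.0,   True),   # |00>
    (0.25, 100.0, True),   # |11>
    (0.25, 0.0,   False),  # |01> certain immediate death
    (0.25, 0.0,   False),  # |10> certain immediate death
]

price_over_all = sum(w * pay for w, pay, _ in branches)

alive = [(w, pay) for w, pay, survives in branches if survives]
alive_weight = sum(w for w, _ in alive)
price_given_survival = sum(w * pay for w, pay in alive) / alive_weight

print(price_over_all)        # 25.0 -- the $25 pricing
print(price_given_survival)  # 50.0 -- the $50 pricing
```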
As for “I can still act for the sake of things which I won’t experience” in general, where you care about dead versions, apart from you being able to experience such, you might find Living in Many Worlds helpful, specifically this bit:
Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the 12th century, which are also beyond your ability to affect. But the 12th century is not your responsibility, because it has, as the quaint phrase goes, “already happened”. I would suggest that you consider every world which is not in your future, to be part of the “generalized past”.
If you care about other people finding you dead and mourning you though, then the case would be different, and you’d have to adjust your payoffs accordingly.
Note again though, this should have nothing necessarily to do with QM (all of this would hold in a large enough classical universe).
As for me, personally, I don’t think I buy immortality, but then I’d have to modus tollens out a lot of stuff (like stepping into a teleporter, or even perhaps the notion of continuity).
As I’ve pointed out before, we don’t need to say whether patternism is true, or whether the universe is big or not, to notice that we are subjectively non-mortal: no matter what is the case, we will never experience dying (in the sense of going out of existence).
I guess we’ve had this discussion before, but: the difference between patternism and your version of subjective mortality is that in your version we nevertheless should not expect to exist indefinitely.
Sure. Nonetheless, you should not expect anything noticeably different from that to happen either. The same kinds of things will happen: you will find yourself wondering why you were the lucky one who survived the car crash, not wondering why you were the unlucky one who did not.
This is a good question and I’d be interested to hear answers to it as well.
Briefly, I’ll say that there seem to be plenty of reductio-ad-absurdum arguments for a “self” existing at all, such as the implication of philosophical zombies and the like. Rob Bensinger’s post here goes into this matter a bit.
If these arguments have validity, then it seems to me that neither “annihilation” nor “immortality” can actually be true. In short, it might be that “patternism” implies that there is no “you” at all, besides the conceptual construct your brain has created for itself. But these questions are really important for me to get right, because they call into question whether it is rational, or moral, to try to preserve myself through the use of technologies such as cryonics, if that effort and money could be used somewhere else for a different moral good.
If, by a reduction of “I” to a pattern, you stop being a moral subject, then surely to all other people you can apply the same reduction and they stop being moral subjects too.
True, but there are also the questions of: should I try to ensure that exactly one instance of myself exists at any given moment? Instead of just trying to preserve my body, should I be content with creating a million copies of my mind in some simulation, apathetic to whether or not my “original” still exists anywhere? Or should I be content with creating agents that aren’t like me and don’t have my exact history of experiences, but have the same goals as I do? It seems that these issues turn on somewhat dualist questions about the mind, and on whether there is moral value in preserving this “self”.