If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there’s at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.
Yes, Quantum Immortality is “real”, as far as it goes. The problem is that it is inappropriately named and leads to inappropriate conclusions by misusing non-quantum intuitions. So yes, if you plan to put yourself in a 50% quantum death-box and keep doing so indefinitely, you can expect there to be a branch in which you remain alive through 100 iterations. The mistake is to consider this intuitively closer to “immortality” than to “almost entirely dead”.
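A back-of-the-envelope check on that point (the 50% death-box and the 100 iterations are taken from the comment above; everything else is just illustration):

```python
# Measure (squared amplitude) of the branch in which you are still alive
# after repeatedly entering a 50%-lethal quantum death-box.
p_survive_once = 0.5
iterations = 100

surviving_measure = p_survive_once ** iterations
print(f"measure still alive after {iterations} iterations: {surviving_measure:.3e}")
# -> ~7.889e-31: a branch in which you survive exists, but by measure you
# are "almost entirely dead", not "immortal".
```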
Doesn’t it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?
No. Don’t count branches; aggregate the amplitude of the branches in question. We should expect to die. There happen to be an infinite (as far as we know) number of progressively more ‘improbable’ branches in which we are degrading, but they still aggregate to something trivial. It is like Zeno’s paradox.
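A minimal sketch of what aggregating the amplitude looks like here; the per-step halving of the surviving measure is a toy assumption, not a physical claim:

```python
# Infinitely many progressively less probable "still alive and degrading"
# branches can nevertheless aggregate to a negligible total measure --
# the Zeno's-paradox flavour of the point above.
step_survival = 0.5  # toy measure of persisting through each further step (assumption)

for n in (10, 100, 1000):
    tail_measure = step_survival ** n  # total measure of persisting past step n
    print(f"measure of persisting past step {n}: {tail_measure:.3e}")
# No individual branch ever reaches exactly zero measure, yet the aggregate
# measure of the long tail is trivial: we should expect to die.
```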
I have no idea whether these brief expressions of intuitions that I find useful are at all helpful for you. If not, then, like Carl, I recommend this post, which explores related concepts with diagrams and examples. (But I’m biased!)
(How do you know? People keep saying it, seriously or not, but when one is aware of a source of a bias, it seems as easy to overcompensate as to undercompensate, at which point you no longer know that you are biased.)
I don’t, and I’m sufficiently confident regarding the relevance of said post that a little doubt regarding under- or over-confidence matters little. On top of that, I’m no more biased regarding what I wrote in the past than regarding what I write in any comment I am writing now, so additional warnings of bias for a reference, as opposed to inline text, are largely redundant.
That was a colloquial usage, not a LessWrongian one. It is sometimes appropriate to lampshade self-references so that it does not appear that one is trying to double-count one’s own testimony.
Certainly (hence “seriously or not”). It just irks me when people say things whose literal interpretation translates into wrong or meaningless statements, particularly when those statements are misleading or wrong in a non-obvious way. So my issue is with (the use of) such statements themselves, not the intent behind their usage, which in most cases doesn’t take the literal interpretation into account. (It’s usually possible to find a substitute without this flaw.)
Certainly. It just irks me when people say things whose literal interpretation translates into wrong or meaningless statements
This example fits into a third category. That is, the colloquial meaning is valid, the lesswrong/OvercomingBias connotations are misleading, but it is in fact technically true. I am, after all, biased about my own work, and that is something a reader should consider when I link to it. Assuming no difference in prior impressions of Carl and me, Carl_Shulman’s link should be weighted slightly differently than my own.
I agree that I’d be best served by using different wording, due to the potentially distracting connotations (do you have suggestions?), but I disagree regarding the actual technical wrongness of the statement.
Wait, what did you mean by “I don’t” in the previous comment then? I understood that comment as confirming that you don’t know that you are biased, but in this comment you say “I am, after all, biased about my own work”.
To clarify: by “biased”, I mean a known direction of epistemic distortion, a belief that’s known to be too strong or too weak, a belief that’s not calibrated, and is on the wrong side in a known direction. If the direction of a bias is unknown, it doesn’t count as a bias (in the sense I used the word).
By this definition, knowing that you’re biased means knowing something about the way in which you’re biased, which can be used to update the belief until you no longer have such actionable information about its updated state. For example, if you expect that you estimate the quality or relevance of your own post as higher than its actual quality or relevance, this is actionable information to adjust your estimation down. After you do that, you will no longer know whether the adjusted estimation is too high or too low, so you are no longer biased in this sense.
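A toy numeric version of that adjustment (the figures are made up purely for illustration):

```python
# A known, directional distortion is actionable information: subtract it out.
# Once it is subtracted, no directional information remains.
raw_estimate = 8.0              # my own rating of my post's relevance (made-up number)
expected_self_inflation = 1.5   # known average upward distortion for one's own work (made-up)

calibrated_estimate = raw_estimate - expected_self_inflation
print(calibrated_estimate)  # 6.5 -- may still be off, but in an unknown direction,
                            # so no longer "biased" in the sense defined above
```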
(I guess the confusion/disagreement comes from the difference in our usage of the word “bias”. What do you mean by “biased”, such that you can remain biased about your own work even after taking that issue into account?)
(I wasn’t able to unpack the statement “That is, the colloquial meaning is valid, the lesswrong/OvercomingBias connotations are misleading, but it is in fact technically true.”; that is, I don’t know what specifically you refer to by “colloquial meaning” and “LW/OB connotations”.)
Wait, what did you mean by “I don’t” in the previous comment then? I understood that comment as confirming that you don’t know that you are biased, but in this comment you say “I am, after all, biased about my own work”.
“I cannot reliably state the nature or direction of whatever biases I may have. Even if I were entirely confident regarding the bias, I should, and in fact do, expect others to bear that potential bias in mind.”
I’ve just finished reading your post. Basically, what it says is: if I care about reality, I should care about all future branches, not just the ones where I’m alive (or have achieved some desired result, like a million dollars). Okay, I get that. I do care about all future branches (well, the ones I can affect, anyway). But here’s the thing: I care even more about the first-person mental states that I will actually be/experience.
Let’s say that a version of me will be tortured in branch A, while another version of me will be sipping his coffee in branch B. From an outside perspective, it’s irrelevant (meaningless, even) which version of me gets tortured; but if ‘I’ ‘end up’ in branch A, I’ll care a whole lot.
So yeah, if I don’t sign up for cryonics and if Aubrey de Grey and Eliezer slack off too much, I expect to die, in the same sense that I don’t expect to win the lottery. I also expect to actually have the first-person experience of dying over the course of millennia. And I care about both of these things, but in different ways. Is there a contradiction here? I don’t think there is.
The two senses of “care” are different, and it’s dangerous to confuse them. (I’m going to ignore the psychological aspects of their role and will talk only about their consequentialist role.) The first is relevant to the decisions that affect whether you die and what other events happen in those worlds: you have to care about the event of dying, and about the worlds where that happens, in order to plan the shape of the events in those worlds, including avoidance of death. The second sense of “caring” is relevant to giving up, to planning for the event of not dying: there you no longer control the worlds where you died, and so there is no point in taking them into account in your planning (within that hypothetical).
Caring about the futures where you survive is an optimization trick, and its applicability depends on the following considerations: (1) the probability of survival, hence the relative importance of planning for survival as opposed to other possibilities; (2) the marginal value of planning further for the general case, taking the worlds where you don’t survive into account; (3) the marginal value of planning further for the special case of survival. If, as is the case with quantum immortality, the probability of survival is too low, it isn’t worth your thought to work on the situation where you survive; you should instead worry about the general case. Only once you get into an improbable quantum-immortality situation (i.e. survive) should you start caring about it (since at that point you do lose control over the general situation), and not before.
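A rough sketch of those three considerations as an expected-value comparison; every number below is an assumption chosen only to illustrate the structure:

```python
# Should the next unit of planning effort go to the general case
# (which includes the worlds where you die) or to the survival case?
p_survival = 1e-30                 # (1) measure of the survival branch (assumed tiny)
marginal_value_general = 1.0       # (2) value of more planning for the general case (assumed)
marginal_value_if_survived = 50.0  # (3) value of more planning for the survival case (assumed)

ev_general = marginal_value_general
ev_survival_planning = p_survival * marginal_value_if_survived

print("plan for:", "general case" if ev_general >= ev_survival_planning else "survival case")
# -> "general case": the survival case only starts to deserve attention
# once you actually find yourself in it.
```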
No. Don’t count branches; aggregate the amplitude of the branches in question. We should expect to die
Objectively or subjectively? If the objective measure of the branches is very low, you could round that off to “expect to die”… from someone else’s perspective. From your perspective, even if there is one low-probability branch where you continue, you can be sure to subjectively experience it, since there is no “you” to experience anything in the high-probability branches.
But really it’s too firm a conclusion to expect to live.
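One way to put numbers on the objective/subjective split discussed above (the branch measures are toy assumptions):

```python
# Objective vs. subjective expectation, with toy branch measures.
measure_alive = 1e-30                # assumed measure of the surviving branch
measure_dead = 1.0 - measure_alive   # everything else

p_alive_objective = measure_alive / (measure_alive + measure_dead)  # ~1e-30: "expect to die"
p_alive_given_experiencer = measure_alive / measure_alive           # 1.0: every experienced
                                                                    # moment is a surviving one
print(p_alive_objective, p_alive_given_experiencer)
# The two numbers answer different questions, which is part of why neither
# "expect to die" nor "expect to live" is a comfortable conclusion here.
```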
There are no facts about MWI and QI, because they rely on questions about (1) probability, (2) consciousness, and (3) personal identity that we don’t have good answers to.
Yes, Quantum Immortality is “real”, as far as it goes. The problem is that it is inappropriately named and leads to inappropriate conclusions by misusing non-quantum intuitions. So yes, if you plan to put yourself in a 50% quantum death-box and keep doing so indefinitely, you can expect there to be a branch in which you remain alive through 100 iterations. The mistake is to consider this intuitively closer to “immortality” than to “almost entirely dead”.
As far as I know, I am already “almost entirely dead” in all possible worlds, so playing with the scant epsilon of universes I exist in doesn’t seem like too much of a problem. If there were a way to narrow my measure of existence down to only a single universe in which only the very best things ever happened, it seems like that would have the highest utility of any solution. If such a best universe exists, it is very, very unlikely. My expected value from its existence is consequently very, very small unless I can prevent my experience in lesser universes.
The questions are twofold: can quantum suicide select such a best universe (is there a cause of the very best things happening that involves continually trying to kill myself? it seems contradictory), and, if so, is it the most likely or quickest way to experience such a universe? Clearly an even better goal would be a future where all universes are the best possible universe, but it does not seem likely that we have the ability to accomplish that. I lack a sufficient understanding of quantum mechanics (and of the real territory) to answer these questions.
No. Don’t count branches; aggregate the amplitude of the branches in question. We should expect to die. There happen to be an infinite (as far as we know) number of progressively more ‘improbable’ branches in which we are degrading, but they still aggregate to something trivial. It is like Zeno’s paradox.
This is something of a paradox, because my dead branches won’t experience anything, leaving my experience only in the branches where I live. I should expect to experience near-death, but I should never expect to experience being dead. So a more useful expectation is what my future self will experience in 100 years, 1,000 years, or 1,000,000 years. Probably I will have died off in all but an epsilon of future possible universes, but what will that epsilon be like? Those are the branches that matter.
What about anthropics? Should we care more about the worlds where we exist?
EDIT: Wait, that’s nonsense.
I’d say “ridiculous”, not “nonsense”. An agent certainly could care about said worlds and not about others. Yvain has even expressed preferences along these lines himself and gone as far as to bite several related bullets. Yet while such preferences are logically coherent, I would usually think it is more likely that someone professing them is confused about what they want.
I would usually think it is more likely that someone professing them is confused about what they want.
Indeed. I was thinking about subjective probabilities, without noticing that what I expect to observe isn’t what I expect to happen when dealing with anthropics.
I was pretty tired …