But if you hold “you X” to be true merely because someone who feels like they’re you does X, without regard for how plentiful those someones are across the multiverse (or perhaps just that part of it that can be considered the future of the-you-I’m-talking-to, or something) then you’re going to have trouble preferring a 1% chance of death (or pain or poverty or whatever) to a 99% chance. I think this indicates that that’s a bad way to use the language.
I’m not sure I entirely get what you’re saying; but basically, yes, I can see trouble there.
But I think that, at its core, the point of QI is just to say that given MWI, conscious observers should expect to subjectively exist forever, and in that it differs from our normal intuition which is that without extra effort like signing up for cryonics, we should be pretty certain that we’ll die at some point and no longer exist after that. I’m not sure that all this talk about identity exactly hits the mark, although it’s relevant in the sense that I’m hopeful that somebody manages to show me why QI isn’t as bad as it seems to be.
QI or no QI, we should believe the following two things.
In every outcome I will ever get to experience, I will still be alive.
In the vast majority of outcomes 200 years from now (assuming no big medical breakthroughs etc.), measured in any terms that aren’t defined by my experiences, I will be dead.
What QI mostly seems to add to this is some (questionable) definitions of words like “you”, and really not much else.
I agree with qmotus that something is being added, not so much by QI, as by the many worlds interpretation. There is certainly a difference between “there will be only one outcome” and “all possible outcomes will happen.”
If we think all possible outcomes will happen, and if you assume that “200 years from now, I will still be alive,” is a possible outcome, it follows from your #1 that I will experience being alive 200 years from now. This isn’t a question of how we define “I”—it is true on any definition, given that the premises use the same definition. (This is not to deny that I will also be dead—that follows as well.)
If only one possible outcome will happen, then very likely 200 years from now, I will not experience being alive.
So if QI adds anything to MWI, it would be that “200 years from now, I will still be alive,” and the like, are possible outcomes.
There is certainly a difference between “there will be only one outcome” and “all possible outcomes will happen”
There’s no observable difference between them. In particular, “happen” here has to include “happen on branches inaccessible to us”, which means that a lot of the intuitions we’ve developed for how we should feel about something “happening” or not “happening” need to be treated with extreme caution.
If we think [...] it follows from your #1 that I will experience being alive 200 years from now. This isn’t a question of how we define “I”—it is true on any definition
OK. But the plausibility—even on MWI—of (1) “all possible outcomes will happen” plus (2) “it is possible that 200 years from now, I will still be alive” depends on either an unusual meaning for “will happen” or an unusual meaning for “I” (or of course both).
Maybe the right way to put it is this. MWI turns “ordinary” uncertainty (not knowing how the world is or will be) into indexical uncertainty (not knowing where in the world “I” will be). If you accept MWI, then you can take something like “X will happen” to mean “I will be in a branch where X happens” (in which case you’re only entitled to say it when X happens on all branches, or at least a good enough approximation to that) or to mean “there will be a branch where X happens” (in which case you shouldn’t feel about that in the same way as you feel about things definitely happening in the usual sense).
So: yes, on some branch I will experience being alive 200 years from now; this indeed follows from MWI. But to go from there to saying flatly “I will experience being alive 200 years from now” you need to be using “I will …” locutions in a very nonstandard manner. If your employer asks “Will you embezzle all our money?” and your intentions are honest, you will probably not answer “yes” even though presumably there’s some very low-measure portion of the multiverse where for some reason you set out to do so and succeed.
Whether that nonstandard usage is a matter of redefining “I” (so it applies equally to every possible continuation of present-you, however low its measure) or “will” (so it applies equally to every possible future, however low its measure) is up to you. But as soon as you say “I will experience being alive 200 years from now” you are speaking a different language from the one you speak when you say “I will not embezzle all your money”. The latter is still a useful thing to be able to say, and I suggest that it’s better not to redefine our language so that “I will” stops being usable to distinguish large-measure futures from tiny-measure futures.
if QI adds anything to MWI, it would be that [...] are possible outcomes.
Unless they were already possible outcomes without MWI, they are not possible outcomes with MWI (whether QI or no QI).
What MWI adds is that in a particular sense they are not merely possible outcomes but certain outcomes. But note that the thing that MWI makes (so far as we know) a certain outcome is not what we normally express by “in 200 years I will still be alive”.
You raise a valid point, which makes me think that our language may simply be inadequate to describe living in many worlds. Because both “yes” and “no” seem to me to be valid answers to the question “will you embezzle all our money”.
I still don’t think that it refutes QI, though. Take an observer at some moment: looking towards the future and ignoring the branches where they don’t exist, they will see that every branch leads to them living to be infinitely old; but not every branch leads to them embezzling their employer’s money.
But note that the thing that MWI makes (so far as we know) a certain outcome is not what we normally express by “in 200 years I will still be alive”.
Do you mean that it’s not certain because of the identity considerations presented, or that MWI doesn’t even say that it’s necessarily true in some branch?
I don’t think refuting is what QI needs. It is, actually, true (on MWI) that despite the train rushing towards you while you’re tied to the tracks, or your multiply-metastatic inoperable cancer, or whatever other horrors, there are teeny-tiny bits of wavefunction (and hence of reality) in which you somehow survive those horrors.
What QI says that isn’t just restating MWI is as much a matter of attitude to that fact as anything else.
I wasn’t claiming that QI and inevitable embezzlement are exactly analogous; the former involves an anthropic(ish) element absent from the latter.
Do you mean that it’s not certain because of the identity considerations presented, or that MWI doesn’t even say that it’s necessarily true in some branch?
The “so far as we know” was because of the possibility that there are catastrophes MWI gives you no way to survive (though I think that can only be true in so far as QM-as-presently-understood is incomplete or incorrect). The “not what we normally express by …” was because of what I’d been saying in the rest of my comment.
I see. But I fail to understand, then, how this is uninteresting, as you said in your original comment. Let’s say you find yourself on those train tracks: what do you expect to happen, then? What if a family member or other important person comes to see you for (what they believe to be) a final time? Do you simply say goodbye to them, fully aware that from your point of view, it won’t be a final time? What if we repeat this a hundred times in a row?
I have the following expectations in that situation:
In most possible futures, I will soon die. Of course I won’t experience that (though I will experience some of the process), but other people will find that the world goes on without me in it.
Therefore, most of my possible trajectories from here end very soon, in death.
In a tiny minority of possible futures, I somehow survive. The train stops more abruptly than I thought possible, or gets derailed before hitting me. My cancer abruptly and bizarrely goes into complete remission. Or, more oddly but not necessarily more improbably: I get most of the way towards death but something stops me partway. The train rips my limbs off and somehow my head and torso get flung away from the tracks, and someone finds me before I lose too much blood. The cancer gets most of the way towards killing me, at which point some eccentric billionaire decides to bribe everyone involved to get my head frozen, and it turns out that cryonics works better than I expect it to. Etc.
I suspect you will want to say something like: “OK, very good, but what do you expect to experience?” but I think I have told you everything there is to say. I expect that a week from now (in our hypothetical about-to-die situation) all that remains of “my” measure will be in situations where I had an extraordinarily narrow escape from death. That doesn’t seem to me like enough reason to say, e.g., that “I expect to survive”.
Do you simply say goodbye to them [...]?
Of course. From my present point of view it almost certainly will be a final time. From the point of view of those ridiculously lucky versions of me that somehow survive it won’t be, but that’s no different from the fact that (MWI or no, QI or no) I might somehow survive anyway.
If we repeat this several times in a row, then actually my update isn’t so much in the direction of QI (which I think has zero actual factual content; it’s just a matter of definitions and attitudes) as in the direction of weird theories in which someone or something is deliberately keeping me alive. Because if I have just had ten successive one-in-a-billion-billion escapes, hypotheses like “there is a god after all, and for some reason it has plans that involve my survival” start to be less improbable than “I just got repeatedly and outrageously lucky”.
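To make that update concrete, here is a minimal sketch with made-up numbers: a tiny prior on “something is deliberately keeping me alive” versus surviving ten one-in-a-billion-billion events by pure chance. Every number below is an assumption chosen only for illustration.

```python
# Toy Bayesian update: "deliberately kept alive" vs "pure luck" after ten
# independent one-in-a-billion-billion escapes.  All priors and likelihoods
# below are illustrative assumptions, not claims about real probabilities.

prior_preserved = 1e-12        # assumed prior for "something keeps me alive"
prior_luck = 1.0 - prior_preserved

p_escape_by_luck = 1e-18       # one-in-a-billion-billion per escape
n_escapes = 10

likelihood_luck = p_escape_by_luck ** n_escapes   # survive all ten by chance: 1e-180
likelihood_preserved = 1.0                        # survival guaranteed on this hypothesis

posterior_preserved = (prior_preserved * likelihood_preserved) / (
    prior_preserved * likelihood_preserved + prior_luck * likelihood_luck
)
print(f"posterior for 'deliberately kept alive': {posterior_preserved}")
# Even with an absurdly small prior, the posterior is essentially 1,
# because 1e-180 is vastly smaller than 1e-12.
```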
I think that this attitude to QI is wrong, because the measure should be renormalized if the number of observers changes.
We can’t count the worlds where I do not exist as worlds that influence my measure (or if we do, we have to add all the other worlds where I do not exist, which are infinite in number, and then my chances of existing in any next moment are almost zero).
The number of copies of “me” will not change in the case of embezzlement. But if I die in some branches, it does change. It may be a little foggy in the case of quantum immortality, but with many-worlds immortality it may be clearer.
For example, a million copies of a program are trying to calculate something inside an actual computer. The program’s goal system says it should calculate, say, pi to 10 digits of accuracy. But it knows that most copies of the program will be killed soon, before they are able to finish the calculation. Should it stop, knowing that with overwhelming probability it will be killed in the next moment? No, because if it stops, all its other copies stop too. So it must behave as if it will survive.
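A minimal sketch of that intuition, under the assumption (mine, for illustration) that all copies run the same decision procedure and each copy independently escapes being killed with some small probability:

```python
import random

# Toy model: a million identical copies of a program follow the same decision
# policy; each copy is killed before finishing with probability p_killed.
# Because the copies are identical, whatever one copy decides, all decide.
# The numbers and the killing mechanism are illustrative assumptions.

def finished_copies(keep_going, n_copies=1_000_000, p_killed=0.999, seed=0):
    if not keep_going:
        return 0  # every copy stops, so no copy ever completes the calculation
    rng = random.Random(seed)
    # Copies that happen to escape being killed finish the calculation.
    return sum(rng.random() > p_killed for _ in range(n_copies))

print("policy = stop:      ", finished_copies(False))
print("policy = keep going:", finished_copies(True))
# "Keep going" yields roughly a thousand finished calculations out of a
# million copies; "stop" yields none.  The policy chosen as if the copy will
# survive does better, even though each individual copy almost certainly dies.
```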
My point is that, from a decision-theory point of view, a rational agent should behave as if QI works, and plan his actions and expectations accordingly. He should also expect that all his future experiences will be supportive of QI.
I will try to construct a clearer example. Suppose I have to survive many rounds of Russian roulette, with a 1-in-10 chance of survival in each round. The only thing I can change is the following: after each round I will be asked whether I believe in QI, and I will be punished with an electric shock if I say “NO”. If I say “YES”, I will be punished twice in that round, but never again in any round.
If the agent believes in QI, it is rational for him to say “YES” at the beginning, take two shocks, and never be shocked again.
If he “believes in measure”, then it is rational for him to say “NO”: one expected punishment at the beginning, 0.1 expected punishments in the next round, 0.01 in the third, and so on, for a total of about 1.111, which is smaller than 2.
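Here is a short sketch of that arithmetic, comparing the two policies from the measure-weighted view and from the survival-conditioned (QI) view; the round cap is my assumption, only to keep the sums finite:

```python
# Expected electric shocks in the Russian-roulette game above.  The agent is
# asked once at the start and again after each round he survives; p_survive
# is the per-round survival chance.

p_survive = 0.1
max_rounds = 50

# Measure-weighted view: weight each round's shock by the probability of
# still being alive to receive it.
expected_shocks_no = sum(p_survive ** k for k in range(max_rounds))   # ~1.111
expected_shocks_yes = 2.0                                             # two shocks, once

# QI / survival-conditioned view: condition on always being alive to answer.
conditioned_shocks_no = 1.0 * max_rounds   # one shock in every round
conditioned_shocks_yes = 2.0

print(f"measure view: NO = {expected_shocks_no:.3f}, YES = {expected_shocks_yes}")
print(f"QI view:      NO = {conditioned_shocks_no}, YES = {conditioned_shocks_yes}")
# On the measure view, "NO" is cheaper (about 1.111 < 2); conditioning on
# survival, "YES" is cheaper, and the gap grows without bound with more rounds.
```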
My point here is that after several rounds, most people (if they were such agents) would change their decision and say “YES”.
In the case of your train example, it means it would be rational for you to spend part of your time not on speaking with relatives, but on planning your actions after you survive in the most probable way (the train derails).
the measure should be renormalized if the number of observers changes
I’m pretty sure I disagree very strongly with this, but I’m not absolutely certain I understand what you’re proposing so I could be wrong.
from a decision-theory point of view a rational agent should behave as if QI works
Not quite, I think. Aren’t you implicitly assuming that the rational agent doesn’t care what happens on any branch where they cease to exist? Plenty of (otherwise?) rational agents do care. If you give me a choice between a world where I get an extra piece of chocolate now but my family get tortured for a year after I die, and an otherwise identical world where I don’t get the chocolate and they don’t get the torture, I pick the second without hesitation.
Can we transpose something like this to your example of the computer? I think so, though it gets a little silly. Suppose the program actually cares about the welfare of its programmer, and discovers that while it’s running it’s costing the programmer a lot of money. Then maybe it should stop, on the grounds that the cost of those millions of futile runs outweighs the benefit of the one that will complete and reveal the tenth decimal place of pi.
(Of course the actual right decision depends on the relative sizes of the utilities and the probabilities involved. So it is with QI.)
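A toy version of that trade-off, with every number being a made-up assumption, might look like this:

```python
# Toy expected-utility comparison for the pi-calculating program that also
# cares about its programmer's money.  Every value here is a made-up
# illustrative assumption.

n_copies = 1_000_000          # copies started
p_copy_finishes = 0.001       # chance any given copy survives to finish
cost_per_run = 0.01           # cost to the programmer of one run
value_of_result = 500.0       # value of learning the tenth digit of pi

p_at_least_one_finishes = 1 - (1 - p_copy_finishes) ** n_copies   # ~1 here
utility_keep_going = (value_of_result * p_at_least_one_finishes
                      - cost_per_run * n_copies)
utility_stop = 0.0

print(f"keep going: {utility_keep_going:.1f}   stop: {utility_stop:.1f}")
# With these numbers, stopping wins (500 - 10000 < 0); make the result
# valuable enough, or the runs cheap enough, and the answer flips.
```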
After surviving enough rounds of your Russian Roulette game, I will (as I said above) start to take seriously the possibility that there’s some bias in the results. (The hypotheses here wouldn’t need to be as extravagant as in the case of surviving obviously-fatal diseases or onrushing trains.) That would make it rational to say yes to the QI question (at least as far as avoiding shocks goes; I also have a preference for not lying, which would make it difficult for me to give either a simple yes or a simple no as answer).
I agree that in the train situation it would be reasonable to use a bit of time to decide what to do if the train derails. I would feel no inclination to spend any time deciding what to do if the Hand of God plucks me from its path or a series of quantum fluctuations makes its atoms zip off one by one in unexpected directions.
It looks like you suppose that there are branches where the agent ceases to exist, like dead-end branches. In those branches he has zero experience after death.
But another description of this situation is that there are no dead ends, because branching happens at every point, and so we should count only the cells of space-time where a future I exists.
For example, I do not exist on Mars or on any other Solar System body (except Earth). That doesn’t mean that I died on Mars. Mars is just an empty cell in our calculation of future me, which we should not count. The same is true of branches where I was killed.
Renormalization on observer number is used in other anthropic discussions, like the anthropic principle and Sleeping Beauty.
There are still some open questions there, like how we could measure identical observers.
If an agent cares about his family, yes. He should not care about his QI. (But if he really believes in MWI and modal realism, he may also conclude that he can’t do anything to change their fate.)
QI very quickly raises the chances that I am in a strange universe where God exists (or that I am in a simulation which also models an afterlife). So finding myself in one would be evidence that QI worked.
I will try a completely different explanation. Suppose I die, but in the future I am resurrected by a strong AI as an exact copy of me. If I think that personal identity is information, I should be happy about it.
Now let us assume that 10 copies of me exist on ten planets and all of them die, all in the same way. The same future AI may think that it is enough to create only one copy of me to resurrect all the dead copies. Now it is more similar to QI.
If we have many copies of a compact disc with Windows 95 on it and most of them are destroyed, it doesn’t matter, so long as one disc still exists.
So, first of all, if only one copy exists then any given misfortune is more likely to wipe out every last one of me than if ten copies exist. Aside from that, I think it’s correct that I shouldn’t much care now how many of me there are—i.e., what measure worlds like the one I’m in have relative to some predecessor.
But there’s a time-asymmetry here: I can still care (and do) about the measure of future worlds with me in them, relative to the one I’m in now. (Because I can influence “successors” of where-I-am-now but not “predecessors”. The point of caring about things is to help you influence them.)
It looks like we are close to the conclusion that QI mainly marks a difference between “egocentric” and “altruistic” goal systems.
The most interesting question is: where is the border between them? If I like my hand, is it part of me or of the external world?
There is also an interesting analogy with virus behaviour. A virus seems to be interested in the existence of its remote copies, with which it may have no causal connection, because they will continue to replicate. (Altruistic genes do the same, if they exist at all.) So egoistic behaviour here is also altruistic toward the other copies of the virus.
I suspect you will want to say something like: “OK, very good, but what do you expect to experience?” but I think I have told you everything there is to say.
I’m tempted to, but I guess you have tried to explain your position as well as you can. I see what you are trying to say, but I still find it quite incomprehensible how that attitude can be adopted in practice. On the other hand, I feel like it (or somehow getting rid of the idea of continuity of consciousness, as Yvain has suggested, which I have no idea how to do) is quite essential for not being as anxious and horrified about quantum/big world immortality as I am.
But unless you are already absolutely certain of your position in this discussion, you should also update toward, “I was mistaken and QI has factual content and is more likely to be true than I thought it was.”
Probably. But note that according to my present understanding, from my outrageously-surviving self’s vantage point all my recent weird experiences are exactly what I should expect—QI or no QI, MWI or no MWI—merely conditioning on my still being there to observe anything.
I would say that QI (actually, MWI) adds a third thing, which is that “I will experience every outcome where I’m alive”, but it seems that I’m not able to communicate my points very effectively here.
How does MWI do that? On the face of it, MWI says nothing about experience, so how do you get that third thing from MWI? (I think you’ll need to do it by adding questionable word definitions, assumptions about personal identity, etc. But I’m willing to be shown I’m wrong!)
I think this post by entirelyuseless answers your question quite well, so if you’re still puzzled by this, we can continue there. Also, I don’t see how QI depends on any additional weird assumptions. After all, you’re using the word “experience” in your list of two points without defining it exactly. I don’t see why it’s necessary to define it either: a conscious experience is most likely simply a computational thing with a physical basis, and MWI and these other big world scenarios essentially say that all physical states (that are not prohibited by the laws of physics) happen somewhere.
As you can see, I’ve replied at some length to entirelyuseless’s comment.