I see. But I fail to understand, then, how this is uninteresting, as you said in your original comment. Let’s say you find yourself on those train tracks: what do you expect to happen, then? What if a family member or other important person comes to see you for (what they believe to be) a final time? Do you simply say goodbye to them, fully aware that from your point of view, it won’t be a final time? What if we repeat this a hundred times in a row?
I have the following expectations in that situation:
In most possible futures, I will soon die. Of course I won’t experience that (though I will experience some of the process), but other people will find that the world goes on without me in it.
Therefore, most of my possible trajectories from here end very soon, in death.
In a tiny minority of possible futures, I somehow survive. The train stops more abruptly than I thought possible, or gets derailed before hitting me. My cancer abruptly and bizarrely goes into complete remission. Or, more oddly but not necessarily more improbably: I get most of the way towards death but something stops me partway. The train rips my limbs off and somehow my head and torso get flung away from the tracks, and someone finds me before I lose too much blood. The cancer gets most of the way towards killing me, at which point some eccentric billionaire decides to bribe everyone involved to get my head frozen, and it turns out that cryonics works better than I expect it to. Etc.
I suspect you will want to say something like: “OK, very good, but what do you expect to experience?” but I think I have told you everything there is to say. I expect that a week from now (in our hypothetical about-to-die situation) all that remains of “my” measure will be in situations where I had an extraordinarily narrow escape from death. That doesn’t seem to me like enough reason to say, e.g., that “I expect to survive”.
Do you simply say goodbye to them [...]?
Of course. From my present point of view it almost certainly will be a final time. From the point of view of those ridiculously lucky versions of me that somehow survive it won’t be, but that’s no different from the fact that (MWI or no, QI or no) I might somehow survive anyway.
If we repeat this several times in a row, then actually my update isn’t so much in the direction of QI (which I think has zero actual factual content; it’s just a matter of definitions and attitudes) as in the direction of weird theories in which someone or something is deliberately keeping me alive. Because if I have just had ten successive one-in-a-billion-billion escapes, hypotheses like “there is a god after all, and for some reason it has plans that involve my survival” start to be less improbable than “I just got repeatedly and outrageously lucky”.
I think that this attitude to QI is wrong, because the measure should be renormalized if the number of observers changes.
We can’t count the worlds where I do not exist as worlds that influence my measure (or, if we do, we have to add all the other worlds where I do not exist, which are infinite in number, so my chance of existing at any next moment would be almost zero).
The number of “me”s does not change in ordinary, non-lethal branching, but if I die in some branches, it does. This may be a little foggy in the case of quantum immortality, but it becomes clearer if we use many-worlds immortality.
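To illustrate what I mean by renormalizing the measure over only those branches where I still exist, here is a toy sketch (the branch names and weights are made-up numbers, purely for illustration):

```python
# Toy sketch: renormalizing a measure over only those branches
# in which the observer still exists.

branches = {
    # name: (weight, observer_exists)
    "train hits me, I die":     (0.999999, False),
    "train derails, I survive": (0.000001, True),
}

total_weight = sum(w for w, _ in branches.values())
surviving = {name: w for name, (w, exists) in branches.items() if exists}

# Unconditional measure of the surviving branch: tiny.
print(surviving["train derails, I survive"] / total_weight)              # ~1e-6

# Measure renormalized over branches where "I" exist at all: it is everything.
print(surviving["train derails, I survive"] / sum(surviving.values()))   # 1.0
```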
For example, a million copies of a program are trying to calculate something inside an actual computer. The program’s goal system says that it should calculate, say, pi to 10 digits of accuracy. But it knows that most copies of the program will be killed soon, before they are able to finish the calculation. Should it stop, knowing that it will, with overwhelming probability, be killed in the next moment? No, because if it stops, all its other copies stop too. So it must behave as if it will survive.
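A rough sketch of that decision, under my assumptions (a million identical copies, each killed with some overwhelming probability before finishing; the exact numbers are only illustrative):

```python
# All copies run the same deterministic policy, so "stop" means every
# copy stops, and "continue" means every copy keeps going and finishes
# only if it happens not to be killed first.

n_copies = 1_000_000
p_killed = 0.999999   # assumed chance a given copy is killed before finishing

expected_finishers_if_continue = n_copies * (1 - p_killed)   # about 1
expected_finishers_if_stop = 0

# The goal (pi computed to 10 digits somewhere) is served only by continuing,
# so each copy should act as if it will be among the survivors.
print(expected_finishers_if_continue, expected_finishers_if_stop)
```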
My point is that, from a decision-theory point of view, a rational agent should behave as if QI works, and plan his actions and expectations accordingly. He should also expect that all his future experiences will be supportive of QI.
I will try to construct a clearer example. Suppose I have to survive many rounds of Russian roulette, with a 1 in 10 chance of survival in each round. The only thing I can change about it is the following: after each round I will be asked whether I believe in QI, and will be punished with an electric shock if I say “NO”. If I say “YES”, I will be punished twice in that round, but never again in any round.
If the agent believes in QI, it is rational for him to say “YES” at the beginning, get two shocks, and never get another.
If he “believes in measure”, then it is rational for him to say “NO”: he expects one punishment in the first round, 0.1 of a punishment in the next round, 0.01 in the third, and so on, for an expected total of about 1.111, which is smaller than 2.
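Here is a small sketch of the arithmetic behind the two policies, assuming the shocks simply add up linearly and using the numbers above:

```python
# Expected total shocks under the two answering policies, weighting each
# round by the probability of still being alive to face it.

p_survive = 0.1     # chance of surviving any single round
n_rounds = 50       # enough rounds for the series to converge

# Say "YES" once at the start: two shocks, then none ever again.
shocks_if_yes = 2.0

# Say "NO" every round: one shock per round, weighted by the chance of
# having survived all the previous rounds: 1 + 0.1 + 0.01 + ... = 10/9.
shocks_if_no = sum(p_survive ** k for k in range(n_rounds))

print(shocks_if_yes)   # 2.0
print(shocks_if_no)    # ~1.111
```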
My point here is that after several rounds most people (if they were such agents) would change their decision and say “YES”.
In the case of your train example, it means that it would be rational for you to use part of your time not for speaking with relatives, but for planning your actions after surviving in the most probable way (the train derails).
the measure should be renormalized if the number of observers changes
I’m pretty sure I disagree very strongly with this, but I’m not absolutely certain I understand what you’re proposing so I could be wrong.
from a decision-theory point of view, a rational agent should behave as if QI works
Not quite, I think. Aren’t you implicitly assuming that the rational agent doesn’t care what happens on any branch where they cease to exist? Plenty of (otherwise?) rational agents do care. If you give me a choice between a world where I get an extra piece of chocolate now but my family get tortured for a year after I die, and an otherwise identical world where I don’t get the chocolate and they don’t get the torture, I pick the second without hesitation.
Can we transpose something like this to your example of the computer? I think so, though it gets a little silly. Suppose the program actually cares about the welfare of its programmer, and discovers that while it’s running it’s costing the programmer a lot of money. Then maybe it should stop, on the grounds that the cost of those millions of futile runs outweighs the benefit of the one that will complete and reveal the tenth decimal place of pi.
(Of course the actual right decision depends on the relative sizes of the utilities and the probabilities involved. So it is with QI.)
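For instance, with entirely made-up numbers for the cost of a run and the value of the finished calculation, the comparison might come out like this:

```python
# Expected utility of "all copies continue" versus "all copies stop",
# once the programmer's costs are counted alongside the result.

n_copies = 1_000_000
p_killed = 0.999999
cost_per_run = 0.01        # assumed cost to the programmer of each run
value_of_result = 100.0    # assumed value of pi-to-10-digits being computed once

expected_finishers = n_copies * (1 - p_killed)                       # about 1
utility_continue = expected_finishers * value_of_result - n_copies * cost_per_run
utility_stop = 0.0

print(utility_continue)   # roughly 100 - 10000 = -9900: here stopping wins
print(utility_stop)
```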
After surviving enough rounds of your Russian Roulette game, I will (as I said above) start to take seriously the possibility that there’s some bias in the results. (The hypotheses here wouldn’t need to be as extravagant as in the case of surviving obviously-fatal diseases or onrushing trains.) That would make it rational to say yes to the QI question (at least as far as avoiding shocks goes; I also have a preference for not lying, which would make it difficult for me to give either a simple yes or a simple no as answer).
I agree that in the train situation it would be reasonable to use a bit of time to decide what to do if the train derails. I would feel no inclination to spend any time deciding what to do if the Hand of God plucks me from its path or a series of quantum fluctuations makes its atoms zip off one by one in unexpected directions.
It looks like you suppose that there are branches where the agent ceases to exist, like dead-end branches. In these branches he has zero experience after death.
But another description of this situation is that there are no dead ends, because branching happens at every point, and so we should count only the cells of space-time where a future I exists.
For example, I do not exist on Mars or on any other Solar system body (except Earth). That doesn’t mean that I died on Mars. Mars is just an empty cell in our calculation of future me, which we should not count. The same is true of branches where I was killed.
Renormalization over the number of observers is used in other discussions of anthropics, like the anthropic principle and Sleeping Beauty.
There are still some open questions there, like how we could measure identical observers.
If an agent cares about his family, yes: he should not care about his QI. (But if he really believes in MWI and modal realism, he may also conclude that he can’t do anything to change their fate.)
QI very quickly raises the chances that I am in a strange universe where God exists (or that I am in a simulation which also models an afterlife). So finding myself in such a universe would be evidence that QI worked.
I will try a completely different explanation. Suppose I die, but in the future I will be resurrected by a strong AI as an exact copy of me. If I think that personal identity is information, I should be happy about it.
Now let us assume that 10 copies of me exist on ten planets and all of them die, all in the same way. The same future AI may think that it is enough to create only one copy of me to resurrect all the dead copies. Now it is more similar to QI.
If we have many copies of a compact disc with Windows 95 on it and most of them are destroyed, it doesn’t matter, as long as one disc still exists.
So, first of all, if only one copy exists then any given misfortune is more likely to wipe out every last one of me than if ten copies exist. Aside from that, I think it’s correct that I shouldn’t much care now how many of me there are—i.e., what measure worlds like the one I’m in have relative to some predecessor.
But there’s a time-asymmetry here: I can still care (and do) about the measure of future worlds with me in them, relative to the one I’m in now. (Because I can influence “successors” of where-I-am-now but not “predecessors”. The point of caring about things is to help you influence them.)
It looks like we are close to the conclusion that QI mainly draws a distinction between “egocentric” and “altruistic” goal systems.
The most interesting question is: where is the border between them? If I like my hand, is it part of me or part of the external world?
There is also an interesting analogy with virus behaviour. A virus seems to be interested in the existence of its remote copies, with which it may have no causal connection, because they will continue to replicate. (Altruistic genes do the same, if they exist at all.) So egoistic behaviour here is altruistic towards other copies of the virus.
I suspect you will want to say something like: “OK, very good, but what do you expect to experience?” but I think I have told you everything there is to say.
I’m tempted to, but I guess you have tried to explain your position as well as you can. I see what you are trying to say, but I still find it quite incomprehensible how that attitude can be adopted in practice. On the other hand, I feel like it (or somehow getting rid of the idea of continuity of consciousness, as Yvain has suggested, which I have no idea how to do) is quite essential for not being as anxious and horrified about quantum/big world immortality as I am.
But unless you are already absolutely certain of your position in this discussion, you should also update toward, “I was mistaken and QI has factual content and is more likely to be true than I thought it was.”
Probably. But note that according to my present understanding, from my outrageously-surviving self’s vantage point all my recent weird experiences are exactly what I should expect—QI or no QI, MWI or no MWI, merely conditioning on my still being there to observe anything.