I still don’t quite get your point, at least that of your second version. But I think quantum (or “big world”) immortality is simply the idea that as time goes on, from a subjective point of view we will always observe our own existence, which is a fairly straightforward implication of not just MWI, but of several other multiverse scenarios as well: the main point is introducing an anthropic aspect into considerations of probabilities. Now, there seems to be quite a bit of disagreement over whether this is relevant or interesting, with some people arguing that what we should care about is the measure of those worlds where we still exist, for example. I wonder if what you’re trying to say is something similar. Myself, I think this is a massive departure from normal secular thinking, where it is thought that, unless extraordinary measures (like cryonics) are taken to prevent it, at some point we will all simply die and never exist anymore.
I agree that the naive version you describe sort of exists, but I think most people who believe in QI don’t make decisions like that. Paying assassins to kill yourself if you have a below-average day doesn’t make much sense, for example, because it’s quite likely that you’ll just find yourself in a hospital. Actually, it quite possibly makes life-and-death situations worse, not safer.
Ah, interesting. You seem to be subscribing to a version that looks more like “We’re never going to observe ourselves dead, and at no point in the future will we be ‘really’ dead, where ‘really’ means with amplitude 0. This idea is big and important! It’s a massive departure from classical thinking. But because it’s big and complicated and important, I don’t have any particular way to operationalize it at the moment.”
I think you need to cash it out more. It’s important to attempt to say how quantum immortality actually changes how you make decisions.
I think I lack the philosophical toolkit to answer this with the level of formalism that I suppose you would like, but here are a few scenarios where I think it might make a difference nevertheless.
Let’s suppose you’re terminally sick and in considerable pain with a very slim chance of recovery. You also live in a country where euthanasia is legal. Given QI, do you take that option, and if so, what do you expect to happen? What if the method of assisted suicide that is available almost always works, but in the few cases it doesn’t, it causes a considerable amount of pain?
Another one: in a classical setting, it’s quite unlikely that I’ll live to be 100 years old: let’s say the chance is 1/1000. The likelihood is, therefore, small enough that I shouldn’t really care about it and instead live my life as if I will almost certainly be dead in about 80 years: spend all my earnings in that time, unless I want to leave an inheritance, and so forth.
But with QI, this 1/1000 objective chance instead becomes a 100% subjective chance that I’ll still be alive when I’m 100! So I should care a lot more about what’s inside that 1/1000: if most of the cases where I’m still alive have me as a bedridden, immobile living corpse, I should probably be quite worried about them. Now, it’s not altogether clear to me what I should do about it. Does signing up for cryonics make sense, for example? One might argue that it increases the likelihood that I’ll be alive and well instead. And the same applies whether the timeframe is 100, 1000 or 10000 years: in any case I should subjectively expect to exist, although not necessarily in a human form, of course. In a classical setting those scenarios are hardly worth thinking about.
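To make the conditioning I have in mind concrete, here is a toy calculation in Python; the 1/1000 figure and the healthy/bedridden split are numbers I have made up purely for illustration:

```python
# Toy illustration of objective vs. survival-conditioned ("subjective") probabilities.
# All numbers are invented for the sake of the example.

p_alive_at_100 = 1 / 1000           # objective chance of reaching 100
p_healthy_given_alive = 0.05        # of the surviving branches, fraction where I'm well
p_bedridden_given_alive = 0.95      # fraction where I'm a bedridden "living corpse"

# Classical view: both outcomes look negligible, so I mostly ignore them.
print("objective P(alive & healthy)   =", p_alive_at_100 * p_healthy_given_alive)
print("objective P(alive & bedridden) =", p_alive_at_100 * p_bedridden_given_alive)

# QI view: condition on still making observations at 100, i.e. divide by P(alive).
# The 1/1000 drops out entirely; only the split *within* the surviving branches matters.
print("subjective P(healthy | alive)   =", p_healthy_given_alive)
print("subjective P(bedridden | alive) =", p_bedridden_given_alive)
```

The point is just that the 1/1000 cancels out once you condition on still being around, so only the split within the surviving branches matters.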
Maybe I’m committing some sort of naive error here, but this sure does sound significant to me.
Thanks for the reply. How would you respond to the idea of hiring someone to kill you if you had a bad day? Perhaps you take a poison each morning, and they provide the antidote each evening if nothing too bad happened. This gives you a higher chance of having only good days.
You might say “but what about the costs of this scheme, or the chance that if it fails I will be injured or otherwise worse-off?” But if you say this, then you are also saying that it would be a good idea, if only the cost and the chance of failure were small!
On the other hand, maybe you bite the bullet. That’s fine, there’s no natural law against hiring people to kill you.
And this isn’t just an isolated apparently-bad consequence. If a decision-maker assumes that they’ll survive no matter what (conditioning on a future event), they can end up with very different choices than before, and the differences will look much more like “figure out how to ensure I die if I have a bad day” than “well, it makes investing in the stock market more prudent.”
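To illustrate with a toy expected-utility comparison (all the probabilities and utilities below are invented, and “survival-conditioned” just means renormalising over the branches where the agent still exists):

```python
# Toy comparison: score the "arrange to die on bad days" scheme with an ordinary
# expectation vs. an expectation conditioned on survival. All numbers are made up.

p_bad_day = 0.5        # chance today turns out badly
p_scheme_fails = 0.01  # chance the lethal arrangement fails and leaves me injured

u_good_day = 1.0
u_bad_day = -1.0       # utility of living through a bad day (no scheme)
u_dead = -5.0          # how an ordinary decision-maker scores the death branch
u_injured = -10.0      # failed attempt: injured, in hospital

# Without the scheme: just the ordinary mix of good and bad days.
baseline = (1 - p_bad_day) * u_good_day + p_bad_day * u_bad_day

# With the scheme, ordinary (unconditioned) expectation:
unconditioned = ((1 - p_bad_day) * u_good_day
                 + p_bad_day * ((1 - p_scheme_fails) * u_dead
                                + p_scheme_fails * u_injured))

# With the scheme, survival-conditioned expectation: keep only the branches where
# I still exist (good days, plus the rare failed attempts) and renormalise.
p_survive = (1 - p_bad_day) + p_bad_day * p_scheme_fails
conditioned = ((1 - p_bad_day) * u_good_day
               + p_bad_day * p_scheme_fails * u_injured) / p_survive

print("no scheme:                  %.2f" % baseline)       # about 0.0
print("scheme, ordinary EU:        %.2f" % unconditioned)  # about -2.0
print("scheme, survival-cond. EU:  %.2f" % conditioned)    # about +0.89
```

With ordinary expectations the scheme looks terrible; conditioned on survival it looks like an improvement over doing nothing, which is exactly the kind of flip I mean.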
And the no-natural-law thing cuts both ways. There’s also no law against just making decisions as usual because arranging to kill yourself sounds bad.
I’m not sure. Another thing to think about would be what my SO and relatives think, so at least I probably wouldn’t do it unless the day is truly exceptionally lousy. But do I see a problem here, in principle? Maybe not, but I’m not sure.
Where do we disagree, exactly? What would you do in the euthanasia situation I described? Do you think that because hiring assassins doesn’t make sense to you, the living to be 100 or 1000 scenario isn’t interesting, either?
QI makes cryonics more sensible, because it raises the share of the worlds where I am 120 and not terminally ill compared to the share of the worlds where I am 120 and terminally ill (see the rough sketch after these points).
QI doesn’t matter much in “altruistic” decision-making systems, where what I care about is the measure of other people’s wellbeing (though note that if I inform them about QI they may be less worried about death, so even here it could play a role).
QI is more important if what I value most is my own future pleasures and sufferings.
Anyway, QI is more about facts concerning future observations than about decision theories. QI does not tell us which DT is better, but it could put pressure on the agent to choose a more egoistic DT, since that one will be rewarded more.
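As a rough sketch of the measure shift I mean in the cryonics point above (all numbers invented, just to show the direction of the effect):

```python
# Toy illustration: how signing up for cryonics might shift the *conditional*
# split among the branches where I'm still around at 120. Numbers are made up.

def conditional_split(p_healthy, p_ill):
    """Renormalise over the surviving branches only (the QI-relevant quantity)."""
    total = p_healthy + p_ill
    return p_healthy / total, p_ill / total

# Without cryonics: almost all of the (tiny) surviving measure is "barely alive".
no_cryo = conditional_split(p_healthy=0.0001, p_ill=0.0009)

# With cryonics: some extra measure lands in "revived and healthy" branches.
with_cryo = conditional_split(p_healthy=0.0001 + 0.0010, p_ill=0.0009)

print("P(healthy | alive at 120), no cryonics:   %.2f" % no_cryo[0])    # 0.10
print("P(healthy | alive at 120), with cryonics: %.2f" % with_cryo[0])  # 0.55
```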
I’m not altogether certain that it will make them less worried.