I think I lack the philosophical toolkit to answer this with the level of formalism that I suppose you would like, but here are a few scenarios where I think it might make a difference nevertheless.
Let’s suppose you’re terminally sick and in considerable pain with a very slim chance of recovery. You also live in a country where euthanasia is legal. Given QI, do you take that option, and if so, what do you expect to happen? What if the method of assisted suicide that is available almost always works, but in the few cases it doesn’t, it causes a considerable amount of pain?
Another one: in a classical setting, it’s quite unlikely that I’ll live to be 100 years old: let’s say the chance is 1/1000. The likelihood is, therefore, small enough that I shouldn’t really care about it and should instead live my life as if I will almost certainly be dead in about 80 years: spend all my earnings in that time, unless I want to leave an inheritance, and so forth.
But with QI, this 1/1000 objective chance instead becomes a 100% subjective chance that I’ll still be alive when I’m 100! So I should care a lot more about what’s inside that 1/1000: if most of the cases where I’m still alive have me as a bedridden, immobile living corpse, I should probably be quite worried about them. Now, it’s not altogether clear to me what I should do about it. Does signing up for cryonics make sense, for example? One might argue that it increases the likelihood that I’ll be alive and well instead. And the same applies whether the timeframe is 100, 1,000, or 10,000 years: in any case I should subjectively expect to exist, although not necessarily in human form, of course. In a classical setting those scenarios are hardly worth thinking about.
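To put rough numbers on that (the 90% figure is made up purely for illustration): classically, if P(alive at 100) = 1/1000 and 90% of those survival cases have me bedridden, then P(bedridden at 100) = 0.9 × 0.001, i.e. about 0.1%, which is negligible. But if QI tells me to condition on still being alive, the quantity that matters is P(bedridden | alive at 100) = 0.9, which is anything but negligible.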
Maybe I’m committing some sort of naive error here, but this sure does sound significant to me.
Thanks for the reply. How would you respond to the idea of hiring someone to kill you if you had a bad day? Perhaps you take a poison each morning, and they provide the antidote each evening if nothing too bad happened. This gives you a higher chance of having only good days.
You might say “but what about the costs of this scheme, or the chance that if it fails I will be injured or otherwise worse-off?” But if you say this, then you are also saying that it would be a good idea, if only the cost and the chance of failure were small!
On the other hand, maybe you bite the bullet. That’s fine, there’s no natural law against hiring people to kill you.
And this isn’t just an isolated apparently-bad consequence. If a decision-maker assumes they’ll survive no matter what (i.e., conditions on a future event), they can end up making very different choices than they otherwise would, and the differences will look much more like “figure out how to ensure I die if I have a bad day” than “well, it makes investing in the stock market more prudent.”
And the no-natural-law point cuts both ways: there’s also no law against continuing to make decisions the normal way, on the grounds that arranging to kill yourself sounds bad.
I’m not sure. Another thing to think about would be what my SO and relatives would think, so at the very least I probably wouldn’t do it unless the day were truly, exceptionally lousy. But do I see a problem here, in principle? Maybe not, but I’m not sure.
Where do we disagree, exactly? What would you do in the euthanasia situation I described? And do you think that because hiring assassins doesn’t make sense to you, the living-to-100 (or 1,000) scenario isn’t interesting either?