Paradoxically, if a person doesn’t sign up for cryonics and expresses the desire not to be resurrected by other means, say, resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn’t care about her decisions. Many of these worlds are s-risk worlds.
Thus, by not signing up for cryonics she increases the share of her futures in which she will be hostilely resurrected, relative to the total share of her futures.
>Thus, by not signing up for cryonics she increases the share of her futures in which she will be hostilely resurrected, relative to the total share of her futures.
But she decreases the share of her futures in which she will be resurrected at all, some of which contain hostile resurrection, and therefore she really does decrease the total share of her futures in which she will be hostilely resurrected. She just won’t consciously experience the futures in which she doesn’t exist, which, from the perspective of anyone who assigns suffering negative utility, is better than suffering.
If we assume that the total share matters, we get absurd opportunities to manipulate that share: by selectively forgetting things, we can merge with our copies in different worlds and thereby increase our total share. I tried to explain this idea here. So only the relative share matters.
That’s a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can’t causally affect.
Another argument to ignore “total measure” comes from the many-worlds interpretation: as the world branches, my total measure should decline by many orders of magnitude every second, but it doesn’t affect my decision making.
>as the world branches, my total measure should decline by many orders of magnitude every second
I’m not sure why you think that. From any moment in time, it’s consistent to count all future forks toward my personal identity without having to count all other copies that don’t causally branch from my current self. Perhaps this depends on how we define personal identity.
>but it doesn’t affect my decision making.
Perhaps it should, tempered by the possibility that your assumptions are incorrect, of course.
Another accounting trick: count futures where you don’t exist as neutral perspectives of your personal identity (empty consciousness). This should collapse the distinction between total and relative measure. Yes, it’s a trick, but the alternative is even more counterintuitive to me.
Consider a classic analogy: you’re in a hypothetical situation where your future contains only negative utility. Say you suffer −5000 utils per unit time for the next 10 minutes, and then you die with certainty. But you have the option of adding another 10 trillion years of life at −4999 utils per unit time. If we regard relative rather than total measure, this should be preferable, because your average utility will be roughly −4999 utils per unit time rather than −5000. But it is clearly a much more horrible fate.
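To make the comparison concrete, here is a minimal arithmetic sketch (assuming, purely for illustration, that the “unit time” in the analogy is one minute; the numbers are the ones given above):

```python
# Toy comparison of total vs. average (relative) utility for the two scenarios.
# Assumption: "unit time" is one minute, as a stand-in for the analogy above.

MINUTES_PER_YEAR = 365.25 * 24 * 60

# Scenario A: -5000 utils/minute for 10 minutes, then certain death.
a_duration = 10
a_total = -5000 * a_duration           # -50,000 utils
a_average = a_total / a_duration       # -5000 utils/minute

# Scenario B: the same 10 minutes, plus 10 trillion years at -4999 utils/minute.
b_extra = 10e12 * MINUTES_PER_YEAR
b_duration = a_duration + b_extra
b_total = a_total + (-4999 * b_extra)  # about -2.63e22 utils
b_average = b_total / b_duration       # about -4999 utils/minute

print("A:", a_total, a_average)  # A: -50000 -5000.0
print("B:", b_total, b_average)  # B: ~-2.63e+22 ~-4999.0

# Average (relative) measure ranks B above A (-4999 > -5000 per minute),
# while total measure ranks B as astronomically worse, matching the intuition above.
```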
I always found average utilitarianism unattractive because of mere-addition problems like this, in addition to all the other problems utilitarianisms have.
>Paradoxically, if a person doesn’t sign up for cryonics and expresses the desire not to be resurrected by other means, say, resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn’t care about her decisions. Many of these worlds are s-risk worlds.
This seems to depend on how much weight you put on resurrection being possible without being frozen. Many people consider even the probability of resurrection with freezing to be negligible, and without freezing to be impossible. If this is how your probabilities fall, then the chance of s-risk has less to do with the AI caring about your decisions, and more to do with the AI being physically able to resurrect you.
If I care only about the relative share of the outcomes, the total resurrection probability doesn’t matter. E.g. if there are 1,000,000 timelines, and I will be resurrected in 1000 of them, and 700 of those are s-risk timelines, then my P(s-risk | alive in the future) = 700/1000 = 0.7.
If I care about the total world share (including the remaining 999,000 timelines), I should choose absurd actions that increase my total share in the world, for example forgetting things and merging with other timelines; more here.
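For concreteness, here is how the two ways of counting work out with the numbers above (a minimal sketch; the timeline counts are just the ones from the example):

```python
# Timeline counts from the example above.
total_timelines = 1_000_000
resurrected = 1_000   # timelines in which I am resurrected
s_risk = 700          # resurrected timelines that are s-risk worlds

# Relative share: condition on being resurrected at all.
p_s_risk_given_alive = s_risk / resurrected   # 0.7

# Total share: count every timeline, including those where I never wake up.
p_alive = resurrected / total_timelines       # 0.001
p_s_risk_total = s_risk / total_timelines     # 0.0007

print(p_s_risk_given_alive, p_alive, p_s_risk_total)  # 0.7 0.001 0.0007
```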