[epistemic status: low confidence. I’ve noodled on this subject more than once recently (courtesy of Planecrash), but not all that seriously]
The idea of resurrectors optimizing the measure of resurrect-ees isn’t one I’d considered, but I’m not sure it helps. I think the Future is much more likely to be dominated by unfriendly agents than friendly ones. Friendly ones seem more likely to try to revive cryo patients, but it’s still not obvious to me that rolling those dice is a good idea. Allowing permadeath amounts to giving up a low probability of a very good outcome to eliminate a high(...er) probability of a very bad outcome.
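To make that trade-off concrete, here is a toy expected-value comparison; every probability and utility in it is my own invention for illustration, not a claim about the real numbers:

```python
# Toy expected-value framing of the permadeath-vs-cryo trade-off above;
# all numbers are invented for illustration.
p_good, u_good = 0.05, 100.0    # revived into a friendly future
p_bad,  u_bad  = 0.30, -1000.0  # revived into an unfriendly future
u_dead = 0.0                    # permadeath taken as the zero point

# Remaining probability mass (cryo simply fails) also lands at ~u_dead.
ev_cryo = p_good * u_good + p_bad * u_bad
print(f"EV(cryo) = {ev_cryo:+.1f}, EV(permadeath) = {u_dead:+.1f}")
# With these numbers the bad tail dominates (-295 vs 0); different, equally
# defensible numbers flip the conclusion, which is the whole difficulty.
```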
Adding quantum measure doesn’t change that much, I don’t think; hypothetical friendly agents can try to optimize my measure, but if they’re a tiny fraction of my Future then it won’t make much difference.
Adding the infinite MUH is more complicated; it implies that permadeath is probably impossible (which is frightening enough on its own), and it’s not clear to me what cryo does in that case. Suppose my signing up for cryo is 5% likely to “work”, and independently suppose that humanity is 1% likely to solve the aging problem before anyone I care about dies; does signing up under those conditions shift my long-run measure away from futures where I and my loved ones simply got the cure and survived, and towards futures where I’m preserved alone and go senile first? I’m not sure, but if I take MUH as given then that’s the sort of choice I’m making.
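If MUH means long-run measure flows only through survival branches, the split between those two outcome classes can be sketched from the numbers above; the partition into outcomes is my own assumption:

```python
# Back-of-envelope for the scenario above, using the comment's hypothetical
# numbers (5% cryo success, 1% aging cure, treated as independent).
p_cryo = 0.05  # P(cryopreservation "works")
p_cure = 0.01  # P(aging solved before anyone I care about dies)

cured_together  = p_cure                 # everyone gets the cure in time
preserved_alone = (1 - p_cure) * p_cryo  # no cure in time, but cryo works

survival = cured_together + preserved_alone
print(f"P(cured together | survived)  = {cured_together / survival:.2f}")   # ~0.17
print(f"P(preserved alone | survived) = {preserved_alone / survival:.2f}")  # ~0.83
```

Under these (hypothetical) numbers, most of the conditional-on-survival measure lands in the "preserved alone" class, which is exactly the worry.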
I think low-quality resurrections by bad agents are almost inevitable: voice cloning by scammers is happening already. Such low-quality resurrections will lack almost all my childhood memories and all fine details, but from the pain-view (if I can coin that term) they will be almost like me, since in the moment of pain fine-grained childhood memories are not important.
Friendly AIs may literally tile light cones with my copies to reach measure domination, so even if they are 0.01 percent of all AIs, they can still succeed (they may need to use some acausal trade among themselves to do it better, as I described here).
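A quick toy calculation of why minority status need not prevent measure domination; every count here is invented, the point is only the ratio:

```python
# Toy illustration: a small minority of friendly AIs can still hold most of
# a person's measure if each one runs vastly more copies than a typical
# unfriendly AI does. All numbers are hypothetical.
frac_friendly     = 1e-4  # friendly AIs as a fraction of all AIs (0.01%)
copies_friendly   = 1e9   # copies each friendly AI runs (light-cone tiling)
copies_unfriendly = 1e3   # incidental low-quality copies per unfriendly AI

measure_friendly   = frac_friendly * copies_friendly
measure_unfriendly = (1 - frac_friendly) * copies_unfriendly

share = measure_friendly / (measure_friendly + measure_unfriendly)
print(f"share of my measure held by friendly AIs: {share:.2f}")  # ~0.99
```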
I don’t think killing yourself before entering the cryotank vs. after is qualitatively different, but the latter maintains option value (in that specific regard re: MUH) 🤷‍♂️