Against s-risk concern: Hostile low-quality resurrection is almost inevitable (think of AI scammers who clone voices), so it is better to have a high-quality resurrection by a non-hostile agent, who may also ensure that the resurrected you has higher measure than your low-quality copies.
Why is hostile low-quality resurrection almost inevitable? If you want to clone someone into an em, why not pick a living human?
Frozen people have potential brain damage and an outdated understanding of the world.
Low-quality resurrections are already being produced by bad actors. Two examples are voice cloning by scammers and the recommendation systems run by social networks. A third is the AI-generated revenge porn in South Korea.
The main question is what level of similarity is enough to ensure my personal identity. The bad variant here would be if an identity token alone is enough; that is, a short string of data that identifies me and includes my name, profession, location, and a few kilobytes of other properties. This is the list of things I remember in the morning when I am trying to recognize who I am. In that case, producing low-quality but identity-equivalent copies will be easy.
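To make the "identity token" concrete, here is a hypothetical sketch; the field names and values are my own illustration, not part of the argument:

```python
from dataclasses import dataclass, field

# Hypothetical "identity token": the short record of properties described
# above -- the things remembered each morning. A few kilobytes at most.
@dataclass
class IdentityToken:
    name: str
    profession: str
    location: str
    other_properties: dict = field(default_factory=dict)  # a few KB of extras

# Example token; all values are invented for illustration.
me = IdentityToken(
    name="A. N. Example",
    profession="researcher",
    location="Prague",
    other_properties={"native_language": "Russian", "favorite_book": "..."},
)
# If this record alone sufficed for personal identity, mass-producing
# low-quality but identity-equivalent copies would be trivial.
```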
[epistemic status: low confidence. I’ve noodled on this subject more than once recently (courtesy of Planecrash), but not all that seriously]
The idea of resurrectors optimizing the measure of resurrect-ees isn’t one I’d considered, but I’m not sure it helps. I think the Future is much more likely to be dominated by unfriendly agents than friendly ones. Friendly ones seem more likely to try to revive cryo patients, but it’s still not obvious to me that rolling those dice is a good idea. Allowing permadeath amounts to giving up a low probability of a very good outcome to eliminate a high(...er) probability of a very bad outcome.
Adding quantum measure doesn’t change that much, I don’t think; hypothetical friendly agents can try to optimize my measure, but if they’re a tiny fraction of my Future then it won’t make much difference.
Adding the infinite MUH is more complicated; it implies that permadeath is probably impossible (which is frightening enough on its own), and it’s not clear to me what cryo does in that case. Suppose my signing up for cryo is 5% likely to “work”, and independently suppose that humanity is 1% likely to solve the aging problem before anyone I care about dies; does signing up under those conditions shift my long-run measure away from futures where I and my loved ones simply got the cure and survived, and towards futures where I’m preserved alone and go senile first? I’m not sure, but if I take MUH as given then that’s the sort of choice I’m making.
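Spelling out the arithmetic I'm gesturing at, a minimal sketch assuming the placeholder numbers above and independence:

```python
# Toy arithmetic for the scenario above, using the placeholder figures
# (5% that cryo "works", 1% that aging is solved in time), assumed independent.
p_cryo_works = 0.05
p_cure_in_time = 0.01

outcomes = {
    "cure arrives in time (no cryo needed)": p_cure_in_time,
    "no cure, but cryo works (preserved, possibly alone)":
        (1 - p_cure_in_time) * p_cryo_works,
    "no cure and cryo fails": (1 - p_cure_in_time) * (1 - p_cryo_works),
}

for name, p in outcomes.items():
    print(f"{name}: {p:.4f}")
# Sums to 1.0; under MUH the question is which branches my measure
# concentrates in, not just which branch is most likely.
```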
I think low-quality resurrections by bad agents are almost inevitable – voice cloning by scammers is happening now. Such low-quality resurrections will lack almost all my childhood memories and all fine details. But from the pain-view (if I can coin the term), it will be almost me, since in a moment of pain fine-grained childhood memories are not important.
Friendly AIs may literally tile light cones with my copies to reach measure domination, so even if they are 0.01 percent of all AIs, they can still succeed (they may need to use some acausal trade between themselves to do it better, as I described here).
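As a toy illustration of why a tiny friendly minority can still dominate measure by out-copying everyone else (all parameters below are invented for the example):

```python
# Toy model of measure domination: even a 0.01% minority of friendly AIs
# dominates if each runs vastly more copies. All numbers are invented.
friendly_fraction = 0.0001      # friendly AIs as a share of all AIs
copies_per_friendly = 10**12    # copies each friendly AI runs (hypothetical)
copies_per_hostile = 10**3      # copies each hostile AI runs (hypothetical)

friendly_measure = friendly_fraction * copies_per_friendly
hostile_measure = (1 - friendly_fraction) * copies_per_hostile

share = friendly_measure / (friendly_measure + hostile_measure)
print(f"friendly share of total copy-measure: {share:.5f}")  # ~0.99999
```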
I don’t think killing yourself before entering the cryotank vs. after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷‍♂️