This trips my too-good-to-be-true alarms, but has my provisional attention anyway. The main reasons I’m not signed up for cryonics are cost, inconvenience, and s-risks. Eliminating cost (and cost-related inconveniences) could move me...but I want to know how this institution differs such that they can offer such storage at low or no cost, where others don’t or can’t.
I mean, it's not a big secret: there's a wealthy person behind it. And there are two potential motivations for it:
1) altruistic/mission-driven
2) having more cases helps improve the service, which can benefit themselves as well.
But also, Oregon Brain Preservation is less expensive as a result of:
1) doing brain-only preservation (Alcor doesn't extract the brain for its neuro cases)
2) using chemical preservation, which doesn't require LN2 (LN2 represents a significant portion of the cost)
3) not including the cost of stand-by, which is also a significant portion (i.e. staying at your bedside in advance until you die)
4) collaborating with local funeral homes (instead of having a fully in-house team that can be deployed anywhere)
5) only offering the service locally (no flights)
I visited Oregon Brain Preservation, talked with Jordan Sparks, exchanged emails, and have been following them for many years; Jordan seems really solid IMO.
The Cryonics Germany people seem very caring and seem to understand well how to work with a thanatologist. I've also had email exchanges with them, but not as many.
🤷♂️
Concerns about personal s-risks make sense.
Who is the wealthy person?
not including the cost of stand-by, which is also a significant portion (i.e. staying at your bedside in advance until you die)

I assumed this was an overstatement. A quick check shows I'm wrong: TomorrowBio offers whole-body preservation (€200k) or brain-only preservation (€60k). The 'standby, stabilisation and transport' (SST) services (included in those prices) amount to €80k and €50k respectively. I expected them to be much less.
That said, they still set aside €10k for long-term storage of the head. I guess this means your head has a higher chance of being stored safely.
We're increasing the price to €75k for brain-only: €15k for long-term storage and €60k for SST. Without good SST it's not "cryopreservation", it's freezing people.
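For what it's worth, the figures in this thread are internally consistent. A quick sanity check, under my assumption that each price decomposes as SST plus long-term storage:

```python
# Figures quoted in this thread, in thousands of euros. The decomposition
# "total = SST + long-term storage" is my assumption about the pricing.

old_brain_total = 60  # brain-only price quoted above
old_brain_sst = 50    # standby, stabilisation and transport (SST)
print(old_brain_total - old_brain_sst)  # 10 -> the €10k storage set-aside

new_brain_sst = 60
new_brain_storage = 15
print(new_brain_sst + new_brain_storage)  # 75 -> the announced €75k total
```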
And "Cryonics is free" is really a bad title. Not just because it's not true, but also because the organizations that offer it pro bono (paid for by third parties) should only be used by people who can't otherwise afford it. Otherwise, they will soon cease to exist due to limited funding.
(disclaimer: I run tomorrow.bio)
Btw, I'm happy to answer any questions re cryopreservation if anybody is interested, just reach out.
You might want to know that I took a look through the site and was curious, but I closed the page the moment the "Calculate your contribution" form refused to show me the pricing options unless I gave it an email address.
Oregon Brain Preservation uses a technique allowing fridge-temperature storage, and they seem well funded, so idk if the argument works out.
Idk the finances of Cryonics Germany, but I would indeed guess that Tomorrow Bio has more funding and provides better SST. I would recommend using Tomorrow Bio over Cryonics Germany if you can afford it.
To add to bogdanb's comment below: you might want to be careful, because you seem to be 'forcing' people to subscribe to promotional newsletters in order to get a price quote, which, aside from being quite a nasty thing to do, is also a blatant violation of European GDPR regulations, for which you could receive a hefty fine.
I don’t understand the s-risk consideration.
Suppose Alice lives naturally for 100 years and is cremated. And suppose Bob lives naturally for 40 years, then has his brain frozen for 60 years, and then has his brain cremated. The odds that Bob gets tortured by a spiteful AI should be pretty much exactly the same as for Alice. Basically, it's the odds that spiteful AIs appear before 2034.
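To make that concrete, here's a minimal sketch assuming hostile AIs arrive as a Poisson process at some constant rate (the 0.5%/year rate is a made-up illustrative number, not a claim). What matters is only the length of the window during which your brain is available:

```python
import math

def p_hostile_arrival(window_years: float, rate_per_year: float = 0.005) -> float:
    """Chance a hostile AI appears while you are still 'available' (alive or
    preserved), assuming arrivals follow a Poisson process. The rate is a
    purely illustrative guess."""
    return 1 - math.exp(-rate_per_year * window_years)

# Alice: alive 100 years, then cremated        -> 100-year exposure window.
# Bob:   alive 40 years + frozen 60, cremated  -> also a 100-year window.
print(p_hostile_arrival(100))  # identical for Alice and Bob: ~0.39
# A shorter window (e.g. cremation at 40 with no preservation) is lower:
print(p_hostile_arrival(40))   # ~0.18
```

Equal windows give equal risk, which is the point; the comparison only changes if the windows differ in length.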
If you're alive, you can kill yourself when s-risk increases beyond your comfort point. If you're preserved, then you rely on other people to execute those wishes.
Killing oneself with high certainty of effectiveness is more difficult than most assume. The side effects on health and personal freedom of a failed attempt to end one’s life in the current era are rather extreme.
Anyway, emulating or reviving humans will always incur some cost; I suspect that those who are profitable to emulate or revive will get a lot more emulation time than those who are not.
If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it? I think it’s far more likely that an unfriendly agent will simply disregard suffering in pursuit of some other goal. I’ve spent my regular life trying to figure out how to accomplish arbitrary goals more effectively with less suffering, so more of the same set of challenges in an afterlife would be nothing new.
Killing oneself with high certainty of effectiveness is more difficult than most assume.

Dying naturally also isn't as smooth as plenty of people assume. I'm pretty sure that "taking things into your own hands" leads to a greater expected reduction in suffering in most cases, and it's not informed rational analysis that prevents people from taking that option.
If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it?

Yes? I mean, unless we entertain some extreme abstractions, like it simulating all possible minds of a certain complexity or whatever.
Right, but you might prefer:
living now >
not living, no chance of revival or torture >
not living, chance of revival later and chance of torture
It’s not obvious to me that those are the same, though they might be. Either way, it’s not what I was thinking of. I was considering the Bob-1 you describe vs. a Bob-2 that lives the same 40 years and doesn’t have his brain frozen. It seems to me that Bob-1 (40L + 60F) is taking on a greater s-risk than Bob-2 (40L+0F).
(Of course, Bob-1 is simultaneously buying a shot at revival, which is the whole point after all. Tradeoffs are tradeoffs.)
Against the s-risk concern: hostile low-quality resurrection is almost inevitable (think of AI scammers who clone voices), so it's better to have a high-quality resurrection by a non-hostile agent, who may also ensure that the resurrected you has higher measure than your low-quality copies.
Why is hostile low-quality resurrection almost inevitable? If you want to clone someone into an em, why not pick a living human?
Frozen people have potential brain damage and an outdated understanding of the world.
Low-quality resurrections by bad actors are already proliferating. Two examples are voice cloning by scammers and recommendation systems built by social networks. Also AI-generated revenge porn in South Korea.
The main question is what level of similarity is enough to preserve my personal identity. The bad variant here would be if a mere identity token is enough, that is, a short string of data that identifies me: my name, profession, location, and a few kilobytes of other properties. This is the list of things I remember in the morning when I am trying to recall who I am. In that case, producing low-quality but equally identity-bearing copies will be easy.
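To give a feel for how small such an identity token would be, here's a hypothetical sketch (all fields and values invented for illustration):

```python
import json

# A hypothetical "identity token": the short list of facts described above
# that you use each morning to recognise yourself. Every value is invented.
identity_token = {
    "name": "Jane Doe",
    "profession": "software engineer",
    "location": "Berlin",
    "salient_memories": ["childhood home", "first job", "partner's face"],
    "personality_sketch": "curious, risk-averse, fond of puns",
}

size = len(json.dumps(identity_token).encode("utf-8"))
print(f"{size} bytes")  # a few hundred bytes; even a much richer token
# stays within a few kilobytes, which is exactly the worrying scenario
```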
[epistemic status: low confidence. I’ve noodled on this subject more than once recently (courtesy of Planecrash), but not all that seriously]
The idea of resurrectors optimizing the measure of resurrect-ees isn’t one I’d considered, but I’m not sure it helps. I think the Future is much more likely to be dominated by unfriendly agents than friendly ones. Friendly ones seem more likely to try to revive cryo patients, but it’s still not obvious to me that rolling those dice is a good idea. Allowing permadeath amounts to giving up a low probability of a very good outcome to eliminate a high(...er) probability of a very bad outcome.
Adding quantum measure doesn’t change that much, I don’t think; hypothetical friendly agents can try to optimize my measure, but if they’re a tiny fraction of my Future then it won’t make much difference.
Adding the infinite MUH is more complicated; it implies that permadeath is probably impossible (which is frightening enough on its own), and it’s not clear to me what cryo does in that case. Suppose my signing up for cryo is 5% likely to “work”, and independently suppose that humanity is 1% likely to solve the aging problem before anyone I care about dies; does signing up under those conditions shift my long-run measure away from futures where I and my loved ones simply got the cure and survived, and towards futures where I’m preserved alone and go senile first? I’m not sure, but if I take MUH as given then that’s the sort of choice I’m making.
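Taking the stated numbers at face value (5% that cryo "works", 1% that aging is solved in time, treated as independent), the joint outcomes are just arithmetic; this is only the bookkeeping of the thought experiment, not a claim about the real probabilities:

```python
p_cryo_works = 0.05    # stated above: signing up is 5% likely to "work"
p_cure_in_time = 0.01  # stated above: 1% chance aging is solved in time

outcomes = {
    "cure arrives, nobody needs cryo": p_cure_in_time,
    "no cure, cryo works":             (1 - p_cure_in_time) * p_cryo_works,
    "no cure, cryo fails":             (1 - p_cure_in_time) * (1 - p_cryo_works),
}
for name, p in outcomes.items():
    print(f"{name}: {p:.4f}")  # 0.0100, 0.0495, 0.9405
# Under MUH, the question above is whether signing up shifts long-run
# measure from the first outcome toward the second.
```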
I think low-quality resurrections by bad agents are almost inevitable; voice cloning by scammers is happening now. But such low-quality resurrections will lack almost all of my childhood memories and all fine details. From the point of view of pain ("pain-view", if I can say that), though, they will be almost like me, since in the moment of pain fine-grained childhood memories are not important.
Friendly AIs may literally tile light cones with my copies to reach measure domination, so even if they are 0.01 percent of all AIs, they can still succeed (they may need to use some acausal trade between themselves to do it better, as I described here).
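The arithmetic behind "a 0.01 percent minority can still dominate measure" is straightforward: the friendly AIs just need to out-copy everyone else by a large enough factor. A toy calculation with invented copy counts:

```python
friendly_fraction = 0.0001  # 0.01% of all AIs, as in the comment above

# If each friendly AI makes N copies of you while each unfriendly AI makes
# one, the friendly share of your total measure is:
for n_copies in (1, 10_000, 1_000_000):
    friendly = friendly_fraction * n_copies
    unfriendly = (1 - friendly_fraction) * 1
    share = friendly / (friendly + unfriendly)
    print(f"{n_copies:>9} copies each -> friendly share {share:.1%}")
# 1 copy -> 0.0%; 10,000 copies -> ~50%; 1,000,000 copies -> ~99%.
```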
I don't think killing yourself before entering the cryotank vs. after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷♂️