Do we know how common it is for someone who thought they were signed up for cryonics to not actually be frozen because something was screwed up with the paperwork?
I’m not aware of any such cases. Perhaps someone more involved with cryonics can comment on this?
As to accidental death and related issues, I’m not sure. Given the analysis you’ve added in this comment, I’m more inclined to accept your original number.
Something might kill a lot of people at once. This is unlikely, but is it more than 1% unlikely?
Well, if something killed a lot of people at once, it would likely be a heavily traumatic event, so that would already be accounted for. Even a sudden plague would create disruption. So I don’t think there’s any remotely likely scenario where a disaster causes a sudden influx of cryopatients without triggering one of the other failure modes. I have trouble even imagining what that sort of situation would look like: maybe a meteorite striking a cryonics convention?
I didn’t know about this. That is worrisome. Would you put it closer to 0.25?
Unsure. The cryonics law seems to be a fluke; see the other reply to my remark, which notes that the law in practice isn’t nearly as restrictive as one might think.
Direct revival seems really impractical to me. Should it?
I don’t think that direct revival seems substantially more impractical than uploading. I suspect that uploading will likely come first, but I don’t see why sufficiently advanced nanotech couldn’t handle direct revival. There’s also a non-trivial number of cryonauts and people considering cryonics who are more comfortable with revival than uploading.
I convert all the probabilities of failure to probabilities of success by subtracting them from 1. Then I multiply them all and subtract the result from one.
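A minimal sketch of that calculation in Python, assuming the failure probabilities are independent; the numbers below are hypothetical placeholders, not estimates from this discussion:

```python
# Convert each failure probability to a success probability,
# multiply them together, and subtract the result from one.
failure_probs = [0.10, 0.05, 0.25, 0.02]  # made-up example values

p_success = 1.0
for p in failure_probs:
    p_success *= 1.0 - p

p_failure = 1.0 - p_success
print(round(p_failure, 3))  # 0.372 for these example numbers
```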
Well, yeah, this works if they are all independent probabilities. But some of them are clearly not. For example, a lot of the worst-case post-preservation problems are likely to be correlated with each other (many of them have large-scale catastrophes as likely causes). Those should then reduce the chance of failure below the independent estimate. But at the same time, other possibilities are essentially exclusive: say, dying from Alzheimer’s versus dying from traumatic brain injury at a young age. That sort of thing should result in an increased total probability. Working out how these all interact might require a much more complicated model (you mentioned the Drake equation as an inspiration, and it is interesting to note that it runs into very similar issues). But I agree that as a very rough approximation, you can assume that everything is independent and probably not be too far off.
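To make the direction of both effects concrete, here is a toy comparison with two made-up failure modes at 0.1 each: positive correlation pulls the combined risk below the independent estimate, while mutual exclusivity pushes it above.

```python
# Two hypothetical failure modes, each with probability 0.1.
# The chance that at least one occurs depends on how they relate:
p_a = p_b = 0.1

independent = 1 - (1 - p_a) * (1 - p_b)  # 0.19: the rough approximation
correlated = max(p_a, p_b)               # 0.10: perfectly correlated (they strike together)
exclusive = p_a + p_b                    # 0.20: mutually exclusive (they never co-occur)

print(independent, correlated, exclusive)
```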
So I don’t think there’s any remotely likely scenario where a disaster causes a sudden influx of cryopatients without triggering one of the other failure modes. I have trouble even imagining what that sort of situation would look like: maybe a meteorite striking a cryonics convention?
How about a deliberate attack at a cryonics convention? There was stuff about nanotech researchers getting bombs in the mail; I don’t see why it wouldn’t happen to cryonicists, especially a couple of decades from now, when cryonics might be more popular (i.e., higher on the public radar) than now.
On that note, at first thought it doesn’t seem like it would take enormous effort for someone to sabotage a cryonics facility. (Remember, you don’t have to destroy it completely; you just need to partially thaw, or otherwise damage, what I imagine are closely packed brains. Given that they’re stored in liquid nitrogen, even just fracturing the dewars might be enough: you can’t quite send someone in to fix them until the temperature rises a lot.)
This might not be a big risk today, but if cryonics does get even a little bit mainstream in the future it’s easy to imagine all sorts of people who might want to do that.
(A well-meaning if not quite reasonable person who just wants to save the thousands of frozen people from missing out on Heaven is what my brain popped out right now. I’m sure reality will find something even sillier.)
That’s a really good set of points. This almost suggests that a sufficiently selfish cryonicist might want to optimize how popular cryonics becomes: popular enough to provide long-term security and pull, but not so popular as to become a target.
The other benefit would be on the revival side. My brain’s information is more interesting the fewer peers I have from my own era. These revival problems are actually one of my larger concerns. I can’t imagine why anyone would want to run an upload for more time than it would take to have a few conversations.
I tried to define them to be independent. So my probabilities were supposed to be like P(A | not B, not C, …), the chance of a given failure mode occurring given that none of the others did. In some cases they are probably unrelated. Then we can simplify: P(A | not B, not C, …) can be almost perfectly approximated as P(A).
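Spelled out as a sketch (the standard chain rule, with f_i standing for the i-th failure mode), the exact decomposition and the independence approximation are:

```latex
P(\text{failure}) = 1 - \prod_i P(\neg f_i \mid \neg f_1, \ldots, \neg f_{i-1})
                  \approx 1 - \prod_i \bigl(1 - P(f_i)\bigr)
```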
Right, but some of them are clearly not independent. See my example of forms of death that are essentially exclusive.