Rather, I believe the probability of cryonics producing a favorable outcome to be lower. This was a confusing question, because it wasn’t specified whether the estimate is the total probability of success; if it is, then the probability of global catastrophe has to be taken into account, and, depending on your expectation about the usefulness of frozen heads to a FAI’s values, the probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I’m almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as “doesn’t believe in cryonics”?
(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living to 1000 doesn’t happen, so it’s a heavy discount factor. If there is a FAI, it’s also unclear whether the original individuals remain and whether it makes sense to count their individual lifespans.)
Good point, and I think it explains one of the funny results that I found in the data. There was a relationship between strength of membership in the LW community and the answers to a lot of the questions, but the anti-agathics question was the one case where there was a clear non-monotonic relationship. People with a moderate strength of membership (nonzero but small karma, having read 25-50% of the sequences, or having been in the LW community for 1-2 years) were the most likely to think that at least one currently living person will reach an age of 1,000 years; those with a stronger or weaker tie to LW gave lower estimates.
There was some suggestion of a similar pattern on the cryonics question, but it was only there for the sequence reading measure of strength of membership and not for the other two.
Do you think catastrophe is extremely probable, do you think frozen heads won’t be useful to a Friendly AI’s value, or is it a combination of both?
Below is my attempt to re-do the calculations that led to that conclusion (this time, it’s 4%).
FAI before WBE: 3%.
Surviving to WBE: 60%.
I assume cryonics revival is feasible mostly only after WBE.
Given WBE, cryonics revival (actually happening for a significant portion of cryonauts) before catastrophe or FAI: 10%.
FAI given WBE (but before cryonics revival): 2%.
Heads preserved long enough (given no catastrophe): 50%.
Heads (equivalently, living humans) mattering/useful to FAI: less than 50%.
In total: 6% for post-WBE revival potential and 4% for FAI revival potential; discounted by the 50% preservation probability and the 50% mattering-to-FAI probability, this gives about 4%.
(By “humans useful to FAI”, I don’t mean that specific people should be discarded, but that the difference in the utility of the future between the case where a given human is initially present and the case where they are lost is significantly less than the moral value of a current human life, so it might be better to keep them than not, but not that much better, for fungibility reasons.)
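For concreteness, here is a minimal Python sketch of one way the numbers above could combine into the ~4% figure. The combination structure (in particular, that the 60% survival factor multiplies both the post-WBE branch and the post-WBE FAI branch) is an inference from the stated 6%/4% subtotals, not something spelled out explicitly above.

    # Rough reconstruction of the estimate above. How the pieces combine is
    # inferred from the stated ~6% / ~4% subtotals, so treat this as a sketch.
    p_fai_before_wbe = 0.03     # FAI before WBE
    p_survive_to_wbe = 0.60     # surviving to WBE
    p_revival_given_wbe = 0.10  # revival before catastrophe or FAI, given WBE
    p_fai_given_wbe = 0.02      # FAI after WBE but before revival
    p_preserved = 0.50          # heads preserved long enough (given no catastrophe)
    p_matter_to_fai = 0.50      # heads mattering/useful to FAI (upper bound)

    # "post-WBE revival potential": WBE comes first, we survive to it, revival happens there
    p_wbe_revival = (1 - p_fai_before_wbe) * p_survive_to_wbe * p_revival_given_wbe
    # "FAI revival potential": FAI comes before WBE, or after WBE but before revival
    p_fai_revival = p_fai_before_wbe + (1 - p_fai_before_wbe) * p_survive_to_wbe * p_fai_given_wbe

    total = p_wbe_revival * p_preserved + p_fai_revival * p_preserved * p_matter_to_fai
    print(p_wbe_revival, p_fai_revival, total)  # ~0.058, ~0.042, ~0.040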
I’m trying to sort this out so I can add it to the collection of cryonics Fermi calculations. Do I have this right:
Either we get FAI first (3%) or WBE (97%). If WBE, there’s a 60% chance we die out first. Once we do get WBE, but before revival, there’s an 88% chance of catastrophe and a 2% chance of FAI, leaving a 10% chance of revival. There’s a 50% chance heads are still around.
If at any point we get FAI, then 50% chance heads are still around and 50% chance it’s interested in reviving us.
So, combining it all:
(0.5 heads still around) *
  ((0.03 FAI first) * (0.5 humans useful to FAI) +
   (0.97 WBE first) * (0.4 don't die first) *
     ((0.02 FAI before revival) * (0.5 humans useful to FAI) +
      (0.1 revival with no catastrophe or FAI)))
= 0.5 * (0.03*0.5 + 0.97*0.4*(0.02*0.5 + 0.1))
≈ 2.9%
This is less than your 4%, but I don’t see where I’m misinterpreting you.
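For reference, a quick numeric check of the expression above (the variable names below are just labels for the bracketed factors). Note that if the survival factor is read as 0.6, matching the earlier “Surviving to WBE: 60%” line, rather than 0.4, the same expression comes out to roughly 4%, which may be where the two estimates diverge.

    # Plugging the numbers from the breakdown above into the expression.
    heads_still_around = 0.5
    fai_first = 0.03
    humans_useful_to_fai = 0.5
    wbe_first = 0.97
    dont_die_first = 0.4   # 0.6 if read from "Surviving to WBE: 60%"
    fai_before_revival = 0.02
    revival_no_catastrophe_or_fai = 0.1

    p = heads_still_around * (
        fai_first * humans_useful_to_fai
        + wbe_first * dont_die_first * (
            fai_before_revival * humans_useful_to_fai
            + revival_no_catastrophe_or_fai
        )
    )
    print(p)  # ~0.029 with 0.4; ~0.040 with 0.6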
Do you also think that the following events are so close to impossible that approximating them at 0% is reasonable?
The cryonics process doesn’t preserve everything
You die in a situation (location, legality, unfriendly hospital, …) where you can’t be frozen quickly enough
The cryonics people screw up in freezing you
“Heads (equivalently, living humans) mattering/useful to FAI: less than 50%.”
For an evidently flexible definition of ‘Friendly’. Along the lines of “Friendly to someone else perhaps but that guy’s a jerk who literally wants me dead!”
I’m not sure how to interpret the uploads-after-WBE-but-not-FAI scenario. Does that mean FAI never gets invented, possibly in a Hansonian world of eternally competing ems?
If you’re referring to “cryonics revival before catastrophe or FAI”, I mean that catastrophe or FAI could happen (shortly) after; no-catastrophe-or-superintelligence seems very unlikely. I expect catastrophe to be very likely after WBE, which also accounts for most of the probability of revival not happening after WBE. After WBE, greater technological capability argues for a lower FAI-to-catastrophe ratio, while better FAI theory argues otherwise.
So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?
Yes. Still, if implemented as WBEs, they could live for significant subjective time, and then there’s that 2% chance of FAI.
In total, you’re assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people “in the know”. Do you have any thoughts on what is causing the difference?
I expect that “no catastrophe” is almost the same as “eventually, FAI is built”. I don’t expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI is feasible too, but easier, and so more likely to come first in that case; UFAI is also possible when FAI is not yet feasible (the theory isn’t ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.