If you refer to “cryonics revival before catastrophe or FAI”, I mean that catastrophe or FAI could still happen (shortly) after revival; neither-catastrophe-nor-superintelligence seems very unlikely. I expect catastrophe to be very likely after WBE, which also accounts for most of the probability of revival not happening after WBE. After WBE, greater technological capability argues for a lower FAI-to-catastrophe ratio, while better FAI theory argues otherwise.
So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?
Yes. Still, if implemented as WBEs, they could live for a significant subjective time, and then there’s that 2% chance of FAI.
In total, you’re assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people’s estimates, even those of people “in the know”. Do you have any thoughts on what is causing the difference?
I expect that “no catastrophe” is almost the same as “eventually, FAI is built”. I don’t expect a non-superintelligent singleton that prevents most risks (so that it could build a FAI eventually). Whenever FAI is feasible, I expect UFAI to be feasible too, but easier, and so more likely to come first; UFAI is also possible when FAI is not yet feasible (the theory isn’t ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.