In total, you’re assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people “in the know”. Do you have any thoughts on what is causing the difference?
I expect that “no catastrophe” is almost the same as “eventually, FAI is built”. I don’t expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI to be feasible too, but easier, and so more likely to come first; UFAI also remains possible when FAI is not yet feasible (the theory isn’t ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.
So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?
Yes. Still, if implemented as WBEs, they could live for a significant subjective time, and then there’s that 2% chance of FAI.