I expect that “no catastrophe” is almost the same as “eventually, FAI is built”. I don’t expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI to be feasible too, but easier, and so more likely to come first; UFAI is also possible when FAI is not yet feasible (the theory isn’t ready). In physical time, WBE sets a soft deadline: it makes either catastrophe or superintelligence happen sooner.