I largely agree with you that "EVERYTHING becomes hard to predict"; that is partly what I meant to allude to with the introductory caveat in my comment. I can imagine ungraspably transformative superintelligence well within our lifetime, and cannot give much more advice for that scenario. Still, I keep a non-zero probability on the world and socio-economic structures remaining, for whatever reasons, more recognizable, in which case #1 and #2 still seem like reasonably natural defaults. But yes, they may apply only in a fairly narrow band of imaginable AI-transformed futures.