I think I’m more skeptical than you are that it’s possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented).
Where do you think the superintelligent AIs will come from? AFAICT it doesn’t make sense to put more than 20% on AGI arriving before massive international institutional collapse, even being fairly charitable to both AGI projects and the prospective longevity of current institutions.
Huh, I notice I’ve not explicitly estimated my timeline distribution for massive international institutional collapse, and that I want to do that. Do you have any links to places where others/you have thought about it?