Can you say more about why you believe this? At first glance, it seems to me like “fundamental instability” is much more tied to how AI development goes, so I would’ve expected it to be more tractable [among LW users].
Maybe “simpler” was the wrong choice of word. I didn’t really mean “more tractable”. I just meant “it’s kind of obvious what needs to happen (even if it’s very hard to get it to happen)”. Whereas with fundamental instability, it’s unclear whether the instability is actually very overdetermined, or what exactly could nudge the situation into a part of scenario space with stable possibilities.
In a post-catastrophe world, it seems quite plausible to me that rebounding civilizations would fear dangerous technologies and try hard to avoid technology-induced existential catastrophes.
I agree that it’s hard to reason about this stuff, so I’m not super confident in anything. However, my inside view is that this story is plausible if the catastrophe seems like it was basically an accident, but less plausible for nuclear war. Somewhat more plausible is that rebounding civilizations would create a meaningful world government to avoid repeating history.