That is a really good point that there are intermediate scenarios—"thump" sounds pretty plausible to me as well, and the mitigation measures likely to be effective are again different.
I also postulate "splat": one AI/human coalition comes to believe that it is militarily unconquerable, another coalition disagrees, and the resulting military conflict is sufficient to destroy supply chains and drop us into an equilibrium where supply chains as complex as the ones we have can't re-form. Technically you don't need an AI for this one, but if you had an AI tuned to, say, pander to an egotistical dictator without having to deal with silly constraints like "being unwilling to advocate for suicidal policies", I could see that AI making this failure mode a lot more likely.