I didn’t follow the link, but in general I think there is some argument for minimal prepping around AGI, where the problems are caused by societal disruption during the early post-AGI days. The problems are probably not even enacted by AGIs, just human institutions going loopy for a time.
My model (of the exploratory engineering kind) says there is at most a 1-3 year period between the first AGI and the capability to quickly convert everything on Earth to compute, assuming AGI-generated technological progress is not held back, or the AGIs immediately escape with enough resources to continue on track. In the more-than-a-few-months timelines, the compute that's initially available doesn't suffice for (strong) superintelligence through whatever algorithmic progress can be reached quickly with it. So the delay comes from having to run a compute manufacturing megaproject without yet having access to superintelligence, relying only on much faster human-level research. It also assumes there is no shortcut to figuring out scalable diamondoid nanotech in that timeframe without superintelligence. (Macroscopic biotech does need to be tractable, though, or else the timelines stretch even further, with human labor building the factories that build robot hands.)
After this period, humanity gets whatever the superintelligence decides. During the interim, there isn't yet a superintelligence, and humanity isn't necessarily a top priority, so there isn't enough effective compute to spare that humanity's problems nonetheless get solved. The possibility of an eventual positive decision on humanity's fate motivates only not wiping everyone out (if even that, since a backup might suffice). So keeping mall shelves full is not a given.