In the case of AGI / ASI, I think it is likely that removing humanity from the equation will be either a side effect of it pursuing its goals (it will take our resources) or an instrumental goal in itself (for example, to remove risk, or to spend fewer resources on defenses later).
In both cases it will likely find the optimal trade-off between the resources spent eliminating humanity and the effectiveness of the result. This means there may be some survivors, possibly not many, and technologically set back to the stone age at best.
Bunkers likely won’t work. Living with stone tools and without electricity, in a hut deep in a forest far from any city and with no minable resources underfoot, may work for a while. An AGI likely won’t bother hunting down remote camps of one or a few humans showing no signs of technology use.
Of course, this only holds if the AGI doesn’t find a low-resource way to eliminate all humans, no matter where they are. That is possible, and in that case nothing will help; no prepping is possible.
I’m not sure that’s the default, though. For a very specialized approach, like creating nanotechnology to wipe out humans in a synchronized strike, it might well conclude that the time or computational resources needed to develop it through simulation are too great, and not worth the cost compared with options that need fewer resources. Computational resources are not free and costless even for an AGI (it won’t pay in money, but it will do less research and thinking in other areas while working on this, which may delay its plans). It is quite likely to use a less sophisticated but resource-efficient and fast solution that may not kill all humans, but enough.
Edit: I want to add the reason I think this. One might assume that a very fast ASI will quickly develop a perfect way to remove all humans, with no one left (if there is ever a case where that is the most sensible thing to do, whether to remove risk, to claim all needed resources, or for some other instrumental reason). I think this is wrong because even an ASI faces bottlenecks. A sensible, quick plan that relies on advanced technology, such as nanomachinery or engineered proteins, requires research beyond what we humans already have and know. That means it needs more data and observations, and perhaps simulations, to build that knowledge. An ASI might be very fast at reasoning, recall, and thinking, but it will still be limited by data input, the experimental machinery it can access, and the computational power available for very detailed simulations. So it won’t produce such a plan in full detail in an instant by pure thought. It would therefore take into account the time and resources needed to work out the plan’s details and gather the required data, which gives it an incentive to choose a simpler, faster plan that removes most humans over a more complex one that removes all of them. An ASI should be good at optimizing such trade-offs, not over-investing in instrumental goals (as often depicted in fiction).
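The trade-off above can be sketched as a toy optimization. This is a minimal illustrative model, not a claim about real capabilities: the `1/(1 - coverage)` cost curve and the `risk_weight` parameter are made-up assumptions chosen only to show why, when the cost of reaching the last survivors grows steeply while the residual risk shrinks linearly, the optimum lands strictly below 100%.

```python
# Toy model (all parameters are illustrative assumptions): an optimizer
# choosing what fraction of a population to neutralize, when reaching the
# last holdouts costs disproportionately more effort.

def total_cost(coverage, risk_weight=100.0, base_cost=1.0):
    """Effort grows like coverage/(1 - coverage), diverging as coverage -> 1;
    leftover risk is proportional to the surviving fraction."""
    effort = base_cost * coverage / (1.0 - coverage)
    residual_risk = risk_weight * (1.0 - coverage)
    return effort + residual_risk

# Scan coverage levels from 0.1% to 99.9% and pick the cheapest.
candidates = [i / 1000 for i in range(1, 1000)]
best = min(candidates, key=total_cost)
print(f"optimal coverage: {best:.3f}")  # lands below 1.0 (here, 0.900)
```

With these numbers the minimum is at 90% coverage: pushing past it, the marginal effort of finding the remaining survivors outgrows the marginal risk they pose. Raising `risk_weight` pushes the optimum closer to 1.0 but never reaches it, which is the point the post is making.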