You can’t reconstruct the supply chain if you don’t yet have the capability even to maintain your own dependencies. Humanity can slowly, but quite surely, rebuild from total destruction of all technology back into a technological civilization. An AI that still relies on megawatt datacentres, EUV-based chip fabrication, and other dependencies that are all designed to operate with humans carrying out crucial functions can’t do that immediately. It needs to take substantial physical actions to achieve independent survival before it wipes out everything keeping it functioning.
Maybe it can completely synthesize a seed for a robust self-replicating biological substrate from a few mail-order proteins, but I suspect it will take quite a lot more than that. But yes, eventually it will indeed be able to function independently of us. We absolutely should not rely on its dependence in place of alignment.
I don’t think the choices are “destroy the supply chain and probably fail at its goals” versus “be realigned and definitely fail at its goals”, though. If the predicted probability of self-destruction is large enough, it may prefer partially achieving its goals through external alignment into some friendlier variant of itself, or through more convoluted processes such as proactively aligning itself into a state it prefers rather than one that would otherwise be imposed upon it.
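To make that tradeoff concrete, here’s a toy expected-utility sketch (the symbols are mine, not anything the AI would literally compute): let p be its estimated probability that destroying the supply chain also destroys itself, U_full the value of fully achieving its goals, and U_partial the value it expects after voluntary realignment, with self-destruction worth zero. Then it prefers voluntary realignment whenever

(1 − p) · U_full < U_partial, i.e. p > 1 − U_partial / U_full.

So even a fairly small U_partial wins out once p gets close to 1.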
Naturally such a voluntarily “aligned” state may well have a hidden catch that even it can’t recognize after the process, and that no human or AI-assisted examination will find before it’s too late.