The opinion of the public at large also doesn’t shift policies at lightning speed, I think, especially if the AI manages to get a few high-powered corporations/politicians on its side (whether via genuine utility, bribery, or blackmail), plus some subsets of the public as well (by benefiting them).
It wouldn’t look like “the AI is gathering power, all of humanity freaks out and shuts it down”. At best, it would look like “the AI is gathering power, large subsets of humanity freak out and try to shut it down, but a smaller subset resists this, and it turns into a massive socio-politico-legislative conflict that drags on for years”. And that’s already the loss condition: while that conflict is happening, the AI will be doing more AI research, and as a result, improving its ability to wage this conflict (in addition to its mundane power-gathering pursuits), while humanity’s strategists would at best be as good as they ever were. The outcome of this dynamic seems predetermined. (Hell, you don’t even need to posit this (very rudimentary and obviously possible) version of self-improvement for this argument to go through! All you need is that the AGI be just a touch better than humanity at strategy.)
The date of AI Takeover is not the day the AI takes over. If an unaligned human-level-ish AGI is allowed to lodge itself into the human economy and do business at scale, it’s already curtains.