I definitely agree with this take, because I generally hate the society/we abstractions, but one caveat is that resistance to AI by a lot of people could be very strong, especially because the public is probably primed for apocalyptic narratives.
These tweets are at least somewhat of an argument that AI will be resisted pretty heavily.
https://twitter.com/daniel_271828/status/1696794764136562943
https://twitter.com/daniel_271828/status/1696770364549087310
The public-at-large's opinion also doesn't shift policies at lightning speed, I think, especially if the AI manages to get a few high-powered corporations/politicians on its side (whether via genuine utility, bribery, or blackmail), plus some subsets of the public as well (by benefiting them).
It wouldn't look like "the AI is gathering power, all of humanity freaks out and shuts it down"; at best it would look like "the AI is gathering power, large subsets of humanity freak out and try to shut it down, a smaller subset resists this, and it turns into a massive socio-politico-legislative conflict that drags on for years". And that's already the loss condition: while that conflict is happening, the AI will be doing more AI research, and as a result improving its ability to wage the conflict (in addition to its mundane power-gathering pursuits), while humanity's strategists will at best be as good as they ever were. The outcome of this dynamic seems pre-determined. (Hell, you don't even need to posit this (very rudimentary and obviously possible) version of self-improvement for the argument to go through! All you need is for the AGI to be just a touch better than humanity at strategy.)
The date of AI Takeover is not the day the AI takes over. If an unaligned human-level-ish AGI is allowed to lodge itself in the human economy and do business at scale, it's already curtains.