What could a million perfectly-coordinated, tireless copies of a pretty smart, broadly skilled person running at 100x speed do in a couple years?
I think this is the right analogy to consider.
And in considering this thought experiment, I'm not sure that trying to solve alignment is the only, or even the best, way to reduce risk. This hypothetical seems open to reducing risk by (1) better understanding how to detect these actors when they operate at large scale, and (2) researching resilient plug-pulling strategies.
I think both of those things are worth looking into (for the sake of covering all our bases), but my worry is that by the time the alarm bells go off, it's already too late.
It's a bit like a computer virus. Even after Stuxnet became public knowledge, it couldn't simply be turned off. And unlike Stuxnet, an AI operating in the wild could adapt to ongoing countermeasures.