a combination of humans and advanced AI tools (that themselves are not ASI) together could be effectively an unopposable ASI
Yeah, I’m not unworried about eternal-dystopia scenarios enabled by this sort of stuff. I’d alluded to it some, when mentioning scaled-up LLMs potentially allowing “perfect-surveillance dirt-cheap totalitarianism”.
But it’s not quite an AGI killing everyone; it’s a fairly different threat model, deserving its own analysis.