Contra One Critical Try: AIs are all cursed
I don’t feel like making this a whole blog post, but my biggest source of optimism about why we won’t need to one-shot an aligned superintelligence is that anyone who has trained AI models knows that AIs are unbelievably cursed. What do I mean by this? I mean that even the first quasi-superintelligent AI we get will have so many problems and so many exploits that taking over the world will simply not be possible.

Take a “superintelligence” that only had to beat humans at the very constrained game of Go, which is far simpler than the real world. Everyone talked about how such systems were unbeatable by humans, until some humans used a much “dumber” AI to find glaring holes in Leela Zero’s strategy. I expect that, in the far more complex real world, a superintelligence will have even more holes and even more exploits: a kind of “Swiss cheese superintelligence”. You can say “but that’s not REAL superintelligence”, and I don’t care, and the AIs won’t care. But it’s likely the thing we’ll get first.

Patching all of those holes, and finding ways to make such an ASI sufficiently not cursed, will also probably mean a better understanding of how to stop it from wanting to kill us, if it wanted to kill us in the first place. I think we can get AIs that are sufficiently powerful in a lot of human domains, and can probably even self-improve, and are still cursed, the same way we have AIs with natural language understanding, something once thought to be a core component of human intelligence, that are still cursed. A cursed ASI is a danger for exploitation, but it’s also an opportunity.
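The kind of hole I mean can be shown with a toy numeric sketch of the classic FGSM-style adversarial attack (Goodfellow et al.): a small, targeted perturbation flips a trained classifier’s decision. Everything here is invented for illustration — a made-up logistic-regression model with made-up weights — and the real attacks on Go engines are far more involved, but the underlying failure mode is the same.

```python
import numpy as np

# Hypothetical trained logistic-regression model (weights invented for
# this sketch).
w = np.array([1.0, -2.0, 3.0, -1.5, 2.5])
b = 0.1

def predict(x):
    """Probability of class 1 under the toy model."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.2, 0.1, 0.4, -0.3])  # a clean input, class 0

# For a linear model the gradient of the logit w.r.t. the input is just w,
# so nudging every feature by eps in the direction sign(w) adds
# eps * sum(|w|) to the logit -- the FGSM step.
eps = 0.1
x_adv = x + eps * np.sign(w)

print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → False True
```

With a perturbation of only 0.1 per feature, the predicted class flips: the model’s decision boundary sits close enough to ordinary inputs that a cheap gradient step walks right across it. Deep networks are not linear, but empirically they fail in much the same way, which is why I expect “holes” to survive well past the point where a system looks superhuman on its normal inputs.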
See also: lack of adversarial robustness is a weapon we can use against AIs, and catching AIs red-handed.
Humans are infinitely cursed (see “cognitive biases”, or your creationist neighbour), yet that doesn’t change the fact that humans rule the planet.