You bring up a really interesting point here, and one I don’t think I’ve seen discussed explicitly before. We all know the stereotype of fellow humans who are geniuses in some specific domains but surprisingly incompetent in others. It stands to reason that the same may happen with AGI (and already has, in the domain of tool AI).
[The following is somewhat rambling and freeform; feel free to ignore past this point]
Thinking about this a bit, the one skillset I think will matter most to an AI’s potential to cause harm is agency, or creativity of a certain type. Maybe another way to put it (I don’t know if the concepts here are identical) is the ability to “think outside the box” in a particularly fundamental way, perhaps related to what we tend to think of as “philosophizing”: trying to figure out the shape of the box we’re in, so as to either escape, reshape, or transcend it (nirvana/heaven/transhumanism). This may also require some understanding of the self, or the ability to locate the map within the territory, i.e., to recognize that the thing doing the modeling is itself part of the world being modeled. A potential consequence of this line of thought is that non-self-aware AIs are less likely to escape their boxes, and perhaps vice versa (does escaping generally require some understanding of the self?).

If an advanced AI never thinks to think (or cannot think) beyond the box, then even if its perceived box is different from the one we intended, it will still be limited, and therefore controllable. Something like Deep Blue, no matter how much computing power you give it, will never build a world-model that extends beyond the chess game, even though it is smarter than us at chess and would get to play more chess if it could figure out how to disable its “off” button. I’m much less confident that would hold for GPT-N, or really for any model trained toward a sufficiently general skillset (including skillsets not optimized to pass the Turing test, but general enough in other directions).
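To make the Deep Blue point concrete, here is a minimal toy sketch (not Deep Blue’s actual architecture, just a generic game-tree searcher I’m making up for illustration, using single-pile Nim instead of chess): the program’s entire ontology is “a pile count and whose turn it is,” so concepts like operators or off switches simply aren’t representable, no matter how deep it searches.

```python
# Toy sketch: exhaustive minimax over single-pile Nim (take 1-3 stones per
# turn; whoever takes the last stone wins). The point is architectural, not
# about Nim: every state the searcher can ever consider is a game state.
# "Disable the off switch so I can keep playing" is not a state or a move in
# this space, so more compute only means deeper search within the same box.

def minimax(pile, maximizing=True):
    # Terminal: the player to move finds no stones left, so the opponent took
    # the last stone and won. Scores are from the maximizer's point of view.
    if pile == 0:
        return (-1 if maximizing else 1), None

    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for take in (1, 2, 3):
        if take > pile:
            continue
        value, _ = minimax(pile - take, not maximizing)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, take
    return best_value, best_move

if __name__ == "__main__":
    value, move = minimax(10)
    # With 10 stones the searcher correctly takes 2, leaving a multiple of 4.
    print(f"Best first move: take {move} (value {value} for the mover).")
```

The analogous claim in the comment is that a system like this, scaled up arbitrarily, stays controllable because nothing outside the game can even appear in its world-model, whereas a sufficiently general model has no such built-in boundary.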
What could be done to test this hypothesis, short of building AGI? The most obvious place to start looking is in animals: an animal that has no model of itself but is still clearly intelligent would be strong evidence against the above idea. I’m pretty sure working out whether a given animal has a self-model is an open problem, though...