Having a one-dimensional IQ model is really limiting here, in my opinion. Let me propose a three-dimensional model:
Technical IQ: How sophisticated is the AI in its ability to design novel technologies (nanotech or an extinction plague) or hack into semi-secured systems?
Competence: How competent is the AI in its ability to make and execute plans in complex domains, and adapt when things don’t go to plan?
Hubris: How competent does the AI perceive itself to be, relative to its actual abilities?
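To make the distinction between the three axes concrete, here is a minimal, purely illustrative sketch in Python. The class and field names are my own invention, and operationalizing hubris as perceived minus actual competence is just one way to cash out the definition above, not a claim that any of this is measurable today.

```python
from dataclasses import dataclass

@dataclass
class AIProfile:
    """Hypothetical sketch of the three-dimensional model described above."""
    technical_iq: float           # ability to design novel tech or hack semi-secured systems
    competence: float             # actual ability to plan, execute, and adapt in complex domains
    perceived_competence: float   # the system's own estimate of its competence

    @property
    def hubris(self) -> float:
        # Hubris as the gap between how competent the system thinks it is
        # and how competent it actually is (positive = overconfident).
        return self.perceived_competence - self.competence
```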
Since it is impossible to estimate your own competence when you’re a bright young model fresh off the GPU, it seems likely that we could have highly incompetent, hubristic systems that try and fail to take over the world. Hubris would presumably not be due to ego, as in humans, but a misaligned AI might decide that it must act ‘now or never’ and just hope its competence is sufficient.
It's also possible to imagine a system with effectively zero technical IQ that is able to take over the world using only existing technology and extreme competence. For this to be possible, I think we need more automation in the environment; a fully automated factory producing armed drones would be sufficient.
Right, thinking of intelligence as one-dimensional is quite limiting. I wonder if there are accepted dimensions of general intelligence, and ways to measure them, that could be applied to AI.