Orthogonality of intelligence and agency. I can envision a machine with high intelligence and zero agency; I haven’t yet seen a convincing argument for why the two must necessarily go together (the arguments probably exist, I’m simply ignorant of them!).
The ‘usual’ argument, as I understand it, is as follows. Note I don’t necessarily agree with this.
1. An intelligence cannot design an arbitrarily complex system.
2. An intelligence can design a system that is somewhat more capable than its own computational substrate.
3. As such, the only way for a highly intelligent AI to exist is for it to have been designed by a slightly less intelligent AI. This recurses down until eventually you get to system 0, designed by a human (or other natural intelligence); a toy version of this chain is sketched below.
4. The computational substrate of a highly intelligent AI is complex enough that we cannot directly verify that it has no hidden functionality; we can only check it by querying a somewhat less complex AI.
5. Alignment issues mean that you can’t trust an AI.
So it’s not so much “they must go together” as it’s “you can’t guarantee they don’t go together”.
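To make the recursion in step 3 concrete, here is a toy calculation (my own addition, not part of the argument as I received it). Assume, purely hypothetically, that each designer can build a successor at most some bounded factor more capable than itself; then the chain from a human-designed system 0 up to a highly capable AI is long, and per steps 4 and 5 each link can only be vetted by the slightly weaker link beneath it. All the numbers are made up for illustration.

    import math

    HUMAN_CAPABILITY = 1.0  # capability of the human-designed "system 0" (arbitrary units)
    GAIN = 1.1              # assumed bound: each generation at most 10% more capable
    TARGET = 100.0          # capability of the "highly intelligent" AI at the top

    # Smallest n such that HUMAN_CAPABILITY * GAIN**n >= TARGET
    generations = math.ceil(math.log(TARGET / HUMAN_CAPABILITY, GAIN))
    print(generations)  # -> 49 design generations, each verifiable only by the one below

Under these made-up numbers the chain has dozens of links, and any guarantee about the top system is only as strong as the weakest verification step along the way.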
I agree with this; see my comment below.