What is the connection between the concepts of intelligence and optimization?
I see that optimization implies intelligence (optimizing a sufficiently hard task sufficiently well requires sufficient intelligence). But it feels like the case for existential risk from superintelligence depends on the idea that intelligence is optimization, or implies optimization, or something like that. (If I remember correctly, sometimes people suggest creating “non-agentic AI”, or “AI with no goals/utility”, and EY says that they are trying to invent non-wet water, or something like that?)
It makes sense if we describe intelligence as a general problem-solving ability. But intuitively, intelligence is also about making good models of the world, which sounds like it could be done in a non-agentic / non-optimizing way. One example that throws me off is Solomonoff induction, which feels like a superintelligence, and indeed contains good models of the world, but doesn't seem to be pushing toward any specific state of the world.
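(For concreteness, here's my rough understanding, in the standard notation: Solomonoff induction just assigns a prior to every observation sequence by weighting the programs that could have produced it,

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)},$$

where $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$, and the sum runs over programs whose output starts with $x$. There is no action variable anywhere in that definition; it only predicts.)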
I know there’s the concept of AIXI, basically an agent armed with Solomonoff induction as its epistemology, but it feels like agency is added separately. Like, there’s the intelligence part (Solomonoff induction) and the agency part, and they are clearly different, rather than agency automatically popping out because the system is superintelligent.
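(Writing out my rough recollection of Hutter's AIXI definition makes the split explicit: at step $k$, with horizon $m$, AIXI picks

$$a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.$$

The inner sum over programs $q$ is the Solomonoff-style epistemics, and the outer expectimax over future actions is the agency part, bolted on as a separate piece.)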
The idea is that agentic AIs are probably, in general, more effective at getting things done: https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ