Interesting, and good job publishing rather than polishing!
I really like the terminology of competence vs. intelligence.
I don’t think you want to use the term intelligence for your level 3. I think I see why you want to, but intelligence is currently an umbrella term for any cognitive capacity, so using it for one particular cognitive capacity invokes different intuitions.
In either case, I think you should draw the analogy between Level 3 and problem-solving more closely, at least if you think Level 3 exists.
Suppose I’m a hunter-gatherer, and there is fruit high up in a tree. This tree has thorns, so my usual strategy of climbing it and shaking branches won’t work. If I figure out, through whatever process of association, simulation, and trial and error, that I can get a long branch from another tree and then knock the fruit down, I can incorporate that into my level 2 cognition, and from there into level 1. This type of problem-solving is also probably the single cognitive ability most often referred to as intelligence, which would justify your use of the term for that level. If I’m right that you’d agree with all of that, spelling out the connection could make the terminology more intuitive to the reader.
In any case, I’m glad to see you thinking about cognition in relation to alignment. It’s obviously crucial; I’m unclear if most people just aren’t thinking about it, or if it’s all considered too infohazardous.
Thanks!
In your example, I think it is possible that the hunter-gatherer solves the problem through pure level 2 capability, even if they have never encountered this specific problem before. Using causal models compositionally to represent the current scene, and computing over that representation to output a novel solution, does not actually require that the human update their causal models about the world.
I am trying to distinguish agents with this sort of compositional world model from ones that just have a bunch of cached thoughts or habits (which would correspond to level 1), and I think this is perhaps a common case where people would attribute intelligence to a system that imo does not demonstrate level 3 capability.
Of course, this would require that the human in our example already has some sufficiently decontextualised notion of knocking loose objects down, or that their concepts are generally suited to this sort of compositional reasoning. It might be worth elaborating on level 2 to introduce some measure of modeling flexibility/compositionality.
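To make the distinction concrete, here is a toy sketch in Python (entirely my own illustration; all rule, fact, and function names are hypothetical). It contrasts a level 1 agent that only replays cached habits with a level 2 agent that composes a fixed library of decontextualised causal rules into a novel plan, without updating any of its models:

```python
# Toy sketch (hypothetical names throughout): level 1 as cached
# stimulus-response pairs vs. level 2 as compositional planning over a
# fixed library of causal primitives. Neither agent updates its models
# while solving, so neither demonstrates level 3 capability.

# Level 1: a lookup table of cached habits; fails on any unseen situation.
CACHED_HABITS = {
    ("fruit_in_tree", "climbable"): "climb_and_shake",
}

def level1_act(situation):
    return CACHED_HABITS.get(situation)  # None for novel situations

# Level 2: decontextualised causal rules, composed at plan time.
# No rule is added or modified while the problem is being solved.
CAUSAL_RULES = {
    "fetch_long_branch": {"pre": set(),               "post": {"has_long_object"}},
    "knock_loose":       {"pre": {"has_long_object"}, "post": {"fruit_on_ground"}},
    "pick_up":           {"pre": {"fruit_on_ground"}, "post": {"has_fruit"}},
}

def level2_plan(state, goal, max_depth=5):
    """Breadth-first search over compositions of the fixed causal rules."""
    frontier = [(frozenset(state), [])]
    for _ in range(max_depth):
        next_frontier = []
        for facts, plan in frontier:
            if goal <= facts:              # all goal facts achieved
                return plan
            for name, rule in CAUSAL_RULES.items():
                if rule["pre"] <= facts:   # rule's preconditions hold
                    next_frontier.append((facts | rule["post"], plan + [name]))
        frontier = next_frontier
    return None

# The thorny tree is a novel situation: the cached habit is absent,
# yet composing existing rules still yields a working plan.
print(level1_act(("fruit_in_tree", "thorny")))  # -> None
print(level2_plan(set(), {"has_fruit"}))
# -> ['fetch_long_branch', 'knock_loose', 'pick_up']
```

The point of the sketch is that the level 2 agent produces a solution to a situation it has never cached, purely by recombining representations it already had; the "decontextualised notion of knocking loose objects down" corresponds to the knock_loose rule being stated independently of any particular tree.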
I feel like this could be explained better, so I am curious if you think I am being clear.
You are probably right that I should avoid the term intelligence for the time being, but I haven’t quite found an alternative term that resonates. Anyways, thanks for engaging!
Edit: I’ll soon make some changes to the post to better account for this capacity of level 2 algorithms to solve novel problems even when no new learning occurs. It’s an important aspect of why I am saying that level 3 capabilities are only indirectly related to competence.