Thanks!
In your example, I think it is possible that the hunter-gatherer solves the problem through pure level 2 capability, even if they never encountered this specific problem before. Using causal models compositionally to represent the current scene, and computing over that representation to output a novel solution, does not actually require that the human update their causal models of the world.
I am trying to distinguish agents with this sort of compositional world model from ones that just have a bunch of cached thoughts or habits (which would correspond to level 1), and I think this is perhaps a common case where people would attribute intelligence to a system that imo does not demonstrate level 3 capability.
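To make this concrete, here is a rough toy sketch of what I mean (purely illustrative; all the names and primitives are made up): a level 1 agent just replays cached responses and fails outside its cache, while a level 2 agent chains decontextualised causal primitives into a plan for a situation it has never encountered, without updating any of them.

```python
# Toy illustration only: contrasting cached habits (level 1) with
# compositional use of causal primitives (level 2). All situations,
# primitives, and names here are hypothetical.

# Level 1: a fixed lookup of situation -> habitual response.
CACHED_HABITS = {
    "fruit on low branch": "pick by hand",
    "fish in shallow water": "spear it",
}

def level1_agent(situation: str) -> str | None:
    # Fails on anything outside its cache; no recombination happens.
    return CACHED_HABITS.get(situation)

# Level 2: decontextualised causal primitives (precondition -> effect)
# that can be chained, with no reference to any particular past episode.
CAUSAL_PRIMITIVES = [
    ("have long stick", "can reach high object"),
    ("can reach high object", "object is knocked loose"),
    ("object is knocked loose", "object falls within reach"),
]

def level2_agent(start: str, goal: str) -> list[str] | None:
    # Naive forward chaining: compose existing causal knowledge into a
    # novel plan, with no model updating (i.e. no level 3 learning) at all.
    plan, state = [], start
    for _ in range(len(CAUSAL_PRIMITIVES)):
        effect = next((e for c, e in CAUSAL_PRIMITIVES if c == state), None)
        if effect is None:
            return None
        plan.append(effect)
        state = effect
        if state == goal:
            return plan
    return None

print(level1_agent("fruit on high branch"))        # None: not in the cache
print(level2_agent("have long stick",
                   "object falls within reach"))   # a composed, novel plan
```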
Of course, this would require that the human in our example already has some sufficiently decontextualised notion of knocking loose objects down, or that their concepts are generally suited to this sort of compositional reasoning. It might be worth elaborating on level 2 to introduce some measure of modeling flexibility/compositionality.
I feel like this could be explained better, so I am curious if you think I am being clear.
You are probably right that I should avoid the term intelligence for the time being, but I haven’t quite found an alternative term that resonates. Anyways, thanks for engaging!
Edit: I’ll soon make some changes to the post to better account for the fact that level 2 algorithms can potentially solve novel problems even when no new learning occurs. It’s an important part of why I am saying that level 3 capabilities are only indirectly related to competence.