“You keep speaking of ‘good’ abstractions as if this were a property of the categories themselves, rather than a ranking in your preference ordering relative to some decision task that makes use of the categories.”
Yes, I believe categories of things do exist in the world in some sense, due to structure that exists in the world. I’ve seen thousands of things that were referred to as “smiley faces”, and so there is an abstraction for this category of things in my brain. You have done likewise. While we can agree about many things being smiley faces, in borderline cases, such as the half-burnt-off face, we might disagree. Something like “solid objects” was an abstraction I formed before I even knew what those words referred to. It’s just part of the structure present in my surroundings.
When I say that pulling this structure out of the environment in certain ways is “good”, I mean that these abstractions allow the agent to efficiently process information about its surroundings, and this helps it to achieve a wide range of goals (i.e. intelligence as per my formal definition). That’s not to say that I think this process is entirely goal driven (though it clearly is to a significant degree, e.g. via attention). In other words, an agent with general intelligence should identify significant regularities in its environment even if these don’t appear to have any obvious utility at the time: if something about its goals or environment changes, this already constructed knowledge about the structure of the environment could suddenly become very useful.
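To make the idea of extracting category structure without a specific goal a little more concrete, here is a minimal sketch using a toy 1D k-means clustering (my illustrative stand-in, not anything proposed above): the algorithm is given unlabeled points and no downstream task, yet it recovers the two "categories" that are simply present in the data's structure, and those cluster centers could later be reused for whatever goal comes along.

```python
def kmeans_1d(points, k, iters=20):
    """Toy k-means on 1D data: find k 'category' centers from unlabeled points."""
    # Initialize centers at evenly spaced positions within the sorted data.
    pts = sorted(points)
    centers = [pts[(len(pts) * i) // k + len(pts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[nearest].append(p)
        # Update step: move each center to the mean of its assigned group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Unlabeled data with two obvious clusters: points near 0 and points near 10.
data = [0.1, -0.2, 0.3, 0.0, 9.8, 10.1, 10.3, 9.9]
centers = kmeans_1d(data, k=2)
print(centers)  # two centers, one near 0 and one near 10
```

The point of the sketch is only that the two clusters fall out of the data itself, with no notion of what the clusters will be used for; any goal that later needs to distinguish "things near 0" from "things near 10" can reuse the already constructed centers.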