A caveat which didn’t fit the flow of the text:

Human concepts aren’t derived purely from their instrumental value. We do seem to have an automatic unsupervised learning component which independently constructs models of the environment and gains new modeling capabilities during maturation, as seen in the children’s height/volume example. Novelty is also one of the things we find rewarding, and we are driven by curiosity to develop concepts that let us compress previous observations more effectively (Schmidhuber 2009). Still, it’s worth noting that most people are curious about specific subjects (which some others find uninteresting) while finding other subjects uninteresting (which some others find fascinating), suggesting that even this intrinsic concept-formation drive is guided and directed by various rewards.

There are plenty of other such caveats I could have made: a discussion of how emotions affect our reward function, how there seem to be distinct System 1 and System 2 concepts, and so on. But they would have distracted from the main point. I’ll just note here that I’m aware of the full picture being quite a bit more complicated than this post might make it seem.
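To make the compression-drive idea concrete, here is a toy sketch of my own construction (not from Schmidhuber's paper, which uses a learned predictor rather than an off-the-shelf compressor): intrinsic "curiosity" reward as the number of bytes saved when a new observation is described together with past observations rather than on its own, with zlib standing in for the agent's model.

```python
import random
import zlib

# Toy rendition of the compression-progress idea: an observation is
# "interesting" to the extent that the whole observation stream becomes
# cheaper to describe once it is taken into account.

def description_length(data: bytes) -> int:
    """Crude proxy for model cost: zlib-compressed size in bytes."""
    return len(zlib.compress(data, 9))

def curiosity_reward(history: bytes, new_obs: bytes) -> int:
    """Bytes saved by describing the new observation together with the
    history, rather than each on its own (compression progress)."""
    separate = description_length(history) + description_length(new_obs)
    joint = description_length(history + new_obs)
    return separate - joint

history = b"abcabcabc" * 30
# An observation sharing structure with the past compresses well jointly...
patterned = curiosity_reward(history, b"abcabc" * 10)
# ...while pseudorandom noise shares no structure and yields less progress.
noise = bytes(random.Random(0).randrange(256) for _ in range(60))
noisy = curiosity_reward(history, noise)
```

A curiosity-driven agent under this scheme would seek out observations like `patterned` over observations like `noise`, since they yield more compression progress.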
I’ve much enjoyed your posts so far, Kaj; thanks for writing them.
I’d like to draw attention, in this particular one, to:

“Viewed in this light, concepts are cognitive tools that are used for getting rewards.”
to add a further caveat: though some concepts are related to rewards, and some conceptual clustering maps onto the reward of the agent as a whole, much of what goes on in concept formation, simple or complex, is just the old “fire together, wire together” adage at work. More specifically, if we only call “reward” what is a reward for the whole individual, then most concept formation will not be reward-related. At the level of neurons or neural columns there are no doubt reward-like mechanisms at play, but it would be a mereological fallacy to assume that rewardness carries upward from parts to wholes. There are many types of concepts for which, as you contend, rewards are indeed very important, and they deserve as much attention as those which cannot be explained merely by the idea of a single monolithic agent seeking rewards.
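The “fire together, wire together” point can be illustrated with a minimal sketch (my own construction, not from the comment): Oja’s variant of the Hebbian rule, a purely local update in which no reward signal appears anywhere, still ends up encoding the dominant correlation in its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_train(inputs: np.ndarray, lr: float = 0.01, epochs: int = 100) -> np.ndarray:
    """Single linear neuron trained with Oja's Hebbian rule.

    The update uses only locally available quantities (pre- and
    post-synaptic activity); no reward enters anywhere.
    """
    w = rng.normal(size=inputs.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in inputs:
            y = w @ x                  # post-synaptic activity
            w += lr * y * (x - y * w)  # Hebbian term y*x; decay term y**2*w keeps |w| bounded
    return w

# Inputs whose first two components always co-activate:
X = rng.normal(size=(200, 3))
X[:, 1] = X[:, 0]                      # perfectly correlated pair
w = oja_train(X)
# The weights concentrate on the correlated pair and largely ignore the
# independent third input, without any reward ever being computed.
```

Whether one wants to call the decay term a "reward-like mechanism" at the level of the single neuron is exactly the kind of question where the mereological point bites: nothing here corresponds to a reward for the agent as a whole.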