Watching my kitten learn and play has been interesting from a “how do animals compare to current AIs?” perspective. At a high level, I think I’ve updated slightly towards RL agents being further along the evolutionary progress ladder than I’d previously thought.
I’ve seen critiques cite RL agents’ inability to do long-term planning as evidence that they’re not as smart as animals, and while I think that’s probably accurate, I have noticed that my kitten takes a surprisingly long time to learn even 2-step plans. For example, when it plays with a toy on a string, I’ll often put the toy on a chair that it only knows how to reach by jumping onto another chair first. It took many attempts before it learned to jump onto the other chair and then climb to where I’d put the toy, even though it had done exactly that many times while exploring. And even then, it seems to be at risk of “catastrophic forgetting”: when we play the same way later, it won’t remember to do the 2-step move. Relatedly, its learning is fairly narrow even for basic skills; e.g., I have 4 identical chairs around a table, but it will be afraid of jumping onto one even though it’s very comfortable jumping onto another.
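To make the 2-step issue concrete, here’s a minimal sketch of the kind of problem I have in mind, framed as tabular Q-learning on a sparse-reward chain. Everything in it (state names, rewards, hyperparameters) is made up for illustration; the point is just that credit for the final reward propagates backwards one state per visit, which is roughly why the first move of a multi-step plan is the last thing to be learned.

```python
import random

# Toy model of the kitten's 2-step problem: the toy is only reachable
# via an intermediate chair, and reward arrives only at the very end.
# All names and numbers here are hypothetical.
STATES = ["floor", "chair", "toy_chair"]
ACTIONS = ["jump_chair", "jump_toy_chair", "wander"]

def step(state, action):
    # The toy chair can only be reached from the intermediate chair.
    if state == "floor" and action == "jump_chair":
        return "chair", 0.0
    if state == "chair" and action == "jump_toy_chair":
        return "toy_chair", 1.0  # sparse reward at the end of the 2-step plan
    return "floor", 0.0  # anything else lands you back on the floor

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = "floor"
    for _ in range(2):  # two decisions per episode: one per step of the plan
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Standard Q-learning update; the final reward has to propagate
        # backwards one state per visit, so the value of the *first* jump
        # is only learned after the value of the second is in place.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy first move from the floor should be the
# jump onto the intermediate chair.
print(max(ACTIONS, key=lambda act: Q[("floor", act)]))
```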
Now, part of this may be that cats are known for being biased towards trial-and-error compared to other similarly-sized mammals like dogs (see Gwern’s write-up for more on this), and adult cats may be better than kittens at “long-term” planning. However, a lot of critiques of RL, such as Josh Tenenbaum’s, argue that our AIs don’t even compare to young children in terms of their abilities. This is undoubtedly true with respect to the ability to actually move around in the world, grasp objects, etc., but seems less true than I’d previously thought with respect to “higher-level” cognitive abilities such as planning. To make this concrete: I’m skeptical that my kitten could currently succeed at a real-life analogue of Montezuma’s Revenge.
Another thing I’ve observed relates to recent work by Konrad Kording, Adam Marblestone, and Greg Wayne on integrating deep learning and neuroscience. They postulate that, due to the genomic bottleneck, it’s plausible that brains leverage heterogeneous, evolving cost functions to do semi-supervised learning throughout development. While much more work needs to be done investigating this hypothesis (as the authors acknowledge), it does ring true with some of my observations of my kitten. In particular, I’ve noticed that it recently became much more interested in climbing and jumping onto objects on its own, whereas previously I couldn’t even get it to do so using treats. This seems like a plausible example of a “switch” being flipped that increased the reward for being high up (or something like that; this is obviously quite hand-wavy).
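A toy sketch of what I mean by a “switch”, assuming the evolving-cost-function framing: the reward being optimized is a weighted sum of innate drives, and development changes one of the weights. The day-90 cutoff and all the weights below are invented numbers for illustration, not anything from the paper.

```python
# Hypothetical sketch of an "evolving cost function": the reward an agent
# optimizes is a weighted sum of innate drives, and development flips the
# weight on one of them (here, a drive for being high up).
def reward(obs, age_days):
    food = obs["food"]      # e.g., treats consumed this step
    height = obs["height"]  # how high up the agent is, in meters
    # Developmental "switch": height is barely rewarding early on,
    # then acquires a large intrinsic weight.
    height_weight = 0.05 if age_days < 90 else 1.0
    return 1.0 * food + height_weight * height

# The same observation yields very different reward across the switch:
obs = {"food": 0.0, "height": 1.2}
print(reward(obs, age_days=60))   # 0.06 -> climbing is nearly worthless
print(reward(obs, age_days=120))  # 1.2  -> climbing is rewarding in itself
```

Under this framing, nothing about the kitten’s world model needs to change for its behaviour to shift; the cost function it is optimizing simply moved underneath it.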
I’m trying to come up with predictions I can make about the next few months based on these two initial observations, but I don’t have any great ideas yet.