Thanks for the comment! When I think about it now (8 months later), I have three reasons for continuing to think CCA is broadly right:
1. Cytoarchitectural (quasi-)uniformity. I agree that this doesn’t definitively prove anything by itself, but it’s highly suggestive. If different parts of the cortex were doing systematically very different computations, maybe they would have started out looking similar when the differentiation first arose millions of years ago, but over evolutionary time you would expect them to gradually diverge into superficially-obviously-different endpoints that are better suited to their different functions.
2. Narrowness of the target, sorta. Let’s say there’s a module that takes specific categories of inputs (feedforward, feedback, reward, prediction-error flags) and has certain types of outputs, and it systematically learns to predict the feedforward input and control the outputs according to generative models following this kind of selection criterion (or something like that). This is a very specific and very useful thing. Whatever the reward signal is, this module will construct a theory about what causes that reward signal and make plans to increase it. And this kind of module automatically tiles—you can connect multiple modules and they’ll work together to build more complex composite generative models that integrate more inputs to make better reward predictions and better plans (there’s a toy illustration of this tiling idea in the sketch after this list). I feel like you can’t just shove some other computation into this system and have it work—it’s either part of this coordinated prediction-and-action mechanism, or it’s not (in which case the coordinated prediction-and-action mechanism will learn to predict it and/or control it, just like it does for the motor plant etc.). Anyway, it’s possible that some part of the neocortex is doing a different sort of computation and isn’t part of the prediction-and-action mechanism. But if so, I would just shrug and say “maybe it’s technically part of the neocortex, but when I say ‘neocortex’, I’m using the term loosely and excluding that particular part.” After all, I am not an anatomical purist; I am already including part of the thalamus when I say “neocortex”, for example (I have a footnote in the article apologizing for that). Sorry if this description is a bit incoherent; I need to think about how to articulate it better.
3. Although it’s probably just the Dunning-Kruger effect talking, I do think I at least vaguely understand what the algorithm is doing and how it works, and I feel like I can concretely see how it explains everything about human intelligence, including causality, counterfactuals, hierarchical planning, task-switching, deliberation, analogies, concepts, and so on.
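To make the “tiling” intuition in reason 2 a bit more concrete, here’s a deliberately toy Python sketch. Nothing in it is from the article or a claim about how cortex actually computes; the class name `PredictiveModule`, the linear delta-rule learning, and the reward handling are all made-up placeholders. The only point it illustrates is the interface story: each module takes feedforward input, feedback/context, and reward, learns to predict its own feedforward input, and emits a signal (here, its prediction error) that another module can treat as *its* feedforward input.

```python
import numpy as np


class PredictiveModule:
    """Toy stand-in for the hypothesized uniform module (the names and
    structure here are made up for illustration, not taken from the article).

    Each step it receives a feedforward vector, an optional feedback/context
    vector, and a scalar reward. It keeps a linear model that predicts the
    next feedforward input, and it outputs its prediction error, which a
    second module can take as its own feedforward input -- the "tiling" idea.
    The planning/control half of the story is not modeled at all; the reward
    input is only used to (crudely) scale the learning rate.
    """

    def __init__(self, ff_dim, ctx_dim=0, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Linear "generative model": predicts the next feedforward input
        # from the current feedforward + context vector.
        self.W = rng.normal(scale=0.1, size=(ff_dim, ff_dim + ctx_dim))
        self.lr = lr
        self._prev_input = None
        self._prev_prediction = np.zeros(ff_dim)

    def step(self, feedforward, feedback=None, reward=0.0):
        feedback = np.zeros(0) if feedback is None else feedback
        # Prediction error: how wrong last step's prediction turned out to be.
        error = feedforward - self._prev_prediction
        # Delta-rule update of the predictive model, learning a bit faster
        # when reward is positive (a crude placeholder for reward-relevance).
        if self._prev_input is not None:
            self.W += self.lr * (1.0 + max(reward, 0.0)) * np.outer(error, self._prev_input)
        # Form this step's prediction of the *next* feedforward input.
        x = np.concatenate([feedforward, feedback])
        self._prev_input = x
        self._prev_prediction = self.W @ x
        return error  # output signal, usable as another module's feedforward input


if __name__ == "__main__":
    # "Tiling": module B treats module A's prediction error as its own
    # feedforward input, so the pair forms a deeper composite model.
    rng = np.random.default_rng(1)
    A = PredictiveModule(ff_dim=4)
    B = PredictiveModule(ff_dim=4)
    x = rng.normal(size=4)
    for _ in range(500):
        x = 0.9 * x + 0.1 * rng.normal(size=4)   # slowly drifting input stream
        err_A = A.step(feedforward=x)
        err_B = B.step(feedforward=err_A)
    print("A residual error norm:", float(np.linalg.norm(err_A)))
    print("B residual error norm:", float(np.linalg.norm(err_B)))
```

Again, the real claim is about a much richer learning-and-planning algorithm; this sketch only covers the “predict your feedforward input and pass something useful downstream” part, and leaves out planning and control entirely.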