I think that’s not quite fair. ACT-R has a lot to say about what kinds of processing are happening, as well. Although, for example, it does not have a theory of vision (to my limited understanding anyway), or of how the full motor control stack works, etc. So in that sense I think you are right.
What it does have more to say about is how the working memory associated with each modality works: how you process information in each of those working memories, including important cognitive mechanisms that you might not otherwise think about. In this sense, it’s not just about interconnection, as you said.
So essentially, which types of information get routed for processing to which areas during the performance of some behavioral or cognitive algorithm, and what sort of processing each module performs?
That sounds right to me. It specifies what types of information are processed in each area, and it makes a very explicit statement about exactly what processing each module performs.
So I look at ACT-R as sort of a minimal set of modules, where if I could figure out how to get neurons to implement the calculations ACT-R specifies in those modules (or something close to them), then I’d have a neural system that could do a very wide variety of psychology-experiment-type-tasks. As far as current progress goes, I’d say we have a pretty decent way to get neurons to implement the core Production system, and the Buffers surrounding it, but much less of a clear story for the other modules.
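To make the "Production system plus Buffers" picture concrete, here is a minimal sketch of that control loop in Python. This is only an illustration of the general idea, not ACT-R's actual implementation: the real system adds subsymbolic activation, utility-based conflict resolution, timing, and the other modules discussed above. All names here (Buffer, Production, the goal buffer, the count task) are simplified stand-ins invented for this example.

```python
# Toy sketch of a buffers-plus-productions control loop (not real ACT-R).

class Buffer:
    """Holds at most one chunk (a dict of slot -> value) at a time."""
    def __init__(self):
        self.chunk = None


class Production:
    """An if-then rule: fires when its condition matches the buffers."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # buffers -> bool
        self.action = action        # buffers -> None (modifies buffer contents)


def cycle(buffers, productions, max_steps=10):
    """Repeatedly select and fire the first matching production."""
    for _ in range(max_steps):
        matches = [p for p in productions if p.condition(buffers)]
        if not matches:
            break                   # no production matches: halt
        matches[0].action(buffers)  # conflict resolution is just "first match" here


# Example: a tiny "count up to a target" task driven by the goal buffer.
buffers = {"goal": Buffer(), "retrieval": Buffer()}
buffers["goal"].chunk = {"task": "count", "current": 2, "target": 4}

productions = [
    Production(
        "increment",
        condition=lambda b: (b["goal"].chunk is not None
                             and b["goal"].chunk["current"] < b["goal"].chunk["target"]),
        action=lambda b: b["goal"].chunk.update(
            current=b["goal"].chunk["current"] + 1),
    ),
]

cycle(buffers, productions)
print(buffers["goal"].chunk)  # {'task': 'count', 'current': 4, 'target': 4}
```

The point of the sketch is just the division of labor: the buffers are the small working memories that each module exposes, and the production system's only job is to match against their contents and issue requests back to them on each cycle.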