I think I remember hearing about this from you in the past and looking into it some.
I looked into it again just now and hit a sort of “satiety point” (which I hereby summarize and offer as a comment) when I boiled the idea down to “ACT-R is essentially a programming language with architectural inclinations which make it intuitively easy to see 1:1 connections between parts of the programs and parts of neurophysiology, such that diagrams of brain wiring, and diagrams of ACT-R programs, are easy for scientists to perceptually conflate and make analogies between… then also ACT-R more broadly is the high-quality conserved work product from such a working milieu that survives various forms of quality assurance”.
Pictures helped! Without them I think I wouldn’t have felt like I understood the gist of it.
This image is a very general version that is offered as an example of how one is likely to use the programming language for some task, I think?
Then you might ask… ok… what does it look like after people have been working on it for a long time? So then this image comes from 2004 research.
My reaction to this latter thing is that I recognize lots of words, and the “Intentional module” being “not identified” jumps out at me and causes me to instantly propose things.
But then, because I imagine that the ACT-R experts presumably are working under self-imposed standards of rigor, I imagine they could object to my proposals with rigorous explanations.
If I said something like “Maybe humans don’t actually have a rigorously strong form of Intentionality in the ways we naively expect, perhaps because we sometimes apply the intentional stance to humans too casually? Like maybe instead we ‘merely’ have imagined goal content hanging out in parts of our brain, that we sometimes flail about and try to generate imaginary motor plans that cause the goal… so you could try to tie the Imaginal, Goal, Retrieval, and ‘Declarative/Frontal’ parts together until you can see how that is the source of what are often called revealed preferences?”
Then they might object “Yeah, that’s an obvious idea, but we tried it, and then looked more carefully and noticed that the ACC doesn’t actually neuro-anatomically link to the VLPFC in the way that would be required to really make it a plausible theory of humans”… or whatever, I have no idea what they would really say because I don’t have all of the parts of the human brain and their connections memorized, and maybe neuroanatomy wouldn’t even be the basis of an ACT-R expert’s objection? Maybe it would be some other objection.
...
After thinking about it for a bit, the coolest thing I could think of doing with ACT-R was applying it to the OpenWorm project somehow, to see about getting a higher level model of worms that relates cleanly to the living behavior of actual worms, and their typical reaction times, and so on.
Then the ACT-R model of a worm could perhaps be used (swiftly! (in software!)) to rule out various operational modes of a higher resolution simulation of a less platonic worm model that has technical challenges when “tuning hyperparameters” related to many fiddly little cellular biology questions?
As someone who can maybe call themselves an ACT-R expert, I think the main thing I’d say about the intentional module being “not identified” is that we don’t have any fMRI data showing activity in any particular part of the brain correlated with the use of the intentional module in various models. For all of the other parts that have brain areas identified, there’s pretty decent data showing that correlation with activity in particular brain areas. And also, for each of those other areas there are pretty good arguments that those brain areas have something to do with tasks that involve those modules (brain damage studies, usually).
It’s worth noting that there’s no particular logical reason why there would have to be a direct correlation between modules in ACT-R and brain areas. ACT-R was developed based on looking at human behaviour and separating things out into behaviourally distinct components. There’s no particular reason that separating things out this way must map directly onto physically distinct components. (After all, the web browser and the word processor on a computer are behaviourally distinct, but not physically distinct.) But it’s been really neat that in the last 20 years a surprising number of these modules that have been around in various forms since the ’70s have turned out to map onto physically distinct brain areas.
The idea of the physical brain turning out to be similar to ACT-R after the code had been written based on high level timing data and so on… seems like strong support to me. Nice! Real science! Predicting stuff in advance by accident! <3
My memory from exploring this in the past is that I ran into some research with “math problem solving behavior” with human millisecond timing for answering various math questions that might use different methods… Googling now, this Tenison et al ACT-R arithmetic paper might be similar, or related?
With you being an expert, I was going to ask if you knew of any cool problems other than basic arithmetic that might have been explored like the Trolley Problem or behavioral economics or something…
(Then I realized that after I had formulated the idea in specific keywords I had Google and could just search, and… yup… Trolley Problem in ACT-R occurs in a 2019 Masters Thesis by Thomas Steven Highstead that also has… hahahaha, omg! There’s a couple pages here reviewing ACT-R work on Asimov’s Three Laws!?!)
Maybe a human level question is more like: “As someone familiar with the field, what is the coolest thing you know of that ACT-R has been used for?” :-)
Yes, that Tenison paper is a great example of arithmetic modelling in ACT-R, and especially connecting it to the modern fMRI approach for validation! For an example of the other sorts of math modelling that’s more psychology-experiment-based, this paper gives some of the low-level detail about how such a model would work, and maps it onto human errors:
- “Toward a Dynamic Model of Early Algebra Acquisition” https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.53.5754&rep=rep1&type=pdf
(that work was expanded on a few times, and led to things like “Instructional experiments with ACT-R “SimStudents”” http://act-r.psy.cmu.edu/?post_type=publications&p=13890 where they made a bunch of simulated students and ran them through different teaching regimes)
As for other cool tasks, the stuff about playing some simple video games is pretty compelling to me, especially in as much as it talks about what sort of learning is necessary for the precise timing that develops. http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2019/03/paper46a.pdf Of course, this is not as good in terms of getting a high score as modern deep learning game-playing approaches, but it is very good in terms of matching human performance and learning trajectories. Another model I find rather cool is a model of driving a car, which then got combined with a model of sleep deprivation to generate a model of sleep-deprived driving: http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/9822011-gunzelmann_moore_salvucci_gluck.pdf
One other very cool application, I think, is the “SlimStampen” flashcard learning tool developed out of Hedderik van Rijn’s lab at the University of Groningen, in the Netherlands: http://rugsofteng.github.io/Team-5/ The basic idea is to optimize learning facts from flashcards by presenting a flashcard fact just before the ACT-R declarative memory model predicts that the person is going to forget that fact. This seems to improve learning considerably http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/867paper200.pdf and seems to be pretty reliable https://onlinelibrary.wiley.com/doi/epdf/10.1111/tops.12183
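The core of that scheduling idea can be sketched with ACT-R’s standard base-level learning and retrieval-probability equations. This is a minimal illustration, not the SlimStampen implementation: the decay rate d = 0.5 is the conventional ACT-R default, but the threshold and noise values here are just placeholder choices, and the card-picking heuristic is my own simplification.

```python
import math

def activation(presentation_times, now, d=0.5):
    # ACT-R base-level activation: A = ln( sum_j (now - t_j)^(-d) ),
    # summed over past presentations t_j; d is the decay rate.
    return math.log(sum((now - t) ** -d for t in presentation_times if t < now))

def p_recall(a, tau=-0.8, s=0.25):
    # Logistic mapping from activation to predicted recall probability,
    # with retrieval threshold tau and noise parameter s (values illustrative).
    return 1.0 / (1.0 + math.exp((tau - a) / s))

def next_card(cards, now):
    # Simplified scheduling heuristic: show the card whose predicted
    # recall probability is lowest, i.e. the one closest to being forgotten.
    due = {name: p_recall(activation(times, now)) for name, times in cards.items()}
    return min(due, key=due.get)
```

For example, a card last seen long ago has lower activation, hence lower predicted recall, and gets scheduled first; rehearsing a card appends a new presentation time, boosting its activation.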
The flashcard and curriculum experiments seem really awesome in terms of potential for applications. It feels like the beginnings of the kind of software technology that would exist in a science fiction novel where one of the characters goes into a “learning pod” built by a high tech race, and pops out a couple days later knowing how to “fly their spaceship” or whatever. Generic yet plausible plot-hole-solving super powers! <3
As for mapping ACT-R onto OpenWorm, unfortunately ACT-R’s at a much much higher level than that. It’s really meant for modelling humans—I seem to remember a few attempts to model tasks being performed by other primates by doing things like not including the Goal Buffer, but I don’t think that work went very far, and didn’t map well to simpler animals. :(
I wonder if extremely well trained dogs might work?
Chaser seems likely to have learned nouns, names, verbs… with toy names learned on one trial starting at roughly 5 months of age (albeit with a name forgetting curve so additional later exposures were needed for retention).
Having studied her training process, it seems like they taught her the concept of nouns very thoroughly.
Showing “here are N frisbees; after ‘take frisbee’ each one of them earns a reward” demonstrated very thoroughly the idea of a noun referring to more than one thing.
Then maybe “half frisbees, half balls” so that it was clear that “some things are non-frisbees and get no reward”.
In demos of names and verbs, after the training, you can watch her looking at things and thinking. Maybe the looking directions and the thinking times could be modeled?
I think that sort of task might be modellable with ACT-R—the hardest part might be getting or gathering the animal data to compare to! Most of the time ACT-R models are validated by comparing to human data gathered by taking a room full of undergraduates and making them do some task 100 times each. It’s a bit trickier to do that with animals. But that does seem like something that would be interesting research for someone to do!
This lines up fairly well with how I’ve seen psychology people geek out over ACT-R. That is: I had a psychology professor who was enamored with the ability to line up programming stuff with neuroanatomy. (She didn’t use it in class or anything, she just talked about it like it was the most mind blowing stuff she ever saw as a research psychologist, since normally you just get these isolated little theories about specific things.)
And, yeah, important to view it as a programming language which can model a bunch of stuff, but requires fairly extensive user input to do so. One way I’ve seen this framed is that ACT-R lacks domain knowledge (since it is not in fact an adult human), so you can think of the programming as mostly being about hypothesizing what domain knowledge people invoke to solve a task.
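That framing can be made concrete with a toy sketch of the programming style: the modeller supplies domain knowledge as declarative chunks, plus condition–action productions that fire against the goal. This is illustrative Python, not real ACT-R syntax; the chunk slots and rule here are invented for the example.

```python
# Hypothesized domain knowledge, written down as declarative chunks.
chunks = [
    {"type": "addition-fact", "a": 2, "b": 3, "sum": 5},
    {"type": "addition-fact", "a": 4, "b": 1, "sum": 5},
]

def retrieve(pattern):
    # Retrieval request: return the first chunk matching all specified slots.
    for chunk in chunks:
        if all(chunk.get(k) == v for k, v in pattern.items()):
            return chunk
    return None

def solve_addition(goal):
    # Production: IF the goal is to add a and b and there is no answer yet,
    # THEN request the matching fact from memory and copy its sum into the goal.
    if goal.get("answer") is None:
        fact = retrieve({"type": "addition-fact", "a": goal["a"], "b": goal["b"]})
        if fact is not None:
            goal["answer"] = fact["sum"]
    return goal
```

The point of the sketch is that the “programming” is mostly in deciding what chunks to posit — the architecture supplies the matching and retrieval machinery, and a goal the model lacks facts for simply fails to resolve, just as a person without that knowledge would.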
The first of your two images looks broken in my browser.
Why do they separate out the auditory world and the environment?
No particularly strong reason—the main thing is that, when building these models, you also have to build a model of the environment that the system is interacting with. And the codebase for helping people build generic environments is mostly focused on handling key-presses and mouse-movements and visually looking at screens, while there’s a separate codebase for handling auditory stimuli and responses, since that’s a pretty different sort of behaviour.