It’s cool that you’re treating Active Inference as a specific model that might or might not apply to particular situations, organisms, brain regions, etc. In fact, that arguably puts you outside the group of people / papers that this blog post is even criticizing in the first place—see Section 0.
A thing that puzzles me, though, is your negative reactions to Sections 3 & 4. From this thread, it seems to me that your reaction to Section 3 should have been:
“If you have an actual mechanical thermostat connected to an actual heater, and that’s literally the whole system, then obviously this is a feedback control system. So anyone who uses Active Inference language to talk about this system, like by saying that it’s ‘predicting’ that the room temperature will stay constant, is off their rocker! And… EITHER …that position is a straw-man, nobody actually says things like that! OR …people do say that, and I join you in criticizing them!”
And similarly for Section 4, for a system that is actually, mechanistically, straightforwardly based on an RL algorithm.
But that wasn’t your reaction, right? Why not? Was it just because you misunderstood my post? Or what’s going on?
I thought your post was an explanation of why you don’t find Active Inference a useful theory/model, rather than a criticism of people. It does, admittedly, criticise the authors of the FEP papers for various reasons, but who cares? I care about whether the model is useful, not about whether the people who proposed the theory were clear in their earlier writing (as long as one can arrive at the actual understanding of the theory). I didn’t see this as a central argument.
So, my original reaction to 3 (the root comment in this thread) was about the usefulness of the theory (vs control theory), not about people.
Re: 4, I already replied that I misunderstood your “mechanistic lizard” assumption. So only the first part of my original reply to 4 still stands: the part about ontology and conceptualisation, and also about interpretability, communication, and hierarchical composability, which I didn’t mention originally but which are discussed at length in “Designing Ecosystems of Intelligence from First Principles” (Friston et al., Dec 2022). Again, these are arguments about the usefulness of the model, not criticisms of people.
Sorry, I’ll rephrase. I expect you to agree with the following; do you?
“If you have an actual mechanical thermostat connected to an actual heater, and that’s literally the whole system, then this particular system is a feedback control system. And the most useful way to model it and to think about it is as a feedback control system. It would be unhelpful (or maybe downright incorrect?) to call this particular system an Active Inference system, and to say that it’s ‘predicting’ that the room temperature will stay constant.”
Unhelpful—yes.
“Downright incorrect”—no, because the Active Inference model would simply be a mathematical generalisation of the (simple) feedback-control model of the thermostat. The implication “thermostat is a feedback control system” → “thermostat is an Active Inference agent” has the same logical status as the implication “A is a group” → “A is a semigroup”: the special case satisfies the axioms of the generalisation, so the implication holds by definition. It’s just a strict mathematical generalisation of the model.
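The parallel can be written out as a pair of implications (a sketch; the predicates here are informal shorthand, not standard notation):

$$\mathrm{Group}(A) \implies \mathrm{Semigroup}(A) \qquad\text{and}\qquad \mathrm{FeedbackControl}(S) \implies \mathrm{ActiveInference}(S),$$

each holding because the left-hand structure satisfies, by definition, every axiom of the right-hand one: a group is a semigroup that additionally has an identity and inverses, and, on this view, a feedback controller is an Active Inference agent whose preferred distribution is concentrated at the setpoint.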
“and to say that it’s ‘predicting’ that the room temperature will stay constant.”—no, it doesn’t predict specifically that “the temperature will stay constant”. It predicts (or “has a preference for”) a distribution over the temperature states of the room, and acts so that the actual distribution of room temperatures matches this predicted distribution.
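As a toy illustration of that last point, here is a minimal sketch (all names and numbers are my own assumptions, not from either post): a thermostat that encodes a preferred Gaussian distribution over room temperature and turns the heater on exactly when doing so lowers its surprise under that distribution. For a symmetric preference like this, the behaviour collapses to ordinary bang-bang feedback control.

```python
# Hypothetical sketch: a thermostat described in Active Inference terms.
# The "prediction" is a preferred Gaussian over room temperature,
# not a claim that the temperature will stay constant.
PREFERRED_MEAN = 20.0  # assumed setpoint, degrees C
PREFERRED_STD = 1.0    # width of the preferred distribution

def surprise(temp: float) -> float:
    """Negative log-density (up to an additive constant) of temp under
    the preferred Gaussian -- the quantity the agent acts to keep low."""
    return 0.5 * ((temp - PREFERRED_MEAN) / PREFERRED_STD) ** 2

def act(temp: float) -> str:
    """Turn the heater on iff one heating step reduces surprise.
    With a symmetric preference this is just bang-bang feedback control."""
    heated = temp + 0.5  # assumed effect of one heating step
    return "heat" if surprise(heated) < surprise(temp) else "idle"
```

So `act(18.0)` heats and `act(22.0)` idles, exactly as a plain feedback controller with the same setpoint would: the two descriptions pick out the same behaviour, which is the “generalisation, not contradiction” point.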