I don’t get it, what does FEP/Active Inference give you for understanding the DishBrain experiment?
It shows that “supposedly non-agentic” matter (neuronal tissue culture) suddenly begins to exhibit rather advanced agentic behaviour (playing Pong, “with the intention to” win). Explaining this without a theory of agency, that is, recovering Pong gameplay dynamics from the dynamics of synaptic gain or whatever happens in that culture, would be very hard (I suppose, but I could be wrong). Or, in doing so, you would effectively recover Active Inference theory anyway: see chapter 5 of the “Active Inference” book (Parr, Pezzulo, and Friston, 2022). Similar work for canonical neural networks was done by Isomura et al. (2022).
Those make things more rather than less confusing to me.
I’m writing a big article (in academic format) on this, do you expect to grok it in three sentences? If these questions were easy, people who research “agent foundations” wouldn’t keep stumbling over them again and again without much progress.
It shows that “supposedly non-agentic” matter (neuronal tissue culture) suddenly begins to exhibit rather advanced agentic behaviour (playing Pong, “with the intention to” win).
How is neuronal tissue culture “supposedly non-agentic matter”? Isn’t it the basic building block of human agency?
I’m writing a big article (in academic format) on this, do you expect to grok it in three sentences?
I mean, you’re claiming this makes it less confusing, whereas it feels like it makes it more confusing.
If these questions were easy, people who research “agent foundations” wouldn’t keep stumbling over them again and again without much progress.
Not sure which specific people’s specific stumblings you have in mind.
How is neuronal tissue culture “supposedly non-agentic matter”? Isn’t it the basic building block of human agency?
Ok, it doesn’t matter whether it is supposedly non-agentic or not; this is sophistry. Let’s return to the origin of this question: Byrnes said that everything that Active Inference predicts is easier to predict with more specific neurobiological theories, taking more specific neurobiological evidence. The thing is, these theories, as far as I know, deal with high-level structural and empirical features of the brain, such as pathways, brain regions, and specific chains of neurochemical activation. Brute-force connectome simulation of every neuron doesn’t work, even for C. elegans with its 302 neurons. (I don’t know which paper shows this, but Michael Levin frequently mentions this negative result in his appearances on YouTube.) In the case of DishBrain, you either need to accomplish what was not done for C. elegans, or use the Active Inference theory.
I mean, you’re claiming this makes it less confusing, whereas it feels like it makes it more confusing.
Ok, I’m now genuinely interested.
Under Active Inference, a goal (or a preference; these are essentially synonyms) is simply a (multi-factor) probability distribution over observations (or elements of the world model) in the future. (In Active Inference, a “belief” is a purely technical term meaning “multi-factor probability distribution”.) Active Inference then “takes” this belief (i.e., probability distribution) and plans and executes actions so as to minimise a certain functional (called “expected free energy”, but the name doesn’t matter) that takes this belief about the future as a parameter. That is the entire Active Inference algorithm.
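To make this concrete, here is a minimal toy sketch of that loop in code. This is my own illustration, not something from the post or the “Active Inference” book: the matrices `A` and `B`, the preference vector `C`, and all the names are made up for the example.

```python
import numpy as np

# Toy, single-step illustration (made-up numbers, hypothetical names):
# the "goal"/"preference" is just a probability distribution C over future
# observations, and each candidate action is scored by an expected-free-energy-
# like functional that takes C as a parameter.

A = np.array([[0.9, 0.1],            # P(observation | hidden state): likelihood model
              [0.1, 0.9]])
B = {                                # P(next state | current state, action): transition model
    "left":  np.array([[0.8, 0.2],
                       [0.2, 0.8]]),
    "right": np.array([[0.2, 0.8],
                       [0.8, 0.2]]),
}
C = np.array([0.95, 0.05])           # preferred distribution over observations = the "goal"
q_s = np.array([0.5, 0.5])           # current belief over hidden states

def expected_free_energy(action):
    q_s_next = B[action] @ q_s       # predicted belief over states after the action
    q_o = A @ q_s_next               # predicted distribution over observations
    risk = np.sum(q_o * (np.log(q_o) - np.log(C)))                 # KL[q_o || C]
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A), axis=0))  # expected observation entropy
    return risk + ambiguity

G = np.array([expected_free_energy(a) for a in B])
action_probs = np.exp(-G) / np.exp(-G).sum()   # softmax over negative expected free energy
print(dict(zip(B, np.round(action_probs, 3))))
```

The point is only that the “goal” C enters as a parameter of the functional being minimised; nothing else about the agent has to change when the goal changes.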
How could this be more confusing? Is the above explanation still confusing? If so, what part of it?
(Compare this with RL, where people sort of still don’t know what a “goal” or a “preference” is, in the general sense, because the reward is not that.)
Not sure which specific people’s specific stumblings you have in mind.
At least half of all posts under the “agency” tag on LW. I don’t see a point in calling out specific writings.
Ok, it doesn’t matter whether it is supposedly non-agentic or not; this is sophistry. Let’s return to the origin of this question: Byrnes said that everything that Active Inference predicts is easier to predict with more specific neurobiological theories, taking more specific neurobiological evidence. The thing is, these theories, as far as I know, deal with high-level structural and empirical features of the brain, such as pathways, brain regions, and specific chains of neurochemical activation. Brute-force connectome simulation of every neuron doesn’t work, even for C. elegans with its 302 neurons. (I don’t know which paper shows this, but Michael Levin frequently mentions this negative result in his appearances on YouTube.) In the case of DishBrain, you either need to accomplish what was not done for C. elegans, or use the Active Inference theory.
I’m not trying to engage in sophistry, it’s just that I don’t know much neuroscience and also find the paper in question hard to understand.
Like ok, so the headline claim is that it learns to play Pong. They show a representative example in video S2, but video S2 doesn’t seem to show something that has clearly learned Pong to me. At first it is really bad, but maybe that is because it hasn’t learned yet; however in the later part of the video it also doesn’t look great, and it seems like the performance might be mostly due to luck (the ball ends up hitting the edge of the paddle).
They present some statistics in figure 5. If I understand correctly, “rally length” refers to the number of times it hits the ball; it seems to be approximately 1 on average, which seems to me to indicate very little learning, because the paddle seems to take up almost half the screen and so I feel like we should expect an average score close to 1 if it had no learning.
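For what it’s worth, here is the back-of-the-envelope version of that expectation. This is my own toy model, not anything from the paper: I’m assuming each return is an independent hit with probability equal to the fraction of the play area the paddle covers, so rally length is geometric.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                                    # paddle covering roughly half the play area (assumption)
analytic_mean = p / (1 - p)                # geometric distribution: expected hits before a miss

hits = rng.random((100_000, 1_000)) < p    # simulate many rallies, up to 1000 returns each
rally_lengths = (~hits).argmax(axis=1)     # index of the first miss = number of hits before it
print(analytic_mean, rally_lengths.mean()) # both ~1.0, i.e. chance level is already ~1 hit per rally
```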
As proof of the learning, they point out that the red bars (representing the 6-20 minute interval) are higher than the green bars (representing the 0-5 minute interval). Which I guess is true but it looks to me like they are only very slightly better.
If I take this at face value, I would conclude that their experiment was a failure and that the paper is statistical spin. But since you’re citing it, I assume you checked it and found it to be legit, so I assume I’m misunderstanding something about the paper.
But supposing the result in their paper is legit, I still don’t know enough neuroscience to understand what’s going on. Like you are saying that it is naturally explained by Active Inference, and that there’s no other easy-to-understand explanation. And you might totally be right about that! But if Steven Byrnes comes up and says that actually the neurons they used had some property that implements reinforcement learning, I would have no way of deciding which of you are right, because I don’t understand the mechanisms involved well enough.
The best I could do would be to go by priors and say that the Active Inference people write a lot of incomprehensible stuff while Steven Byrnes writes a lot of super easy to understand stuff, and being able to explain a phenomenon in a way that is easy to understand usually seems like a good proxy for understanding the phenomenon well, so Steven Byrnes seems more reliable.
Ok, I’m now genuinely interested.
Under Active Inference, a goal (or a preference; these are essentially synonyms) is simply a (multi-factor) probability distribution over observations (or elements of the world model) in the future. (In Active Inference, a “belief” is a purely technical term meaning “multi-factor probability distribution”.) Active Inference then “takes” this belief (i.e., probability distribution) and plans and executes actions so as to minimise a certain functional (called “expected free energy”, but the name doesn’t matter) that takes this belief about the future as a parameter. That is the entire Active Inference algorithm.
How could this be more confusing? Is the above explanation still confusing? If so, what part of it?
Why not take the goal to be a utility function instead of the free energy of a probability distribution? Unless there’s something that probability distributions specifically give you, this just seems like mathematical reshuffling of terms that makes things more confusing. (Is it because probability distributions make things covariant rather than contravariant?)
At least half of all posts under the “agency” tag on LW. I don’t see a point in calling out specific writings.
I sometimes see people who I would consider confused about goals and beliefs, and am wondering whether my sense of people being confused agrees with your sense of people being confused.
I suspect it doesn’t because I would expect active inference to make the people I have in mind more confused. As such, it seems like you have a different cluster of confusions in mind than I do, and I find it odd that I haven’t noticed it.
The best I could do would be to go by priors and say that the Active Inference people write a lot of incomprehensible stuff while Steven Byrnes writes a lot of super easy to understand stuff, and being able to explain a phenomenon in a way that is easy to understand usually seems like a good proxy for understanding the phenomenon well, so Steven Byrnes seems more reliable.
Steven’s original post (to which my post is a reply) may be easy to understand, but don’t you find that it is a rather low-quality post, and that its rating reflects more of an “ah yes, I also think the Free Energy Principle is bullshit, so I’ll upvote” reaction?
Even setting aside that he largely misunderstood FEP before/while writing his post, for which he is sort of “excused” because the FEP literature itself is confusing (and the FEP theory itself has progressed significantly in the last year alone, as I noted), many of his positions are just stated opinions, without any explanation or argumentation. Also, criticising FEP from the perspective of philosophy of science and philosophy of mind (realism/instrumentalism and enactivism/representationalism) requires at least some familiarity with what FEP theorists and philosophers themselves write on these subjects, which he clearly didn’t demonstrate in the post.
My priors about “philosophical” writing on AI safety on LW (which is the majority of AI safety LW, except for the rarer breed of more purely “technical” posts such as SolidGoldMagikarp) are that I pay attention to writing that 1) cites sources (and has such sources to begin with) and 2) demonstrates acquaintance with some branches of analytic philosophy.
Steven’s original post (to which my post is a reply) may be easy to understand, but don’t you find that it is a rather low-quality post, and that its rating reflects more of an “ah yes, I also think the Free Energy Principle is bullshit, so I’ll upvote” reaction?
Looking over the points from his post, I still find myself nodding in agreement about them.
Even setting aside that he largely misunderstood FEP before/while writing his post, for which he is sort of “excused” because the FEP literature itself is confusing (and the FEP theory itself has progressed significantly in the last year alone, as I noted), many of his positions are just stated opinions, without any explanation or argumentation.
I encountered many of the same problems while trying to understand FEP, so I don’t feel like I need explanation or argumentation. The post seems great for establishing common knowledge of the problems with FEP, even if it relies on the fact that there was already lots of individual knowledge about them.
If you are dissatisfied with these opinions, blame FEPers for generating a literature that makes people converge on them, not the readers of FEP stuff for forming them.
Also, criticising FEP from the perspective of philosophy of science and philosophy of mind (realism/instrumentalism and enactivism/representationalism) requires at least some familiarity with what FEP theorists and philosophers themselves write on these subjects, which he clearly didn’t demonstrate in the post.
I see zero mentions of realism/instrumentalism/enactivism/representationalism in the OP.
My priors about “philosophical” writing on AI safety on LW (which is the majority of AI safety LW, except for the rarer breed of more purely “technical” posts such as SolidGoldMagikarp) are that I pay attention to writing that 1) cites sources (and has such sources to begin with) and 2) demonstrates acquaintance with some branches of analytic philosophy.
Those aren’t my priors. I can’t think of a single time where I’ve paid attention to philosophical sources that have been cited on LW. Usually the post presents the philosophical ideas and arguments in question directly rather than relying on sources.
I see zero mentions of realism/instrumentalism/enactivism/representationalism in the OP.
Steven’s “explicit and implicit predictions” are (probably, because Steven hasn’t confirmed this) representationalism and enactivism in philosophy of mind. If he (or his readers) are not even familiar with this terminology, and therefore not familiar with the megatonnes of literature already written on the subject, is what they say on that very same subject likely to be high-quality or original philosophical thought? What would make you think otherwise?
Same with realism/instrumentalism: not using these words, and not realising that FEP theorists themselves (and their academic critics) have discussed FEP from the philosophy-of-science perspective, doesn’t provide a good prior that new, original writing on this will be a fresh, quality contribution to the discourse.
I am okay with a few wrong ideas about FEP leaking out into the LessWrong memespace as a side effect of making the fundamental fact about FEP (that it is bad) common knowledge. Like ideally there would be maximum accuracy, but there are tradeoffs in time and such. FEPers can correct the wrong ideas if they become a problem.
TBH, I didn’t engage much with the paper (I only skimmed it beyond the abstract); I just deferred to the results. Here’s the extended academic correspondence on this paper between groups of scientists and the authors, if you are interested. I haven’t read this correspondence either, but it doesn’t seem that the critics claim the work was really a statistical failure.
But supposing the result in their paper is legit, I still don’t know enough neuroscience to understand what’s going on. Like you are saying that it is naturally explained by Active Inference, and that there’s no other easy-to-understand explanation. And you might totally be right about that! But if Steven Byrnes comes up and says that actually the neurons they used had some property that implements reinforcement learning, I would have no way of deciding which of you are right, because I don’t understand the mechanisms involved well enough.
As I wrote in section 4, Steven could come along and say that the neurons actually perform maximum-entropy RL (which would also have to be reward-free, because nobody told the culture to play Pong), because the two are already indistinguishable at the multi-cellular scale. But note (again, tracing back to the very origin of the argument) that Steven’s argument contra Active Inference here was that there would be more “direct” modes of explanation of the behaviour than Active Inference. For a uniform neuronal culture, there are no intermediate modes of explanation between direct connectome computation and Active Inference or maximum-entropy RL (unlike full-sized brains, where there are high-level structures and mechanisms identified by neuroscientists).
Now, you may ask: why would we choose Active Inference rather than maximum-entropy, reward-free RL as a general theory of agency (at least of somewhat sizeable agents, but any agent of interest would certainly exceed the minimum needed size)? The answer is twofold:
1) Active Inference provides a more interpretable (and thus more relatable and controllable) ontology, specifically of beliefs about the future, which we discuss below. RL doesn’t.
2) Active Inference is multi-scale composable (I haven’t heard of comparable work for RL), which means that we can align with AI in a “shared intelligence” style in a principled way. More on this in “Designing Ecosystems of Intelligence from First Principles” (Friston et al., Dec 2022).
Why not take the goal to be a utility function instead of the free energy of a probability distribution? Unless there’s something that probability distributions specifically give you, this just seems like mathematical reshuffling of terms that makes things more confusing.
Goal = utility function doesn’t make sense, if you think about it. What, exactly, is your utility function applied to? It’s observational data, I suppose (because your brain, like any other physical agent, doesn’t have anything else). Then, when you note that the real goals of real agents are (almost) never (arguably, just never) “sharp”, you arrive at the same definition of a goal that I gave.
Also note that in the picture in section 4 of the post, expected utility theory appears in the last column. So a utility function (take E[log P(o)]) is recoverable from Active Inference.
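A minimal way to see this, in my own notation rather than the exact decomposition from the post’s picture: the “risk” part of expected free energy can be rewritten so that the expected log-preference plays the role of an expected utility,

$$
D_{\mathrm{KL}}\big[\,q(o \mid \pi)\;\|\;P(o)\,\big]
\;=\; -\,\mathbb{E}_{q(o \mid \pi)}\big[\log P(o)\big]\;-\;H\big[q(o \mid \pi)\big],
$$

so minimising risk maximises expected “utility” $U(o) = \log P(o)$ plus an entropy bonus; when the predictive distribution $q(o \mid \pi)$ is sharp, the entropy term is negligible and you recover plain expected-utility maximisation.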
Yeah, to me this all just sounds like standard Free Energy Principle/Active Inference obscurantism. Especially if you haven’t even read the paper that you claim is evidence for FEP.
Yeah, to me this all just sounds like standard Free Energy Principle/Active Inference obscurantism.
Again (maybe for the last time), I would kindly ask you to point out what is obscure in this simple conjecture:
Goal = utility function doesn’t make sense, if you think about it. What, exactly, is your utility function applied to? It’s observational data, I suppose (because your brain, like any other physical agent, doesn’t have anything else). Then, when you note that the real goals of real agents are (almost) never (arguably, just never) “sharp”, you arrive at the same definition of a goal that I gave.
I genuinely want to make my exposition more understandable, but I don’t see anything that wouldn’t be trivially self-evident in this passage.
Especially if you haven’t even read the paper that you claim is evidence for FEP.
DishBrain was not brought up as “evidence for Active Inference” (not FEP; FEP doesn’t need any evidence, it’s just a mathematical tool). DishBrain was brought up in reply to Steven’s very first argument: “I have yet to see any concrete algorithmic claim about the brain that was not more easily and intuitively [from my perspective] discussed without mentioning FEP” (he also should have said Active Inference here).
These are two different things. There is massive evidence for Active Inference in general, as well as for RL, in the brain and other agents. Steven’s (implied) argument was more like: “I haven’t seen a real-world agent, barring AIs explicitly engineered as Active Inference agents, for which Active Inference would be the ‘first line’ explanation in the stack of abstractions”.
This is a somewhat niche argument that doesn’t even necessarily need a direct reply because, as I wrote in the post and in other comments, there are other reasons to use Active Inference (precisely because it’s a general, abstract, high-level theory). Still, I attempted to provide such an example. Even if this example fails (at least in your eyes), that wouldn’t invalidate the rest of the arguments in the post, and it says nothing about the obscurantism of the definition of “goal” that we discuss above (so I don’t understand your use of “especially”, connecting the two sentences of your comment).
Your original post asked me to put a lot of effort into understanding a neuroscience study. This study may very well be a hoax, which you hadn’t even bothered to check despite including it in your post.
I’m not sure how much energy I feel like putting into processing your post, at least until you’ve confirmed that you’ve purged all the hoaxy stuff and the only bits remaining are good.
“Unless you can demonstrate that it’s easy” was not a request for Steven (or you, or any other reader of the post) to actually demonstrate this: regardless of whether DishBrain is a hoax or not, that would be a large research project’s worth of work. “Easiness” refers in any case to the final result (“this specific model of neuronal interaction easily explains a culture of neurons playing Pong”), not to the process of obtaining that result.
So, I thought it was clear that this phrase was a rhetorical interjection.
And, again, as I said above, Steven’s entire first argument is niche and not central (as is our lengthy discussion of my reply to it), so feel free to skip it.