The best I could do would be to go by priors and say that the Active Inference people write a lot of incomprehensible stuff while Steven Byrnes writes a lot of super easy to understand stuff, and being able to explain a phenomenon in a way that is easy to understand usually seems like a good proxy for understanding the phenomenon well, so Steven Byrnes seems more reliable.
Steven’s original post (to which my post is a reply) may be easy to understand, but don’t you find that it is a rather low-quality post, and that its rating reflects more of an “ah yes, I also think the Free Energy Principle is bullshit, so I’ll upvote” reaction?
Even setting aside that he largely misunderstood the FEP before/while writing his post, for which he is somewhat “justified” because the FEP literature itself is confusing (and FEP theory itself has progressed significantly in the last year alone, as I noted), many of his positions are just stated opinions, without any explanation or argumentation. Also, criticising the FEP from the perspective of philosophy of science and philosophy of mind (realism/instrumentalism and enactivism/representationalism) requires at least some familiarity with what FEP theorists and philosophers themselves write on these subjects, a familiarity he clearly didn’t demonstrate in the post.
My prior about “philosophical” writing on AI safety on LW (which is the majority of AI safety writing on LW, apart from the rarer breed of more purely “technical” posts such as SolidGoldMagikarp) is that I pay attention to writing that 1) cites sources (and has sources to begin with), and 2) demonstrates acquaintance with some branches of analytic philosophy.
Steven’s original post (to which my post is a reply) may be easy to understand, but don’t you find that it is a rather low-quality post, and that its rating reflects more of an “ah yes, I also think the Free Energy Principle is bullshit, so I’ll upvote” reaction?
Looking over the points from his post, I still find myself nodding in agreement with them.
Even setting aside that he largely misunderstood the FEP before/while writing his post, for which he is somewhat “justified” because the FEP literature itself is confusing (and FEP theory itself has progressed significantly in the last year alone, as I noted), many of his positions are just stated opinions, without any explanation or argumentation.
I encountered many of the same problems while trying to understand FEP, so I don’t feel like I need explanation or argumentation. The post seems great for establishing common knowledge of the problems with FEP, even if it relies on the fact that there was already lots of individual knowledge about them.
If you are dissatisfied with these opinions, blame FEPers for generating a literature that makes people converge on them, not the readers of FEP stuff for forming them.
Also, criticising the FEP from the perspective of philosophy of science and philosophy of mind (realism/instrumentalism and enactivism/representationalism) requires at least some familiarity with what FEP theorists and philosophers themselves write on these subjects, a familiarity he clearly didn’t demonstrate in the post.
I see zero mentions of realism/instrumentalism/enactivism/representationalism in the OP.
My prior about “philosophical” writing on AI safety on LW (which is the majority of AI safety writing on LW, apart from the rarer breed of more purely “technical” posts such as SolidGoldMagikarp) is that I pay attention to writing that 1) cites sources (and has sources to begin with), and 2) demonstrates acquaintance with some branches of analytic philosophy.
That’s not my prior. I can’t think of a single time when I’ve paid attention to philosophical sources cited on LW. Usually the post presents the philosophical ideas and arguments in question directly rather than relying on sources.
I see zero mentions of realism/instrumentalism/enactivism/representationalism in the OP.
Steven’s “explicit and implicit predictions” are (probably, since Steven hasn’t confirmed this) representationalism and enactivism in philosophy of mind. If he (or his readers) are not even familiar with this terminology, and therefore not familiar with the megatonnes of literature already written on the subject, isn’t it likely that whatever they say on that very same subject won’t be high-quality or original philosophical thought? What would make you think otherwise?
Same with realism/instrumentalism: not using these words, and not realising that FEP theorists themselves (and their academic critics) have discussed the FEP from the philosophy of science perspective, doesn’t provide a good prior that new, original writing on the topic will be a fresh, quality contribution to the discourse.
I am okay with a few wrong ideas about FEP leaking into the LessWrong memespace as a side effect of making the fundamental fact about FEP (that it is bad) common knowledge. Ideally there would be maximum accuracy, but there are tradeoffs in time and such. FEPers can correct the wrong ideas if they become a problem.