I agree that it’s broadly relevant to the partial-agency sequence, but I’m curious what particular resemblance you’re seeing here.
I would say that “if the free energy principle is a good way of looking at things” then it is the solution to at least one of the riddles I’m thinking about here. However, I haven’t so far been very convinced.
I haven’t looked into the technical details of Friston’s work very much myself. However, a very mathematically sophisticated friend has tried going through Friston papers on a couple of occasions and found them riddled with mathematical errors. This does not, of course, mean that every version of free-energy/predictive-processing ideas is wrong, but it does make me hesitant to take purported results at face value.
The relevance that I’m seeing is that of self-fulfilling prophecies.
My understanding of FEP/predictive processing is that it treats brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that two routes are available for minimizing prediction error: you can update your beliefs to fit the world, or you can change the world to fit your beliefs. That means there might not be much difference at all between belief, decision, and action. If you want to do something, you just, by some act of will, believe really hard that it should happen, and let thermodynamics run its course.
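To make the "two routes" idea concrete, here is a toy sketch of my own (not Friston's formalism): both perception and action descend the same squared-error objective, one by moving the belief and one by moving the world.

```python
# Toy illustration: prediction error can be reduced either by updating the
# belief toward the world (perception) or by moving the world toward the
# belief (action). Both steps decrease the same objective.

def prediction_error(belief: float, world: float) -> float:
    return (belief - world) ** 2

belief, world = 0.0, 1.0
e0 = prediction_error(belief, world)

# Route 1: perception -- nudge the belief toward the world.
belief_updated = belief + 0.5 * (world - belief)
assert prediction_error(belief_updated, world) < e0

# Route 2: action -- nudge the world toward the belief.
world_updated = world + 0.5 * (belief - world)
assert prediction_error(belief, world_updated) < e0
```

From the inside of the machine, the two routes are symmetric; which one gets used is a question about dynamics, not about the objective.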
More simply put, changing your mind changes the state of the world by changing your brain, so it really is a kind of action. In the case of the Predict-O-Matic, its predictions literally influence the world, since people follow its prophecies, and yet it still has to make accurate predictions; so in order to have accurate beliefs it actually has to choose among many possible prediction-outcome fixed points.
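A minimal sketch of the fixed-point picture, using an invented response function (nothing here is from the Predict-O-Matic story itself): if the world's reaction to a prophecy p is f(p), then a prophecy is accurate exactly when f(p) == p, and accuracy alone doesn't pick which fixed point to prophesy.

```python
# Hypothetical self-fulfilling-prophecy dynamics with multiple fixed points.
# A prediction p is "accurate" iff the induced outcome equals it: f(p) == p.

def world_response(p: float) -> float:
    """Outcome induced by announcing prediction p (made-up dynamics)."""
    # Constructed so that 0.2, 0.5, and 0.8 are all fixed points.
    return p + 0.5 * (p - 0.2) * (p - 0.8) * (0.5 - p)

def is_accurate(p: float, tol: float = 1e-9) -> bool:
    return abs(world_response(p) - p) < tol

# All three prophecies come true once announced; the predictor's only real
# freedom is which self-fulfilling prophecy to commit to.
accurate_prophecies = [p for p in (0.2, 0.5, 0.8) if is_accurate(p)]
print(accurate_prophecies)  # [0.2, 0.5, 0.8]
```

The point of the toy: a pure accuracy criterion is satisfied at every fixed point, so selecting among them is an extra degree of freedom that looks a lot like choice.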
Now, FEP says that, for living systems, all choices are like this. The only choice we have is which fixed point to believe in.
I find the basic ideas of FEP pretty compelling, especially because there are lots of similar theories in other fields (e.g. good regulators in cybernetics, internal models in control systems, and in my opinion Löb’s theorem as a degenerate case). I haven’t looked into the formalism yet. I would definitely not be surprised to see errors in the math, given that it’s very applied math-flavored and yet very theoretical.