It seems like people are talking past each other in these comments, and I think the reason is that Gordon and other people who like predictive processing theory are misusing the word “prediction”.
By misuse I mean clearly deviating from common use. I don’t really care about sticking to common use, but if you deviate from the expected meaning of a word, it is good to let people know.
Let’s say I have a model of the future in my head. If I try to adjust the model to fit reality, this model is a prediction. If I try to fit reality to my model, it is an intention.
If you have a control system that tries to minimise “prediction error” with respect to a “prediction” that it is not able to change, so that the system resorts to changing reality instead, then that is not really a prediction anymore.
As I understand it, predictive processing theory suggests that both updating predictions and executing intentions are optimising for the same thing: aligning reality with my internal model. However, there is an important difference in which terms are variables and which are constants when solving that optimisation problem. Gordon mentions in some places that sometimes “predictions” can’t be updated.
This means that a control system won’t always be globally minimising prediction error; it may only be minimising it locally, and it may never become less wrong over time, because it can’t change the prediction to better fit the input.
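To make the variables-versus-constants point concrete, here is a minimal sketch (my own illustration; all the names are made up, not from any predictive processing codebase). Both loops descend the same error between model and world, but they differ in which side is held constant:

```python
# Both modes minimise the same error |model - world|, but they differ
# in which side of the comparison is allowed to change.

def prediction_step(model: float, world: float, lr: float = 0.5) -> float:
    """Prediction: the model is the free variable; update it toward reality."""
    error = world - model
    return model + lr * error  # the model moves to fit the world


def intention_step(model: float, world: float, gain: float = 0.5) -> float:
    """Intention: the model (set point) is fixed; act on the world instead."""
    error = model - world
    return world + gain * error  # the world moves to fit the model


model, world = 20.0, 5.0

# Mode 1: treat the model as a prediction -> it converges to the world.
m = model
for _ in range(10):
    m = prediction_step(m, world)
print(f"prediction mode: model -> {m:.2f} (world stayed at {world})")

# Mode 2: treat the model as an intention -> the world converges to it.
w = world
for _ in range(10):
    w = intention_step(model, w)
print(f"intention mode: world -> {w:.2f} (model stayed at {model})")
```

In the second mode the “prediction” never updates; all the error reduction has to come from changing the world, which is exactly why calling it a prediction feels like a misuse.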
There are probably some actual disagreements here (in this comment section) too, but we will not figure that out if we don’t agree on what words mean first.
Yeah, this has been a really good comment section for figuring out how my internal models are not as easily conveyed to others as I had hoped. I’ll likely write a follow-up post trying to explain this idea again with revised language to make the point clearer, leaning more on specifics from existing research on these models, since there seem to be some inferential gaps that I had forgotten about. What feels like the exciting new part to me (prediction error signal = valence = ground of value) is maybe the least interesting and least important aspect for others to evaluate, since they lack my beliefs about how what I’m gesturing at with “predictive coding” and “minimization of prediction error” works.
Do you agree with my clarification? Because what you are trying to say makes a lot of sense to me, if and only if I replace “prediction” with “set point value” in the cases where the so-called prediction is fixed.
Set point (control system vocabulary) = Intention/goal (agent vocabulary)
I’m happy to talk just in terms of set points if that helps avoid confusion. Things like predictions, goals, intentions, learning, etc. seem to me like ways of talking about control systems with set points and set-point update mechanisms that function in the particular ways we identify with those words. My original use of “prediction” seems to be confusing to many, so I guess I should just stick to “set point” or make clearer what “prediction” means here, since I assume (although I can’t remember) that I picked up using “prediction” to mean “set point” from the relevant literature.
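If it helps, here is how I picture the “set points plus update mechanisms” framing as a sketch (again my own illustration with made-up names, not anything from the literature): a control system is just a set point plus an error signal, and what we call “learning” or “updating a prediction” is an extra mechanism that is allowed to move the set point itself.

```python
class Controller:
    def __init__(self, set_point: float, updatable: bool):
        self.set_point = set_point  # "prediction" / "goal", depending on vocabulary
        self.updatable = updatable  # whether a set-point update mechanism exists

    def error(self, observed: float) -> float:
        return self.set_point - observed

    def step(self, observed: float) -> float:
        """Return a corrective action proportional to the error."""
        return 0.5 * self.error(observed)

    def maybe_update(self, observed: float, lr: float = 0.2) -> None:
        """The set-point update mechanism: only fires if the set point is updatable."""
        if self.updatable:
            self.set_point += lr * (observed - self.set_point)


# An updatable set point behaves like a prediction; a fixed one like a goal.
thermostat = Controller(set_point=21.0, updatable=False)  # goal-like
forecast = Controller(set_point=21.0, updatable=True)     # prediction-like

observed = 17.0
forecast.maybe_update(observed)
thermostat.maybe_update(observed)  # no-op: the set point is fixed
print(thermostat.set_point, forecast.set_point)  # 21.0 vs 20.2
```

On this picture, “prediction”, “goal”, and “intention” are just labels for what the update mechanism is and isn’t allowed to touch, which is why the vocabulary matters so much in this thread.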