Yeah, this has been a really good comment section for figuring out how my internal models are not as easily conveyed to others as I had hoped. I’ll likely write a follow-up post that explains this idea again with revised language to make the point clearer, and that leans more on specifics from the existing research on these models. There seem to be some inferential gaps I had forgotten about, such that the part that feels exciting and new to me (prediction error signal = valence = ground of value) may be the least interesting and least important part to evaluate for readers who don’t share my beliefs about how the things I’m gesturing at with “predictive coding” and “minimization of prediction error” actually work.
Because what you are trying to say makes a lot of sense to me, if and only if I replace “prediction” with “set point value” in the cases where the so-called prediction is fixed.
Set point (control system vocabulary) = Intention/goal (agent vocabulary)

Do you agree with my clarification?
From my understanding, I’m happy to talk just in terms of set points if that helps avoid confusion. Things like predictions, goals, intentions, learning, etc. seem to me like ways of talking about control systems with set points and set point update mechanisms that function in the particular ways we identify with those things. My original use of “prediction” seems to be confusing to many, so I should probably either stick to “set point” or make clearer what “prediction” means here, since I assume (although I can’t remember) that I picked up using “prediction” to mean “set point” from the relevant literature.
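To make concrete how I’m using these words, here’s a minimal sketch of a control system with a fixed set point (hypothetical Python; the names `run_control_loop`, `observe_temperature`, and `adjust_heating` are made up for illustration, not taken from any of the literature I mentioned). In this framing, the “prediction” is just the set point, and the “prediction error” is the gap between the set point and what is actually observed, which the system acts to shrink.

```python
# Minimal sketch: a control system with a fixed set point.
# The "prediction" is the set point; the "prediction error" is
# (set point - observation); the system acts to drive it toward zero.

def run_control_loop(set_point, observe, act, gain=0.1, steps=100):
    """Simple proportional controller around a fixed set point."""
    for _ in range(steps):
        observation = observe()          # current sensed value
        error = set_point - observation  # "prediction error" relative to the set point
        act(gain * error)                # push the world toward the set point
    return error


# Toy usage: a thermostat whose set point plays the role of the
# fixed prediction / goal / intention, depending on vocabulary.
temperature = 15.0

def observe_temperature():
    return temperature

def adjust_heating(amount):
    global temperature
    temperature += amount  # heating/cooling proportional to the error signal

final_error = run_control_loop(set_point=20.0,
                               observe=observe_temperature,
                               act=adjust_heating)
print(f"remaining error: {final_error:.3f}")
```

In a system like this, nothing hangs on whether you call the 20-degree target a set point, a goal, an intention, or a fixed prediction; the error signal and the behavior it drives are the same either way.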