I find speaking in terms of minimization of prediction error useful to my own intuitions, but it increasingly looks like what I’m really thinking of are just generic homeostatic control systems. I like talking in terms of prediction error because I think it makes the translation to other similar theories easier (I’m thinking of other Bayesian brain theories and Friston’s free energy theory), but I think it’s fair to say I’m really just thinking about control systems sending signals to hit a set point, even if some of those control systems learn in a way that looks like Bayesian updating or prediction error minimization and others don’t.
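To make that concrete, here’s a toy sketch of what I have in mind (the class, the numbers, and the “learning” rule are all just my own illustrative stand-ins, not anyone’s actual model): the “prediction error” is just the distance from a set point, and the controller acts to shrink it whether or not it also updates.

```python
# Toy homeostatic controller: "prediction error" is just distance from a set point.
# Some controllers also drift their set point toward evidence (a crude stand-in for
# Bayesian-ish updating); others are fixed. Both act to shrink the error either way.

class Controller:
    def __init__(self, set_point, gain=0.1, learns=False, learning_rate=0.05):
        self.set_point = set_point          # the "prediction" the system tries to make true
        self.gain = gain                    # how strongly it corrects
        self.learns = learns                # some controllers update, others are hard-wired
        self.learning_rate = learning_rate

    def step(self, observation):
        error = self.set_point - observation    # prediction error = distance from set point
        action = self.gain * error               # control signal pushes toward the set point
        if self.learns:
            # crude "learning": move the set point toward what was observed
            self.set_point += self.learning_rate * (observation - self.set_point)
        return action

# A thermostat-like controller that never learns, and one that does:
fixed = Controller(set_point=37.0, learns=False)
adaptive = Controller(set_point=37.0, learns=True)
```

The point of the sketch is just that the same error signal does double duty: you can read it as a prediction error or as a distance from a set point, and nothing about the control loop changes.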
The sense in which I think of this theory as parsimonious is that I don’t believe there is a simpler mechanism that can explain what we see. If we could talk about these phenomena in terms of control systems without using signals about distance from set points I’d prefer that, and I think the complexity we get from having to build things out of such simple components is the better trade, parsimony-wise, than postulating additional mechanisms. As long as I can explain things adequately without having to introduce more moving parts, I’ll consider it maximally parsimonious as far as my current knowledge and needs go.
I’m still interested if you can say more about how you view it as minimizing a warped prediction. I mentioned that if you fix some parts of the network, they seem to end up getting ignored rather than producing goal-directed behaviour. Do you have an alternative picture in which this doesn’t happen? (I’m not asking you to justify yourself rigorously; I’m curious for whatever thoughts or vague images you have here, though of course all the better if it really works.)
Ah, I guess I don’t expect it to end up ignoring the parts of the network that can’t learn, because I don’t think error minimization, learning, or anything else is a top-level goal of the network. That is, there are only low-level control systems interacting, and some parts of the network avoid being ignored by being more powerful in various ways, probably by being positioned in the network so that they have more influence on behavior than the parts that perform Bayesian learning. This does mean I expect those parts don’t learn, or learn inefficiently, but they do that because it’s adaptive.
For example, I would guess that something in humans like the neocortex is capable of Bayesian learning, but it only influences the rest of the system through narrow channels that prevent it from “taking over” and making humans true prediction error minimizers, instead forcing them to do things that satisfy other set points. In buzzwords, you might say human minds are “complex, adaptive, emergent systems” built out of neurons, with most of the function coming bottom-up from the neurons, or “from the middle”, if you will, in terms of network topology.
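If it helps, here’s roughly that picture in toy code (everything here, the function names, the weights, the update rule, is an illustrative assumption on my part, not a claim about actual neural wiring): behavior is a weighted combination of low-level controllers, and the learning part only gets a narrow channel, so it can’t take over no matter how well it learns.

```python
# Toy picture: many low-level controllers sum into behavior, and the
# Bayesian-learning-ish part only influences it through a small weight.

def fixed_controller(observation, set_point=37.0, gain=0.1):
    # non-learning homeostatic loop: push toward a hard-wired set point
    return gain * (set_point - observation)

def learning_controller(observation, belief, gain=0.1, lr=0.05):
    # crude stand-in for a Bayesian learner: update the belief toward the evidence,
    # then act on the remaining discrepancy
    belief = belief + lr * (observation - belief)
    return gain * (belief - observation), belief

def behavior(observation, belief):
    narrow_channel = 0.1   # the learner's limited influence on overall behavior
    a_fixed = fixed_controller(observation)
    a_learned, belief = learning_controller(observation, belief)
    return a_fixed + narrow_channel * a_learned, belief

# e.g. one step: the fixed loop dominates what actually gets done
action, belief = behavior(observation=36.0, belief=37.0)
```

However good the learning part gets at predicting, its contribution to behavior is bounded by that narrow channel, which is the sense in which I don’t expect it to turn the whole system into a true prediction error minimizer.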