One is that it’s elegant, simple, and parsimonious.
I certainly agree here. Furthermore, I think it makes sense to try to unify prediction with other aspects of cognition, so I get that part of the motivation (although I don’t expect that humans have simple values). I just think this view makes bad predictions.
Control systems are simple, they look to me to be the simplest thing we might reasonably call “alive” or “conscious” if we try to redefine those terms in ways that are not anchored on our experience here on Earth.
No disagreement here.
and this is claimed to always contain a signal of positive, negative, or neutral judgement.
Yeah, this seems like an interesting claim. I basically agree with the phenomenological claim. This seems to me like evidence in favor of a hierarchy-of-thermostats model (with one major reservation which I’ll describe later). However, it doesn’t seem like evidence for the prediction-error-minimization perspective in particular. We can have a network of controllers which express wishes to each other separately from predictions. Yes, that’s less parsimonious, but I don’t see a way to make the pure prediction-error version work without dubious compromises.
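To make that alternative concrete, here’s a minimal toy sketch of what I mean by controllers passing wishes separately from predictions. Everything here (the `Controller` and `Message` names, the two-channel report) is my own illustration, not anyone’s actual proposal: each unit sends its parent both what it expects and, on a distinct channel, what it wants, whereas a pure prediction-error story would collapse these into one signal.

```python
from dataclasses import dataclass

@dataclass
class Message:
    prediction: float  # what this unit expects the signal to be
    wish: float        # what this unit would like the signal to be

class Controller:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the unit's own goal
        self.reading = 0.0        # latest observed value

    def observe(self, value: float) -> None:
        self.reading = value

    def report_up(self) -> Message:
        # Prediction and wish travel on separate channels. In a pure
        # prediction-error-minimization story there would be only one,
        # because the goal *is* the prediction.
        return Message(prediction=self.reading, wish=self.setpoint)

# Toy example: the child wants 20.0 but observes 15.0; the parent receives
# both signals and can treat expectation and desire differently.
child = Controller(setpoint=20.0)
child.observe(15.0)
print(child.report_up())  # Message(prediction=15.0, wish=20.0)
```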
Here’s the reservation which I promised—if we have a big pile of controllers, how would we know (based on phenomenal experience) that controllers attach positive/negative valence “locally” to every percept?
Forget controllers for a moment, and just suppose that there’s any hierarchy at all. It could be made of controller-like pieces, or neural networks learning via backprop, etc. As a proxy for conscious awareness, let’s ask: what kind of thing can we verbally report? There isn’t any direct access to things inside the hierarchy; there’s only the summary of information which gets passed up the hierarchy.
In other words: it makes sense that low-level features like edge detectors and colors get combined into increasingly high-level features until we recognize an object. However, it’s notable that our high-level cognition can also purposefully attend to low-level features such as lines. This isn’t really predicted by the basic hierarchy picture—more needs to be said about how this works.
So, similarly, we can’t predict that you or I verbally report positive/negative/neutral attaching to percepts from the claim that the sensory hierarchy is composed of units which are controllers. A controller has valence in that it has goals and how-it’s-doing on those goals, but why should we expect that humans verbally report the direct experience of that? Humans don’t have direct conscious experience of everything going on in neural circuitry.
This is not at all a problem with minimization of prediction error; it’s more a question about hierarchies of controllers.
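Here’s a minimal sketch of the worry, just to make it concrete (none of these names come from the discussion above): each unit in the hierarchy computes a local “valence”, how it’s doing relative to its own goal, but its upward summary only carries the compressed percept, not that valence. Whatever the top level can report is then silent about the per-unit valences, so reportable valence doesn’t fall out of the controllers claim by itself.

```python
class Unit:
    def __init__(self, goal: float):
        self.goal = goal
        self.local_valence = 0.0  # how this unit is doing on its goal

    def process(self, signal: float) -> float:
        # The valence exists inside the unit...
        self.local_valence = -abs(self.goal - signal)
        # ...but the summary passed upward omits it entirely.
        return round(signal, 1)

def verbal_report(hierarchy, raw_input: float) -> float:
    # Proxy for conscious access: only what survives each unit's summary.
    signal = raw_input
    for unit in hierarchy:
        signal = unit.process(signal)
    return signal

hierarchy = [Unit(goal=0.3), Unit(goal=0.8)]
print(verbal_report(hierarchy, 0.47))  # the report carries no valence info
```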
So, similarly, we can’t predict that you or I verbally report positive/negative/neutral attaching to percepts from the claim that the sensory hierarchy is composed of units which are controllers. A controller has valence in that it has goals and how-it’s-doing on those goals, but why should we expect that humans verbally report the direct experience of that? Humans don’t have direct conscious experience of everything going on in neural circuitry.
Yeah, this is a good point, and I agree it’s one of the things I’m looking for others to verify with better brain-imaging technology. I find myself working ahead of what we can fully verify now because I’m willing to take the bet that it’s right, or at least right enough that however it’s wrong, it won’t throw out the work I do.