The thing you are minimizing by going outside isn’t prediction error for sense data; it’s a sort of expected prediction error over a spatial extent of your model. I think both of these are valid concepts to think about, so it’s not that this argument shows prediction error is “really” about building a model of the world and then ensuring that it’s both correct and complete; it’s an argument about which is the more reasonable model of what humans are doing.
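To make the contrast concrete, here is one loose way to write the two objectives down (my own notation, just a sketch of the distinction, not a standard formalism):

```latex
% Loose sketch in made-up notation, not a standard formalism.
% o_t  : the current sense data
% m    : the agent's model, assigning probabilities p_m to observations
% X    : the spatial extent the model is supposed to cover
% o(x) : what would be observed at location x

% (1) Prediction error for sense data: surprisal of the current observation.
\mathcal{L}_{\mathrm{sense}}(m) = -\log p_m(o_t)

% (2) Expected prediction error over a spatial extent: average surprisal
%     of observations across the whole region the model covers.
\mathcal{L}_{\mathrm{extent}}(m) = \mathbb{E}_{x \sim X}\left[ -\log p_m\big(o(x)\big) \right]
```

Going outside can transiently raise (1) while lowering (2): the new observations may be surprising in the moment, but they improve the model across X.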
Of course, once you have two possibilities, that usually means you have infinitely many. I can see how this could lead people to generate a whole family of formalisms, but I still feel like that route leads to oversimplification.
For example, sometimes people are happy to just fool their sense data: we take anesthetics, look at pornography, or drink diet soda. But sometimes people aren’t: the pictures-of-relationships industry is much smaller than the porn industry, and people buy free-range beef, or a genuine Rembrandt rather than a convincing fake.
Oh, I wasn’t really trying to talk about what prediction-error minimization “really does” there; I was more pointing out that what it does changes radically depending on your modeling assumptions.
The “distal causes” bit is also something I really want to find the time and expertise to formalize. There are studies of how causal judgements ground the moral responsibility of agents, and I’d really like to see whether we can use the notion of distal causation to generalize from there to how people learn causal models that capture action-affordances.
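One naive starting point, purely illustrative and not drawn from those studies: in a causal DAG, call an outcome’s direct parents its proximal causes and its remaining ancestors its distal causes.

```latex
% Illustrative gloss only: proximal vs. distal causes in a causal DAG G.
% Pa_G(Y) : the parents of Y in G;  An_G(Y) : the ancestors of Y in G.
\mathrm{Prox}(Y) = \mathrm{Pa}_G(Y)
\qquad
\mathrm{Dist}(Y) = \mathrm{An}_G(Y) \setminus \big(\mathrm{Pa}_G(Y) \cup \{Y\}\big)
```

The interesting part, which this gloss doesn’t capture, is which distal causes people single out as the cause, since that seems to be where the moral-responsibility judgements live.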