The brain is a multi-level prediction-error minimization machine, at least according to a number of SSC posts and reviews, and that matches my intuition as well. So, ultimately, predictive power is an instrumental goal toward the terminal goal of minimizing prediction error.
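Purely as an illustration of that "minimize prediction error" loop (a toy sketch of my own, not anything taken from the SSC posts; the noisy signal, the one-number model, and the learning rate are made-up assumptions):

```python
import random

def observation_stream():
    """A stand-in source of observations: a noisy, roughly constant signal."""
    while True:
        yield 5.0 + random.gauss(0, 0.5)

def minimize_prediction_error(steps=1000, learning_rate=0.05):
    """One level of the loop: predict, observe the error, update to shrink it."""
    estimate = 0.0  # the current model: "the next observation will be `estimate`"
    stream = observation_stream()
    for _ in range(steps):
        observed = next(stream)
        error = observed - estimate          # observed prediction error
        estimate += learning_rate * error    # nudge the model to reduce future error
    return estimate

print(minimize_prediction_error())  # settles near 5.0
```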
A territory is a sometimes useful model, and the distinction between an approximate map and an as-good-as-possible map called the territory is another useful meta-model. Since there is nothing but models, there is nothing to deny or to be agnostic about.
Is the therapy example a true model of the world or a useful fiction?
You are using terms that do not correspond to anything in my ontology. I’m guessing that by “the world” you mean that territory thing, which is a sometimes useful model, but not in that setup. “A useful fiction” is another term for a good model, as far as I am concerned, as long as it gets you where you intend to be.
How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?
There is no such thing as “perception of predictive error” or actual “prediction error”. There is only observed prediction error. You are falling back on your default implicit ontology of objective reality when asking those questions.
If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?
I don’t claim what is true, what exists, or what is real. In fact, I explicitly avoid all three of these terms as devoid of meaning. That is reading too much into it. I’m simply pointing out that one can make accurate predictions of future observations without postulating anything but models of past observations.
Why do you assume that future predictions would follow from past predictions? It seems like there has to be an implicit underlying model there to make that assumption.
That’s a meta-model that has been confirmed pretty reliably: it is possible to make reasonably accurate predictions in various areas based on past observations. In fact, if this were not possible at any level, we would not be talking about it :)
Yes, that’s the (meta-)model, that accurate predictions are possible.
How can you confirm the model “past predictions predict future predictions” with the data that “in the past, past predictions have predicted future predictions”? Isn’t that circular?
The meta-observation (and the first implicit and trivially simple meta-model) is that accurate predictions are possible. Translated into the realist’s language, it would say something like “the universe is predictable, to some degree”. That is just as circular, since without predictability there would be no agents to talk about predictability.
In what way is your meta-observation of consistency different than the belief in a territory?
Once you postulate the territory behind your observations, you start using misleading and ill-defined terms like “exists”, “real” and “true”, and argue, say, which interpretation of QM is “true” or whether numbers “exist”, or whether unicorns are “real”. If you stick to models only, none of these are meaningful statements and so there is no reason to argue about them. Let’s go through these examples:
The orthodox interpretation of quantum mechanics is useful in calculating the cross sections, because it deals with the results of a measurement. The many-worlds interpretation is useful in pushing the limits of our understanding of the interface between quantum and classical, like in the Wigner’s friend setup.
Numbers are a useful mental tool in multiple situations: they make many other models more accurate.
Unicorns are real in the context of a relevant story, or as a plushie, or in a hallucination. They are a poor model of the kind of observation that lets us see, say, horses, but an excellent one if you are wandering through a toy store.
Why can’t you just believe in the territory without trying to confuse it with maps?
To me belief in the territory is the confused one :)
Because you don’t believe territory “exists” or because it’s simpler to not model it twice—once on a map, once outside?
The latter. Also, postulating an immutable territory outside all maps means asking toxic questions about what exists, what is real, and what is a fact.
What kind of claim is the one that one can make accurate predictions of future observations if not a claim of truth?
The term truth has many meanings. If you mean the first one on wikipedia,
Truth is most often used to mean being in accord with fact or reality
then it is very much possible to not use that definition at all. In fact, try to taboo the terms truth, existence, and reality, and phrase your statements without them; it might be an illuminating exercise. It certainly worked for Thomas Kuhn: he wrote one of the most influential books on the philosophy of science without ever using the concept of truth, except in reference to how others use it.
I really like this line of thinking. I don’t think it is necessarily opposed to the typical map-territory model, however.
You could in theory explain all there is to know about the territory with a single map; however, that map would become really dense and hard to decipher. Instead, having multiple maps, one with altitude, another with temperature, is instrumentally useful for best understanding the territory.
We cannot comprehend the entire territory at once, so it’s instrumentally useful to view the world through different lenses and see what new information about the world each lens allows us to see.
You could then go a step further, which I think is what you’re doing, and say that all that is meaningful to talk about are the different maps. But then I become a bit confused about how you would evaluate any map’s usefulness, because if you answered ‘whether it’s instrumentally useful or not’, I’d question how you would evaluate whether something is instrumentally useful when you can only judge it in terms of other maps.
Not in terms of other maps, but in terms of its predictive power: something is more useful if it allows you to more accurately predict future observations (a small sketch of this comparison follows the links below). The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts:
https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/
https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/
https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/
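To pin down what “more useful” means here, a tiny sketch (illustrative only, not from the linked posts; the two toy models and the synthetic future observations are assumptions I made up): judge each candidate model by how accurately it predicts observations it has not yet been checked against.

```python
import math

def mean_squared_error(model, observations):
    """Average squared prediction error of `model` over (input, observed) pairs."""
    return sum((model(x) - y) ** 2 for x, y in observations) / len(observations)

# Two candidate models, presumably built from past observations.
linear_model = lambda x: 2.0 * x + 1.0
constant_model = lambda x: 3.0

# "Future" observations that arrive later.
future = [(x, 2.0 * x + 1.0 + 0.1 * math.sin(x)) for x in range(10, 20)]

errors = {
    "linear": mean_squared_error(linear_model, future),
    "constant": mean_squared_error(constant_model, future),
}
print(errors)  # the model with the smaller error is, on this account, the more useful one
```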
The whole point of the therapy thing is that you don’t know how to describe the real world.
But there is a lot of evidence that it is a useful model, and I have a strong intuition that it is a useful thing, so it isn’t really an example that “gives you away”. (You still have to interpret the evidence to see what it is like.)