First, “usefulness” means only one thing: predictive power, which is accuracy in predicting future inputs (observations). The territory is not a useful model in multiple situations.
In physics, especially quantum mechanics, it leads to arguments about “what is real?” as opposed to “what can we measure and what can we predict?”, which soon slide into arguments about unobservables and untestables. Are particles real? Nope, they are asymptotically flat, interaction-free approximations of QFT in curved spacetime. Are fields real? Who knows; we cannot observe them directly, only their effects. They are certainly a useful model, though.
Another example: are numbers real? Who cares, they are certainly useful. Do they exist in the mind or outside of it? Depends on your definitions, so an answer to this question says more about human cognition and human biases than about anything math- or physics-related.
Another example is in psychology: if you ever go to a therapist for, say, couples counseling, the first thing a good one will explain is that there is no single “truth”: there is “his truth” and “her truth” (fix the pronouns as desired). The goal of therapy is to figure out a mutually agreeable future, not to figure out who was right, who was wrong, what really happened, or who thought and said what exactly, and when.
If one’s goals require something beyond predictive accuracy, such as correspondence truth, why would you limit yourself to seeking predictive accuracy?
No ordinary goal requires anything beyond predictive accuracy. To achieve a goal, all you need to do is predict what sequence of actions will bring it about. (Though I note that not all predictive apparatuses are useful. A machine that did something very specific and abnormal, like looking at a photo of a tree and predicting whether there is a human tooth inside it, would not find many applications.)
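The reduction of goal-seeking to prediction can be sketched as a search over action sequences that consults only a predictive model, never any “true” dynamics. Everything below (the `plan` function, the toy integer-valued model) is my own hypothetical illustration, not something from the thread:

```python
from itertools import product

def plan(predict, start, goal, actions, max_len=4):
    """Find an action sequence that the predictive model says reaches
    the goal. Only the model's predictions are ever consulted."""
    for length in range(1, max_len + 1):
        for seq in product(actions, repeat=length):
            state = start
            for a in seq:
                state = predict(state, a)  # predicted next observation
            if state == goal:
                return list(seq)
    return None  # no predicted sequence reaches the goal within max_len

# Toy predictive model: observations are integers, actions add to them.
predict = lambda s, a: s + a
print(plan(predict, start=0, goal=3, actions=[+1, -2]))  # → [1, 1, 1]
```

The point of the sketch is that the planner’s success depends solely on the model’s predictive accuracy; no further property of the model is consulted.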
What claim about truth can’t be described as a prediction or tool for prediction?
Is predictive power an instrumental or terminal goal?
Is your view a denial of the territory or agnosticism about it?
Is the therapy example a true model of the world or a useful fiction?
The brain is a multi-level prediction-error minimization machine, at least according to a number of SSC posts and reviews, and that matches my intuition as well. So, ultimately, predictive power is an instrumental goal toward the terminal goal of minimizing prediction error.
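A cartoon of the prediction-error-minimization idea (my own toy illustration, far simpler than the predictive-processing models the SSC posts describe): an agent keeps an internal estimate and nudges it toward whatever it failed to predict, so the observed error shrinks over time:

```python
def minimize_prediction_error(observations, lr=0.2):
    """Repeated gradient step on squared prediction error: the estimate
    moves toward what it failed to predict. Returns final estimate and
    the per-step observed errors."""
    estimate, errors = 0.0, []
    for obs in observations:
        error = obs - estimate      # observed prediction error
        errors.append(abs(error))
        estimate += lr * error      # update to reduce future error
    return estimate, errors

estimate, errors = minimize_prediction_error([1.0] * 30)
# errors shrink toward zero as the estimate converges on the input
```

Note that nothing in the loop refers to a territory: the quantity being minimized is defined entirely in terms of the stream of observations.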
A territory is a sometimes-useful model, and the distinction between an approximate map and the as-good-as-possible map called “territory” is another useful meta-model. Since there is nothing but models, there is nothing to deny or to be agnostic about.
You are using terms that do not correspond to anything in my ontology. I’m guessing by “the world” you mean that territory thing, which is a sometimes useful model, but not in that setup. “A useful fiction” is another term for a good model, as far as I am concerned, as long as it gets you where you intend to be.
How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?
If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?
I don’t claim what is true, what exists, or what is real. In fact, I explicitly avoid all three of these terms as devoid of meaning. That is reading too much into it: I’m simply pointing out that one can make accurate predictions of future observations without postulating anything but models of past observations.
There is no such thing as “perception of predictive error” or actual “prediction error”. There is only observed prediction error. You are falling back on your default implicit ontology of objective reality when asking those questions.
Why do you assume that future predictions would follow from past predictions? It seems like there has to be an implicit underlying model there to make that assumption.
That’s a meta-model that has been confirmed pretty reliably: it is possible to make reasonably accurate predictions in various areas based on past observations. In fact, if this were not possible at any level, we would not be talking about it :)
Yes, that’s the (meta-)model: that accurate predictions are possible.
How can you confirm the model “past predictions predict future predictions” with the data “in the past, past predictions have predicted future predictions”? Isn’t that circular?
The meta-observation (and the first implicit and trivially simple meta-model) is that accurate predictions are possible. Translated into realist-speak, it would say something like “the universe is predictable, to some degree.” Which is just as circular, since without predictability there would be no agents around to talk about predictability.
In what way is your meta-observation of consistency different than the belief in a territory?
Once you postulate the territory behind your observations, you start using misleading and ill-defined terms like “exists,” “real,” and “true,” and argue, say, about which interpretation of QM is “true,” whether numbers “exist,” or whether unicorns are “real.” If you stick to models only, none of these are meaningful statements, and so there is no reason to argue about them. Let’s go through these examples:
The orthodox interpretation of quantum mechanics is useful for calculating cross sections, because it deals with the results of measurement. The many-worlds interpretation is useful for pushing the limits of our understanding of the interface between quantum and classical, as in the Wigner’s friend setup.
Numbers are a useful mental tool in many situations; they make many other models more accurate.
Unicorns are real in the context of a relevant story, or as a plushie, or in a hallucination. They are a poor model of the kind of observation that lets us see, say, horses, but an excellent one if you are wandering through a toy store.
Why can’t you just believe in the territory without trying to confuse it with maps?
To me belief in the territory is the confused one :)
Because you don’t believe territory “exists” or because it’s simpler to not model it twice—once on a map, once outside?
The latter. Also, postulating an immutable territory outside all maps means asking toxic questions about what exists, what is real, and what is a fact.
What kind of claim is the one that one can make accurate predictions of future observations if not a claim of truth?
The term “truth” has many meanings. If you mean the first one on Wikipedia,
“Truth is most often used to mean being in accord with fact or reality,”
then it is very much possible not to use that definition at all. In fact, try tabooing the terms truth, existence, and reality, and phrase your statements without them; it might be an illuminating exercise. It certainly worked for Thomas Kuhn: he wrote one of the most influential books on the philosophy of science without ever using the concept of truth, except in reference to how others use it.
I really like this line of thinking. I don’t think it is necessarily opposed to the typical map-territory model, however.
You could in theory explain all there is to know about the territory with a single map; however, that map would be really dense and hard to decipher. Instead, having multiple maps, one with altitude, another with temperature, is instrumentally useful for understanding the territory.
We cannot comprehend the entire territory at once, so it’s instrumentally useful to view the world through different lenses and see what new information about the world the lens allows us to see.
You could then go a step further, which I think is what you’re doing, and say that the different maps are all it is meaningful to talk about. But then I become a bit confused about how you would evaluate any map’s usefulness: if you answered “by whether it’s instrumentally useful or not,” I’d question how you would evaluate whether something is instrumentally useful when you can only judge it in terms of other maps.
Not in terms of other maps, but in terms of its predictive power: something is more useful if it allows you to more accurately predict future observations. The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts:
https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/
https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/
https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/
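The evaluation rule above can be made concrete: score each map purely by how well it predicts observations it has not yet seen. The following toy comparison (with made-up data and two hypothetical rival “maps” of my own devising) is only a sketch of that idea:

```python
def predictive_score(model, past, future):
    """Mean squared error of the model's predictions, computed only
    on future observations the model has not seen."""
    predictions = model(past, len(future))
    return sum((p - o) ** 2 for p, o in zip(predictions, future)) / len(future)

# Two rival "maps" of the same past data.
last_value = lambda past, n: [past[-1]] * n                  # "tomorrow = today"
trend = lambda past, n: [past[-1] + (past[-1] - past[-2]) * (i + 1)
                         for i in range(n)]                  # extrapolate the slope

past, future = [1, 2, 3, 4], [5, 6, 7]
scores = {"last_value": predictive_score(last_value, past, future),
          "trend": predictive_score(trend, past, future)}
best = min(scores, key=scores.get)  # "trend" wins on this toy data
```

On this view, “the trend map is better” means nothing more than: its observed prediction error is smaller.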
The whole point of the therapy thing is that you don’t know how to describe the real world.
But there’s a lot of evidence that it is a useful model, and in fact I have a strong intuition that it is a useful thing, so it isn’t really an example of “giving it away.” (You have to interpret the evidence to see what it supports.)
[EDIT: Some commenters pointed to “The Secret of Pica,” which I should have read as an appropriate description of the field; see here.]
I’m interested in people’s independent opinions, especially their opinions expressed here before I’ve received any feedback.
Please reply to my comment below saying I am aware of no such thing as psychotherapy.
Consider the following research while learning about psychotherapy. It is interesting because I do not have access to the full scientific data on the topic being studied. It is also highly addictive, and has fairly high attrition rates.
Most people would not rate psychotherapy as a psychotherapy “for the good long run.” Some would say that it is dangerous, especially until they are disabled or in a negatively altered state. Most people would agree that it is not. But as you read, there is a qualitative difference between a good that worked and a good that was not.
I know that I’m biased against the former, but this sentence is so politically charged that I hope you will pardon my blurting it out.
This was surprising; in this context I had thought “useful” meant ‘helps one achieve one’s goals’, rather than being short for “useful for making predictions”.
What is the difference? Achieving goals relies on making accurate predictions. See https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
Does achieving goals rely on accurate predictions and nothing else?
Consider reading the link above and the rest of the SSC posts on the topic. In the model discussed there, the brain is nothing but a prediction-error minimization machine, which happens to match my views quite well.
If the brain can’t do anything except make predictions, where making predictions is defined to exclude seeking metaphysical truth, then you have nothing to object to, since it would be literally impossible for anyone to do otherwise than as you recommend.
Since people can engage in metaphysical truth-seeking, either it is a sub-variety of prediction, or the theory that the brain is nothing but a prediction-error minimisation machine is false.
Downvotes for not being Socratic.
If I want to say something about my own subjective experience, I could write that paragraph from a story I’ve been told, and say “Hey, I don’t have to believe any more”, and then leave it at that.
I’m not a fan of the first one. That is, my subjective experience (as opposed to the story I was told) does not have any relevance to my real experience of that scene, so I can’t say for certain which one in particular seems to be the right one.
I also have a very important factual issue with having a similar scene (to an outsider) in which a different person can’t help but help, which I do find confusing; and in that case, if my real feelings about the scene are somewhat similar to the feelings about the scene, the scene will make it seem very awkward.
So if someone can help me with this stuff, I can’t ask to be arrested for letting anyone out on the street, for providing any evidence that they’re “trying to pretend”.
(I’m also assuming that the scene has to be generated by some kind of random generator or some technique which doesn’t produce anything in the original text.)