The Simple Truth shows that bad models are bad. It is not an argument for or against a specific concept of truth, despite what Eliezer might have intended by it.
What makes a bad model “bad”, other than that it does not correspond to reality?
The predictions it makes are incorrect.
OK, I suspect we are using different definitions of ‘correspond’. ‘Correspond’ means “a close similarity; match or agree almost exactly.” In this context I have always interpreted ‘correspond to reality’, when applied to a model, as meaning that the model’s predictions closely match, or agree almost exactly with, observation. That is to say, a model which corresponds to reality correctly predicts reality, by definition.
If my model says the sky should be blue, and I go out and look and the sky is blue, my model corresponds to reality. If my model says the sky should be green, and I go out and look and discover the sky to be blue, then my model does not correspond to reality. It seems to me that a model which corresponds to reality and yet is incorrect (does not match the world) is a logical impossibility.
Therefore, I presume you must be using some other definition of ‘correspond’. What might that be?
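(To pin down the sense of “correspond” at work here, a toy sketch in Python; the model, the observations, and the function names are all invented for illustration, and read correspondence purely as “predictions match observation”.)

```python
# A toy reading of "corresponds to reality" as "predictions match observation".
# The model and the observations below are made up for illustration only.
observations = {"sky_colour": "blue", "grass_colour": "green"}

def toy_model(question):
    """A hypothetical model of the world; returns its prediction for a question."""
    predictions = {"sky_colour": "blue", "grass_colour": "green"}
    return predictions.get(question)

def corresponds(model, observed):
    """Predictive correspondence: every prediction matches what is actually seen."""
    return all(model(q) == seen for q, seen in observed.items())

print(corresponds(toy_model, observations))  # True: the model "corresponds" in this sense
```

On this reading, any model that passes the check “corresponds”, whatever machinery it posits internally.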
Your wording:
“If my model says the sky should be blue, and I go out and look and the sky is blue, my model corresponds to reality.”
My wording:
“If my model says the sky should be blue, and I go out and look and the sky is blue, my model is accurate/useful, etc.”
I.e. I make no claims about reality beyond it occasionally being a useful metamodel.
“It seems to me that a model which corresponds to reality and yet is incorrect (does not match the world) is a logical impossibility.”
In the dualist reality+models ontology, yes. If you don’t make any ontological assumptions about anything “existing” beyond models, the above statement is not impossible; it is meaningless, as it uses undefined terms.
Is there any actionable difference between the two viewpoints?
Yes. For example, you don’t bother arguing about untestables. Is MWI true? Who cares? Unless you can construct a testable prediction out of this model, it is not even a meaningful question. What about Tegmark 4? Same thing.
You may care about different worlds to different extents, with the “truth” of a possible world being the degree of caring. In that case, this notion of truth may be useful for evaluating (the relative weights of) the consequences of decisions, which may differ between worlds even if the worlds can’t be distinguished based on observation.
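To make that concrete, a rough sketch with made-up payoffs and caring-weights (nothing anyone here has committed to): two worlds no observation can tell apart, weighted by how much we care about each, can still rank actions differently.

```python
# A toy decision problem: W1 and W2 are observationally indistinguishable worlds,
# the payoffs are invented, and "truth" is read as the degree of caring (a weight).
worlds = ("W1", "W2")
payoffs = {
    "act_A": {"W1": 10.0, "W2": 0.0},
    "act_B": {"W1": 4.0, "W2": 6.0},
}

def value(action, weights):
    """Caring-weighted sum of the action's consequences across worlds."""
    return sum(weights[w] * payoffs[action][w] for w in worlds)

for weights in ({"W1": 0.9, "W2": 0.1}, {"W1": 0.3, "W2": 0.7}):
    best = max(payoffs, key=lambda a: value(a, weights))
    print(weights, "->", best)
# Caring mostly about W1 favours act_A; caring mostly about W2 favours act_B,
# even though no observation distinguishes the two worlds.
```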
And that’s another “actionable difference”. I care about possible/counterfactual worlds only to the degree that they can become actual. I don’t worry about potential multiple copies of me in the infinite universe (“what if they are me?”) until there is a measurable effect associated with them.
Heh, my intuition is the opposite. What I felt but so far refrained from saying today was “Stop arguing about whether reality exists or not! It doesn’t change anything.” It seems we agree on that, at least.
“I.e. I make no claims about reality beyond it occasionally being a useful metamodel.”
It’s really about the accuracy of your model in terms of the predictions it makes, whether or not we can find any correspondence between those hidden variables and other observables?
Is that what you’re getting at?
I don’t understand what you mean by hidden variables in this context.
“If my model says the sky should be blue, and I go out and look and the sky is blue, my model corresponds to reality.”
It corresponds to appearance. Models posit causal mechanisms, and the wrong mechanism can predict the right observations.
In general, the correspondence theory of truth means that a proposition is true when reality, or some chunk of reality, is the way the proposition says it is. Translating that as directly as possible into physical science, a theory would be true if its posits, the things it claims exist, actually exist. For instance, the phlogiston theory is true if something with the properties of phlogiston exists. The important thing is that correspondence in that sense, let’s say “correspondence of ontological content”, is not the same as predictive accuracy. To be sure, a theory that is not empirically predictive is rejected as being ontologically inaccurate as well... but that does not mean empirical predictiveness is a sufficient criterion of ontological accuracy... we cannot say that a theory tells it like it is just because it allows us to predict observations.
For one thing, instrumentalists and others who interpret science non-realistically still agree that theories are rendered true or false by evidence.
Another way of making this point is that basically wrong theories can be very accurate. For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles … although it is false, in the sense of lacking ontological accuracy, since epicycles don’t exist.
Yet another way is to notice that theories with different ontologies can make equivalent predictions, like wave-particle duality in physics.
The fourth way is based on sceptical hypotheses, such as the Brain in a Vat and the Matrix. Sceptical hypotheses can be rejected, for instance by appeal to Occam’s Razor, but they cannot be refuted empirically, since any piece of empirical evidence is subject to sceptical interpretation. Occam’s Razor is not empirical.
Science conceives of perception as based in causation, and causation as consisting of chains of causes and effects, with only the ultimate effect, the sensation evoked in the observer, being directly accessible to the observer. The cause of the sensation, the other end of the causal chain, the thing observed, has to be inferred from the sensation, the ultimate effect, and it cannot be inferred uniquely, since, in general, more than one cause can produce the same effect. All illusions, from holograms to stage conjuring, work by producing the effect, the percept, in an unexpected way. A BIV or Matrix observer would assume that the percept of a horse is caused by a horse, but it would actually be caused by a mad scientist pressing buttons.
A BIV or Matrix observer could come up with science that works, that is useful for many purposes, so long as their virtual reality has some stable rules. They could infer that dropping an (apparent) brick onto their (apparent) foot would cause pain, and so on. It would be like the player of a computer game being skilled at the game. But the workability of their science is limited to relating apparent causes to apparent effects, not to grounding causes and effects in ultimate reality.
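To make the “wrong mechanism, right observations” point (and the equivalent-predictions point) concrete, here is a small simulation sketch; the two mechanisms and their distributions are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Mechanism A: the observable Y is produced by a hidden cause X plus noise.
x = rng.normal(0.0, 1.0, n)
y_from_hidden_cause = x + rng.normal(0.0, 1.0, n)

# Mechanism B: no hidden cause at all; Y is simply drawn from a wider distribution.
y_no_hidden_cause = rng.normal(0.0, np.sqrt(2.0), n)

# Only Y is ever observed, and both mechanisms predict the same statistics for it.
print(f"mean: A={y_from_hidden_cause.mean():+.3f}  B={y_no_hidden_cause.mean():+.3f}")
print(f"std:  A={y_from_hidden_cause.std():.3f}  B={y_no_hidden_cause.std():.3f}")
# Observation of Y alone cannot say which causal story is the right one.
```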
“What makes a bad model ‘bad’, other than that it does not correspond to reality?”
For example, uselessness.
Please forgive my continuation of the Socratic method, but in what ways can a model be useless that differ from it not corresponding to reality?
Recall an old joke:
A man flying in a hot air balloon realizes he is lost. He reduces his altitude and spots a man in a field down below. He lowers the balloon further and shouts, “Excuse me, can you tell me where I am?” The man below says, “Yes, you’re in a hot air balloon, about 30 feet above this field.” “You must be a mathematician,” says the balloonist. “I am. How did you know?” “Everything you told me is technically correct, but it’s of no use to anyone.”
“Very clever! And you must be a manager,” says the guy in the field. “Amazing! How did you work it out?” asks the balloonist. “Well, there you are in your elevated position generating hot air, you have no idea where you are or what you’re doing, but somehow you’ve decided it’s my problem.”
Yep. Moral of the story: never let the twain meet :-)
It’s a funny joke but beside the point. Knowing that he is in a balloon about 30 feet above a field is actually very useful. It’s just useless to tell him what he clearly already knows.
Sorry, I’m dense. What does this have to do with anything? It is true that the balloonist is in a hot air balloon 30 feet above a field. These are correct facts. Are you arguing for a concept of truth under which “Yes, you’re in a hot air balloon, about 30 feet above this field” would not qualify as a true statement?
I think Lumifer is suggesting that a model can correspond accurately to reality (e.g., representing the fact that X is in a hot air balloon 30 feet above Y’s current location) but nonetheless be useless (e.g., because all X wants to know is how to get to Vladivostok, and knowing he’s in a balloon 30 feet above Y doesn’t help with that). And that this is an example of how a model can be “bad” other than by inaccurate correspondence with reality, which is what you were asking for a few comments upthread.
Indeed they are. That is, actually, the point.
Recall your own question (emphasis mine): “in what ways can a model be useless that differ from it not corresponding to reality?”
A model can be useful without corresponding, though:
“The Ptolemaic system can be made as accurate as you want for generating predictions.”
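For what it’s worth, the “as accurate as you want” claim can be sketched numerically. The path below is an invented non-uniform circular motion rather than real planetary data; the point is only that keeping more Fourier terms, i.e. stacking more uniform circles on circles, drives the prediction error down even though circles-on-circles is not how the motion is generated.

```python
import numpy as np

# "Epicycles" as a truncated Fourier series. The "true" path is hypothetical:
# a point moving non-uniformly around a unit circle, whose Fourier spectrum is
# infinite, so no finite set of epicycles reproduces it exactly.
N_SAMPLES = 4096
t = np.linspace(0.0, 2.0 * np.pi, N_SAMPLES, endpoint=False)
true_path = np.exp(1j * (t + 0.8 * np.sin(t)))  # non-uniform circular motion

# Fourier coefficients: each term c_k * exp(i*k*t) is one uniform circular
# motion, i.e. one epicycle riding on the previous ones.
coeffs = np.fft.fft(true_path) / N_SAMPLES
freqs = np.fft.fftfreq(N_SAMPLES, d=1.0 / N_SAMPLES)  # signed integer frequencies k

def epicycle_model(n_epicycles):
    """Rebuild the path keeping only the n largest-amplitude epicycles."""
    keep = np.argsort(np.abs(coeffs))[::-1][:n_epicycles]
    model = np.zeros_like(true_path)
    for k in keep:
        model += coeffs[k] * np.exp(1j * freqs[k] * t)
    return model

for n in (1, 2, 4, 8, 16):
    err = np.max(np.abs(epicycle_model(n) - true_path))
    print(f"{n:2d} epicycles -> max prediction error {err:.2e}")
# The error shrinks as epicycles are added, even though circles-on-circles is
# not the mechanism that actually generates the motion.
```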