But this is not the dictionary definition of the geocentric model we are talking about—we have twisted it to have exactly the same predictions as the modern astronomical model. So it no longer asserts the same things about the territory as the original geocentric model—its assertions are now identical to the modern model's. So why should it still hold the same meaning as the original geocentric model?
Dictionaries don’t define complex scientific theories.
Our complicated, bad, wrong, neo-geocentric theory is still a geocentric theory.
Therefore it makes different assertions about the territory than heliocentrism does.
So if I copied the encyclopedia definition of the heliocentric model, and changed the title to “geocentric” model, it would be a “bad, wrong, neo-geocentric theory [that] is still a geocentric theory”?
It would be a theory that didn’t work, because you only changed one thing.
I’m not sure I follow—what do you mean by “didn’t work”? Shouldn’t it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?
OK, I continued reading, and in Decoherence is Simple Eliezer makes a good case for Occam’s Razor as more than just a useful tool.
In my own words (:= how I understand it): more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs—but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities remain higher.
So, if a simple belief A started with −10 decibels and a complicated belief B started with −20 decibels, and we get 15 decibels of evidence supporting both, the posterior credibilities of the beliefs are 5 and −5, so we should favor A. Even if we get another 10 decibels of evidence and the credibility of B becomes 5, the credibility of A is now 15, so we should still favor it. The only way we can favor B is if we get enough evidence that supports B but not A.
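To put actual probabilities behind those numbers, here is a toy sketch (my own illustration, assuming the usual convention that credibility in decibels is ten times the base-10 logarithm of the odds):

def db_to_prob(db):
    # Credibility in decibels is 10 * log10 of the odds; convert back to a probability.
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_A, prior_B = -10, -20   # simple belief A, complicated belief B
evidence = 15                 # decibels of evidence that supports both equally

posterior_A = prior_A + evidence   # 5 dB, about 0.76 probability
posterior_B = prior_B + evidence   # -5 dB, about 0.24 probability

print(db_to_prob(posterior_A), db_to_prob(posterior_B))
# Shared evidence shifts both by the same amount, so A keeps its 10 dB head start.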
Of course—this doesn’t mean that A is true and B is false, only that we assign a higher probability to A.
So, if we go back to astronomy—our neo-geocentric model has a higher burden of proof than the modern model, because it contains additional mysterious forces. We prove gravity and relativity and work out how centrifugal forces behave, and that's (more or less) enough for the modern model; the exact same evidence also supports the neo-geocentric model—but it is not enough for it, because we also need evidence for the new forces we came up with.
Do note, though, that it is taken for granted here that the claim “there is no mysterious force” is simpler than “there is a mysterious force”...
If you take a heliocentric theory, and substitute “geocentric” for “heliocentric”, you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.
What does “true” mean when you use it?
A geocentric theory can match any observation, providing you complicate it endlessly.
This discussion is about your claim that two theories are the same iff their empirical predictions are the same. But if that is the case, why does complexity matter?
EY is a realist and a correspondence theorist. He thinks that “true” means “corresponds to reality”, and he thinks that complexity matters, because, all other things being equal, a more complex theory is less likely to correspond than a simpler one. So his support of Occam’s Razor, his belief in correspondence-truth, and his realism all hang together.
But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, i.e. their predictive power. You are denying that they have any semantic (non-empirical) content and, as an implication of that, holding that they “mean” or “say” nothing about the territory. So why would you care that one theory is more complex than another, so long as its predictions are accurate?
I only change the title; I don't change anything the theory says. So its predictions are still the same as the heliocentric model's.
The semantics are still very important as a compact representation of predictions. The predictions are infinite—the belief will have to give a prediction for every possible scenario, and scenariospace is infinite. Even if the belief is only relevant for a finite subset of scenarios, it’d still have to say “I don’t care about this scenario” an infinite number of times.
Actually, it would make more sense to talk about belief systems than individual beliefs, where the belief system is simply the probability function P. But we can still talk about single beliefs if we remember that they need to be connected to a belief system in order to give predictions, and that when we compare two competing beliefs we are actually comparing two belief systems where the only difference is that one has belief A and the other has belief B.
Human minds, being finite, cannot contain infinite representations—we need finite representations for our beliefs. And that's where the semantics come in—they are compact rules that can be used to generate predictions for every given scenario. They are also important because the number of predictions we can test is finite as well. So even if we could comprehend the infinite prediction field over scenariospace, we wouldn't be able to confirm a belief based on a finite number of experiments.
Also, with that kind of representation, we can't even come up with the full representation of the belief. Consider a limited scenario space with just three scenarios X, Y and Z. We know what happened in X and Y, and write a belief based on that. But what would that belief say about Z? If the belief is represented as just its predictions, without connections between distinct predictions, how can we fill in the prediction table?
The semantics help us with that because they have fewer degrees of freedom. With N degrees of freedom we can match any K ≤ N observations, so we need M > N observations to even start counting them as evidence. I'm not sure how to come up with a formula for the number of degrees of freedom a semantic representation of a belief has—it depends not only on the numerical constants but also on the semantics—but some of its properties are obvious:
The prediction table representation has infinite degrees of freedom, since it can give a prediction for each scenario independently of the predictions given for the other scenarios.
If a semantic representation is strictly simpler than another semantic representation—that is, you can go from the simple one to the complex one just by adding rules—then the simpler one has fewer degrees of freedom than the complicated one. This is because the complicated one has all the degrees of freedom the simpler one had, plus more degrees of freedom from the new rules (just adding a rule contributes some degrees of freedom, even if the rule itself does not contain anything that can be tweaked).
So the simplicity of the semantic representation is meaningful because it means fewer degrees of freedom and thus requires less evidence, but it does not make the belief “truer”—only the infinite prediction table determines how true the belief is.
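To make the degrees-of-freedom point concrete, here is a toy sketch (mine, and only an illustration): a cubic has four free coefficients, so it will pass through any four observations exactly, which is why that fit counts for nothing as evidence, while a two-coefficient line can only fit data that really are close to linear.

import numpy as np

rng = np.random.default_rng(0)
x = np.arange(4.0)                    # four scenarios
observations = rng.uniform(-5, 5, 4)  # whatever happened to be observed in them

# Four free coefficients: a cubic passes through any four points exactly,
# so the perfect fit tells us nothing about which belief is right.
cubic = np.polyfit(x, observations, deg=3)
print(np.polyval(cubic, x) - observations)   # residuals ~ 0, guaranteed in advance

# Two free coefficients: a straight line only fits if the observations really
# are (close to) linear, so a good fit here would actually count as evidence.
line = np.polyfit(x, observations, deg=1)
print(np.polyval(line, x) - observations)    # generally nonzero residuals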
Maybe you do, but it’s my thought experiment!
That isn't what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.
Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are the canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything they say that does not affect the predictions is meaningless. At least, meaningless in the territory.
The semantics of gravity theory say that the force that pulls objects together over long ranges, based on their mass, is called “gravity”. If you call that force “travigy” instead, it will make no difference to the predictions. This is because the name of the force is a property of the map, not the territory—if it were meaningful in the territory, it would have had some impact on the predictions.
And I claim that the “center of the universe” is similar—it has no meaning in the territory. The universe has no “center”—you can think of “center of mass” or “center of bounding volume” of a group of objects, but there is no single point you can naturally call “the center”. There can be good or bad choices for the center, but not right or wrong choices—the center is a property of the map, not the territory.
If it had any effect at all on the territory, it would somehow have affected the predictions.
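To see the renaming point in code (again, just my own sketch; the Newtonian formula stands in for the full theory): swap the label and nothing that faces the territory changes.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """The force pulling two masses together (Newtonian approximation)."""
    return G * m1 * m2 / r**2

def travigy(m1, m2, r):
    """Exactly the same rule under a different name on the map."""
    return G * m1 * m2 / r**2

# Roughly Earth and Moon; every prediction is identical under either label.
print(gravity(5.97e24, 7.35e22, 3.84e8) == travigy(5.97e24, 7.35e22, 3.84e8))  # True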
How can you say something, but say something meaningless?
What does not saying anything (meaningful) about the territory buy you? What's the advantage?
Realists are realists because they place a terminal value on knowing what the territory is, above and beyond making predictions. They can say what the advantage is … to them. If you don't personally value knowing what the territory is, that need not apply to others.
Travigy means nothing, or it means gravity. Either way, it doesn't affect my argument.
You don't seem to understand what semantics is. It's not just a matter of spelling changes or textual changes. A semantic change doesn't mean that two strings fail strcmp(); it means that terms have been substituted with meaningful terms that mean something different.
“There is a centre of the universe” is considered false in modern cosmology. So there is no real thing corresponding to the meaning of the string “centre of the universe”. Which is to say that the string “centre of the universe” has a meaning, unlike the string “flibble na dar wobble”.
The territory can be different ways that all produce the same predictions.