It takes more than literal epicycles, but there are any number of ways of complicating a theory to meet the facts.
Of course it is different. Heliocentricism says something different about reality than geocentricism.
Different… how? In what meaningful ways is it different?
Semantically and ontologically. The dictionary meanings of the words heliocentric and geocentric are opposites, so they assert different things about the territory.
Note that this is the default hypothesis. Whatever I just called “dictionary meaning” is what is usually called “meaning” simpliciter.
Attempts to resist this conclusion are based on putting forward non-standard definitions of “meaning”, which need to be argued for, not just assumed.
But this is not the dictionary definition of the geocentric model we are talking about—this one we have twisted to have the exact same predictions as the modern astronomical model. So it no longer asserts the same things about the territory as the original geocentric model—its assertions are now identical to the modern model's. So why should it still hold the same meaning as the original geocentric model?
Dictionaries don’t define complex scientific theories.
Our complicated, bad, wrong, neo-geocentric theory is still a geocentric theory.
Therefore it makes different assertions about the territory than heliocentricism.
So if I copied the encyclopedia definition of the heliocentric model, and changed the title to “geocentric” model, it would be a “bad, wrong, neo-geocentric theory [that] is still a geocentric theory”?
It would be a theory that didn’t work, because you only changed one thing.
I’m not sure I follow—what do you mean by “didn’t work”? Shouldn’t it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?
OK, I continued reading, and in Decoherence is Simple Eliezer makes a good case for Occam’s Razor as more than just a useful tool.
In my own words (:= how I understand it): more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs—but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities should remain higher.
So, if a simple belief A started with −10 decibels and a complicated belief B started with −20 decibels, and we get 15 decibels of evidence supporting both, the posterior credibilities of the beliefs are 5 and −5, so we should favor A. Even if we get another 10 decibels of evidence and the credibility of B becomes 5, the credibility of A is now 15, so we should still favor it. The only way we can favor B is if we get enough evidence that supports B but not A.
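As a minimal sketch of that decibel arithmetic (the helper name and the specific numbers are just illustrative):

```python
def decibels_to_probability(db):
    """Convert log-odds measured in decibels back into a probability."""
    odds = 10 ** (db / 10)     # 10 decibels = one factor of 10 in the odds
    return odds / (1 + odds)

prior_A = -10   # simple belief: prior credibility in decibels
prior_B = -20   # complicated belief: lower prior credibility

shared_evidence = 15                      # evidence supporting A and B equally
posterior_A = prior_A + shared_evidence   # 5 dB  -> ~0.76 probability
posterior_B = prior_B + shared_evidence   # -5 dB -> ~0.24 probability

print(posterior_A, round(decibels_to_probability(posterior_A), 2))
print(posterior_B, round(decibels_to_probability(posterior_B), 2))
```

Shared evidence shifts both credibilities by the same amount, so the 10-decibel gap between A and B never closes; only evidence that discriminates between them can.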
Of course—this doesn’t mean that A is true and B is false, only that we assign a higher probability to A.
So, if we go back to astronomy—our neo-geocentric model has a higher burden of proof than the modern model, because it contains additional mysterious forces. We prove gravity and relativity and work out how centrifugal forces work, and that's (more or less) enough for the modern model, and the exact same evidence also supports the neo-geocentric model—but it is not enough for it, because we also need evidence for the new forces we came up with.
Do note, though, that the claim that “there is no mysterious force” is simpler than “there is a mysterious force” is taken for granted here...
If you take a heliocentric theory, and substitute “geocentric” for “heliocentric”, you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.
What does “true” mean when you use it?
A geocentric theory can match any observation, providing you complicate it endlessly.
This discussion is about your claim that two theories are the same iff their empirical predictions are the same. But if that is the case, why does complexity matter?
EY is a realist and a correspondence theorist. He thinks that “true” means “corresponds to reality”, and he thinks that complexity matters, because, all other things being equal, a more complex theory is less likely to correspond than a simpler one. So his support of Occam’s Razor, his belief in correspondence-truth, and his realism all hang together.
But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, ie their predictive power. You are denying that they have any semantic (non-empirical) content, and, as an implication of that, you are claiming that they “mean” or “say” nothing about the territory. So why would you care that one theory is more complex than another, so long as its predictions are accurate?
I only change the title, I don't change anything the theory says. So its predictions are still the same as the heliocentric model's.
The semantics are still very important as a compact representation of predictions. The predictions are infinite—the belief will have to give a prediction for every possible scenario, and scenariospace is infinite. Even if the belief is only relevant for a finite subset of scenarios, it’d still have to say “I don’t care about this scenario” an infinite number of times.
Actually, it would make more sense to talk about belief systems than individual beliefs, where the belief system is simply the probability function P. But we can still talk about single beliefs if we remember that they need to be connected to a belief system in order to give predictions, and that when we compare two competing beliefs we are actually comparing two belief systems where the only difference is that one has belief A and the other has belief B.
Human minds, being finite, cannot contain infinite representations—we need finite representations for our beliefs. And that's where the semantics come in—they are compact rules that can be used to generate predictions for every given scenario. They are also important because the number of predictions we can test is finite. So even if we could comprehend the infinite prediction field over scenariospace, we wouldn't be able to confirm a belief based on a finite number of experiments.
Also, with that kind of representation, we can’t even come up with the full representation of the belief. Consider a limited scenario space with just three scenarios X, Y and Z. We know what happened in X and Y, and write a belief based on it. But what would that belief say about Z? If the belief is represented as just its predictions, without connections between distinct predictions, how can we fill up the predictions table?
The semantics help us with that because they have fewer degrees of freedom. With N degrees of freedom we can match any K ≤ N observations, so we need M > N observations to even start counting them as evidence. I'm not sure how to come up with a formula for the number of degrees of freedom a semantic representation of a belief has—this depends not only on the numerical constants but also on the semantics—but some properties of it are obvious:
The prediction table representation has infinite degrees of freedom, since it can give a prediction for each scenario independently from the predictions given to the other scenarios.
If a semantic representation is strictly simpler than another semantic representation—that is, you can go from the simple one to the complex one just by adding rules—then the simpler one has fewer degrees of freedom than the complicated one. This is because the complicated one has all the degrees of freedom the simpler one had, plus more degrees of freedom from the new rules (just adding a rule adds some degrees of freedom, even if the rule itself does not contain anything that can be tweaked).
So the simplicity of the semantic representation is meaningful because it means fewer degrees of freedom and thus requires less evidence, but it does not make the belief “truer”—only the infinite prediction table determines how true the belief is.
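A toy sketch of the degrees-of-freedom point (the scenarios, sizes, and “beliefs” below are made up purely for illustration):

```python
# A prediction-table "belief" has one free entry per scenario, so it can fit
# any observations; a compact rule has far fewer degrees of freedom, so
# fitting the same observations actually constrains it.

observations = {"X": 4, "Y": 9}   # outcomes we have already seen
unseen = "Z"                      # a scenario we have not tested yet

# Table representation: just memorise the observations. It says nothing
# about Z until we add yet another independent entry for it.
table_belief = dict(observations)
print(table_belief.get(unseen))   # None -- no prediction for Z

# Rule representation: "the outcome is the square of the scenario's size".
sizes = {"X": 2, "Y": 3, "Z": 5}
def rule_belief(scenario):
    return sizes[scenario] ** 2

# The rule matches the old observations *and* commits itself about Z.
assert all(rule_belief(s) == o for s, o in observations.items())
print(rule_belief(unseen))        # 25 -- a testable prediction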
Maybe you do, but it’s my thought experiment!
That isn’t what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.
Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are the canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything they say that does not affect the predictions is meaningless. At least, meaningless in the territory.
The semantics of the theory of gravity say that the force that pulls objects together over long range based on their mass is called “gravity”. If you call that force “travigy” instead, it will cause no difference in the predictions. This is because the name of the force is a property of the map, not the territory—if it were meaningful in the territory, it should have had an impact on the predictions.
And I claim that the “center of the universe” is similar—it has no meaning in the territory. The universe has no “center”—you can think of “center of mass” or “center of bounding volume” of a group of objects, but there is no single point you can naturally call “the center”. There can be good or bad choices for the center, but not right or wrong choices—the center is a property of the map, not the territory.
If it had any effect at all on the territory, it should have somehow affected the predictions.
How can you say something, but say something meaningless?
What does not saying anything (meaningful) about the territory buy you? What's the advantage?
Realists are realists because they place a terminal value on knowing what the territory is, above and beyond making predictions. They can say what the advantage is … to them. If you don't personally value knowing what the territory is, that need not apply to others.
Travigy means nothing, or it means gravity. Either way, it doesn't affect my argument.
You don’t seem to understand what semantics is. It’s not just a matter of spelling changes or textual changes. A semantic change doesn’t mean that two strings fail strcmp(), it means that terms have been substituted with meaningful terms that mean something different.
“There is a centre of the universe” is considered false in modern cosmology. So there is no real thing corresponding to the meaning of the string “centre of the universe”. Which is to say that the string “centre of the universe” has a meaning, unlike the string “flibble na dar wobble”.
The territory can be in different ways that produce the same predictions.
The debate is apparently about the meaning of ‘different’. Someone might define ‘different’ as ‘predicting different observations’ and another as ‘different ontological content’.
Suppose there is a box in front of you which contains either a $20 or a $100 note. However, you have very strong reasons to believe that the content of the box shall remain unknown to you forever. Is the question “Is there a $20 or $100 note in the box?” meaningful? Is the belief in the presence of a $20 note different from the belief in the presence of a $100 note? That is essentially similar to the problem of identical models.
If the content of the box is unknown forever, that means that it doesn’t matter what’s inside it because we can’t get it out.
Whether something is empirically unknowable forever is itself unknowable … it’s an acute form of the problem of induction.
But that isn’t quite the same as saying that statements about what’s inside are meaningless. A statement can be meaningful without mattering. And you have to be able to interpret the meaning, in the ordinary sense, in order to be able to notice that it doesn’t matter.
If a universe where the statement is true is indistinguishable from a universe where the statement is false, then the statement is meaningless. And if the set of universes where statement A is true is identical to the set of universes where statement B is true, then statement A and statement B have the same meaning whether or not you can “algebraically” convert one to the other.
They’re not, because A and B assert different things.
If A and B assert different things, we can test for these differences. Maybe not with current technology, but in principle. They yield different predictions and are therefore different beliefs.
You keep assuming verificationism in order to prove verificationism.
They assert different things because they mean different things, because the dictionary meanings are different.
In the thought experiment we are considering, the contents of the box can never be tested. Nonetheless $20 and $100 mean different things.
The Quotation is not the Referent. Just because the text describing them is different doesn’t mean the assertions themselves are different.
Eliezer identified evolution with the blind idiot god Azathoth. Does this make evolution a religious Lovecraftian concept?
Scott Alexander identified the Canaanite god Moloch with the principle that forces you to sacrifice your values for the competition. Does this make that principle an actual god? Should we pray to it?
I’d argue not. Even though Eliezer and Scott brought the gods in for the theatrical and rhetorical impact, evolution is the same old evolution and competition is the same old competition. Describing the idea differently does not automatically make it a different idea—just like describing f(x) = (x+1)² as g(x) = x² + 2x + 1 does not make it a different function.
In the case of mathematical functions we have a simple equivalence law: f ≡ g ⟺ ∀x: f(x) = g(x). I’d argue we can have a similar equivalence law for beliefs: A ≡ B ⟺ ∀X: P(X|A) = P(X|B), where A and B are beliefs and X is an observation.
This condition is obviously necessary: if A ≡ B even though ∃Y: P(Y|A) ≠ P(Y|B), and we find that P(Y) = P(Y|A), that would support A and therefore also B (because they are equivalent)—which means an observation that does not match the belief’s predictions supports it.
Is it sufficient? My argument for its sufficiency is not as analytical as the one for its necessity, so this may be the weak point of my claim, but here it goes: if A ≢ B even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually—how accurate it is). This undermines the core idea of both science and Bayesianism that beliefs should be judged by empirical evidence. Now, maybe this concept is wrong—but if it is, Occam’s Razor itself becomes meaningless, because if the explanation does not need to match the evidence, then the simplest explanation can always be “Magic!”.
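The function analogy and the proposed equivalence law, as a small sketch (the finite observation set is of course a toy stand-in for the infinite scenario space):

```python
# Extensional equivalence: f and g are written differently but agree everywhere
# we check.
f = lambda x: (x + 1) ** 2
g = lambda x: x ** 2 + 2 * x + 1
assert all(f(x) == g(x) for x in range(-100, 101))

# The analogous test for beliefs: A == B iff P(X|A) == P(X|B) for every
# observation X. Here each belief is just a table of conditional probabilities
# over a toy, finite set of observations.
P_given_A = {"obs1": 0.7, "obs2": 0.2, "obs3": 0.1}
P_given_B = {"obs1": 0.7, "obs2": 0.2, "obs3": 0.1}

def equivalent(bel1, bel2, tol=1e-9):
    """Beliefs are equivalent iff they assign the same probability
    to every observation."""
    return bel1.keys() == bel2.keys() and all(
        abs(bel1[x] - bel2[x]) <= tol for x in bel1
    )

print(equivalent(P_given_A, P_given_B))   # True
```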
...because exact synonymy is possible. Exact synonymy is also rare, and it gets less probable the longer the text is.
You need to be clear whether you are claiming that two theories are the same because their empirical content is the same, or because their semantic content is the same.
Those are different...computationally. They would take a different amount of time to execute.
Pure maths is exceptional in its lack of semantics.
F = ma
and
P = IV
...are identical mathematically, but have different semantics in physics.
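The computational point above can be made concrete: the two ways of writing the same function perform different numbers of arithmetic operations (a rough count using Python's ast module):

```python
import ast

def count_ops(expr):
    """Count the binary arithmetic operations in a Python expression."""
    return sum(isinstance(node, ast.BinOp) for node in ast.walk(ast.parse(expr)))

print(count_ops("(x + 1) ** 2"))        # 2 operations
print(count_ops("x ** 2 + 2 * x + 1"))  # 4 operations
```

Both expressions compute the same value for every x, yet as computations they differ—much as F = ma and P = IV share a mathematical form while attaching it to different physical quantities.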
If two theories are identical empirically and ontologically, then some mysterious third thing would be needed to explain any difference. But that is not what we are talking about. What we are discussing is your claim that empirical difference is the only possible difference, equivalently that the empirical content of a theory is all its content.
Then the answer to “what further difference could there be” is “what the theories say about reality”.
I’m not sure you realize how strong a statement “the contents of the box can never be tested” is. It means even if we crack open the box we won’t be able to read the writing on the bill. It means that even if we somehow tracked all the $20 and all the $100 bills that were ever printed, their current location, and whether or not they were destroyed, we won’t be able to find one which is missing and deduce that it is inside the box. It means that even if we had a powerful atom-level scanner that can accurately map all the atoms in a given volume and put the box inside it, it won’t be able to detect if the atoms are arranged like a $20 bill or like a $100 bill. It means that even if a superintelligent AI capable of time reversal calculations tried to simulate a time reversal, it wouldn’t be able to determine the bill’s value.
It means that the amount printed on that bill has no effect on the universe, and was never affected by the universe.
Can you think of a scenario where that happens, but the value of the dollar bill is still meaningful? Because I can easily describe a scenario where it isn’t:
Dollar bills were originally “promises” for gold. They were signed by the Treasurer and the Secretary of the Treasury because the Treasury is the one responsible for fulfilling that promise. Even after the gold standard was abandoned, the principle that the Treasury is the one casting the value into the dollar bills remains. This is why the bills are still signed by the Treasury’s representatives.
So, the scenario I have in mind is that the bill inside the box is a special bill—instead of a fixed amount, it says the Treasurer will decide if it is worth 20 or 100 dollars. The bill is still signed by the Treasurer and the Secretary of the Treasury, and thus has the same authority as regular bills. And, in order to fulfill the condition that the value of the bill is never known—the Treasurer is committed to never deciding the worth of that bill.
Is it still meaningful to ask, in this scenario, if the bill is worth $20 or $100?
I can understand that your revised scenario is unverifiable, by understanding the words you wrote, i.e. by grasping their meaning. As usual, the claim that some things are unverifiable is parasitic on the existence of a kind of meaning that has nothing to do with verifiability.