Binarity isn’t the same as “describes a concept which translates to reality”.
I’ll accept that in general.
When you say meaningful, you (I think) refer to the former, while I refer to the latter.
In this context, I fail to understand what is entailed by that supposed difference.
Put another way: I fail to understand how “X”/“not X” can be a binary attribute of a physical system (a ball, a monitor, whatever) if X doesn’t correspond to a physical attribute, or a “concept which translates to reality”. Can you give me an example of such an X?
Put yet another way: if there’s no translation of X to reality, if there’s no physical attribute to which X corresponds, then it seems to me neither “X” nor “not X” can be true or meaningful. What in the world could they possibly mean? What evidence would compel confidence in one proposition or the other?
Looked at yet a different way...
case 1: I am confident phlogiston doesn’t exist.
I am confident of this because of evidence related to how friction works, how combustion works, because burning things can cause their mass to increase, for various other reasons. (P1) “My stove has phlogiston” is meaningful—for example, I know what it would be to test for its truth or falsehood—and based on other evidence I am confident it’s false. (P2) “My stove has no phlogiston” is meaningful, and based on other evidence I am confident it’s true.
If you remove all my evidence for the truth or falsehood of P1/P2, but somehow preserve my confidence in the meaningfulness of “phlogiston”, you seem to be saying that my P(P1) << P(P2).
case 2: I am confident photons exist. Similarly to P1/P2, I’m confident that P3 (“My lightbulb generates photons”) is true, and P4 (“My lightbulb generates no photons”) is false, and “photon” is meaningful. Remove my evidence for P3/P4 but preserve my confidence in the meaningfulness of “photon”, should my P(P3) << P(P4)? Or should my P(P3) >> P(P4)?
I don’t see any grounds for justifying either. Do you?
Yes. P1 also entails that phlogiston theory is an accurate descriptor of reality—after all, it is saying your stove has phlogiston. P2 does not entail that phlogiston theory is an accurate descriptor of reality. Rejecting that your stove contains phlogiston can be done on the basis of “chances are nothing contains phlogiston; not knowing anything about phlogiston theory, it’s probably not real, duh”, which is why P(P2) >> P(P1).
The same applies to case 2: knowing nothing about photons, you should always go with the proposition (in this case P4) that is also supported by “photons are an imaginary concept with no equivalent in reality”. For P3 to be correct, photons must have some physical equivalent on the territory level, so that anything (e.g. your lightbulb) can produce photons in the first place. For a randomly picked concept (not picked out of a physics textbook), the chances of that are negligible.
Take some random concept, such as “there are 17 kinds of quark; if something contains the 13th quark—the blue quark—we call it ‘blue’”. Then affirming that something is blue entails affirming the 17-kinds-of-quark theory (quite the burden, knowing nothing about its veracity), while saying “it is not blue, i.e. it does not contain the 13th quark, because the 17-kinds-of-quark theory does not describe our reality” is the much-favored default case.
A not-yet-considered, randomly chosen concept (phlogiston, photons) does not have 50-50 odds of accurately describing reality; given no evidence, its odds of doing so are vanishingly small. That translates to
P(“stove contains phlogiston”) being much smaller than P(“stove does not contain phlogiston”). Reason (rephrasing the above argument): rejecting phlogiston theory as an accurate map of the territory strengthens your “stove does not contain phlogiston (… because phlogiston theory is probably not an accurate map, knowing nothing about it)”
even if
P(“stove contains phlogiston given phlogiston theory describes reality”) = P(“stove does not contain phlogiston given phlogiston theory describes reality”) = 0.5
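The decomposition above can be sketched numerically. The following is a minimal Python sketch; the prior `p_theory` is a made-up illustrative number, not anything established in the discussion:

```python
# Total-probability sketch of the argument above, with an assumed prior.
p_theory = 1e-6  # assumed (tiny) prior that phlogiston theory describes reality
p_contains_given_theory = 0.5  # the 50-50 split *if* the theory holds

# If the theory is false, nothing contains phlogiston, so
# P(contains | theory false) = 0 and P(does not contain | theory false) = 1.
p_contains = p_theory * p_contains_given_theory
p_not_contains = (1 - p_theory) + p_theory * (1 - p_contains_given_theory)

assert abs(p_contains + p_not_contains - 1.0) < 1e-12
assert p_not_contains / p_contains > 1_000_000  # P(no phlogiston) >> P(phlogiston)
```

Even with the conditional probabilities tied at 0.5, the negligible prior on the theory itself makes the negative proposition overwhelmingly favored.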
I agree that if “my stove does not contain X” is a meaningful and accurate thing to say even when X has no extension into the real world at all, then P(“my stove does not contain X”) >>> P(“my stove contains X”) for an arbitrarily selected concept X, since most arbitrarily selected concepts have no extension into the real world.
I am not nearly as convinced as you sound that “my stove does not contain X” is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I’m not sure there’s anything more to say about that than we’ve already said.
Also, thinking about it, I suspect I’m overly prone to assuming that X has some extension into the real world when I hear people talking about X.
Consider e.g. “There is not a magical garden gnome living under my floor”, “I don’t emit telepathic brain waves” or “There is no Superman-like alien on our planet”, which to me all are meaningful and accurate, even if they all contain concepts which do not (as far as we know) extend into the real world. Can an atheist not meaningfully say that “I don’t have a soul”?
If I adopted your point of view (i.e. talking about magical garden gnomes living or not living under my floor makes little to no sense either way, since they (probably) cannot exist), then my confidence for or against such a proposition would be equal but very low (no 50% in that case either). Except if, as you say, you’re assigning a very high degree of belief in “concept extends into the real world” as soon as you hear someone talk about it.
“This is a property which I know nothing about but of which I am certain that it can apply to reality” is the only scenario in which you could argue for a belief of 0.5. It is not the scenario of the original post.
The more I think about this, the clearer it becomes that I’m getting my labels confused with my referents and consequently taking it way too much for granted that anything real is being talked about at all.
“Given that some monitors are bamboozled (and no other knowledge), is my monitor bamboozled?” isn’t the same question as “Given that ‘bamboozled’ is a set of phonemes (and no other knowledge), is ‘my monitor is bamboozled’ true?” or even “Given that English speakers sometimes talk about monitors being bamboozled (ibid), is my monitor bamboozled?” and, as you say, neither the original blue-ball case nor the bamboozled-computer case is remotely like the first question.
So, yeah: you’re right, I’m wrong. Thanks for your patience.
I’m glad we found common ground.