Given those assumptions, a universal prior is appropriate… 50% chance that “My ball is blue” is true, 50% chance that it’s false.
You and Kindly both? Very surprising.
Consider yourself as B9, reading on the internet about some new and independent property of items, “bamboozledness”. Should you now believe that P(“My monitor is bamboozled”) = 0.5? That it is as likely that your monitor is bamboozled as that it’s not bamboozled?
Suppose I offered you a bet of 100 big currency units: if it turns out your monitor is bamboozled, you’d win triple! Or 50x! Wouldn’t you accept, based on your “well, 50% chance of winning” assessment?
Am I bamboozled? Are you bamboozled?
Notice that B9 has even less reason to believe in colors than you in the example above—it hasn’t even read about them on the internet.
Instead of assigning 50-50 odds, you’d have to take the part of the probability space which represents “my belief in models other than my main model”, identify the minuscule prior for the specific model containing “colors” or “bamboozledness”, then calculate, assuming that model, the odds of blue versus not-blue, and then weigh back in the uncertainty about such an arbitrary model being true in lieu of your standard model.
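To make that structure concrete, here is a minimal sketch with made-up numbers; the values are arbitrary placeholders, and only the shape of the calculation matters:

```python
# Illustrative only: made-up numbers showing the structure of the decomposition above.
p_other_models = 0.01      # mass reserved for "models other than my main model"
p_color_model = 1e-6       # minuscule prior on the specific alternative model containing "colors"
p_blue_given_colors = 0.5  # within that model, blue vs. not-blue, knowing nothing else

p_ball_is_blue = p_other_models * p_color_model * p_blue_given_colors
# On this reading, "not blue" also covers every world in which the color model is simply wrong:
p_ball_is_not_blue = 1 - p_ball_is_blue

print(p_ball_is_blue)      # 5e-09, nowhere near 0.5
print(p_ball_is_not_blue)  # ~1.0
```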
That it is as likely that your monitor is bamboozled as that it’s not bamboozled?
Given the following propositions:
(P1) “My monitor is bamboozled.”
(P2) “My monitor is not bamboozled.”
(P3) “‘My monitor is bamboozled’ is not the sort of statement that has a binary truth value; monitors are neither bamboozled nor non-bamboozled.”
...and knowing nothing at all about bamboozledness, never even having heard the word before, it seems I ought to assign high probability to P3 (since it’s true of most statements that it’s possible to construct) and consequently low probabilities to P1 and P2.
But when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence in P3 seems to go up [EDIT: I mean down] pretty quickly, based on my experience with people talking about stuff. (Which among other things suggests that my prior for P3 wasn’t all that low [EDIT: I mean high].)
Having become convinced of NOT(P3) (despite still knowing nothing much about bamboozledness other than it’s the sort of thing people talk about on the Internet), if I have very low confidence in P1, I have very high confidence in P2. If I have very low confidence in P2, I have very high confidence in P1. Very high confidence in either proposition seems unjustifiable… indeed, a lower probability for P1 than P2 or vice-versa seems unjustifiable… so I conclude 50%.
If I’m wrong to do so, it seems I’m wrong to reduce my confidence in P3 in the first place. Which I guess is possible, though I do seem to do it quite naturally. But given NOT(P3), I genuinely don’t see why I should believe P(P2) > P(P1).
Suppose I offered you a bet of 100 big currency units: if it turns out your monitor is bamboozled, you’d win triple! Or 50x! Wouldn’t you accept, based on your “well, 50% chance of winning” assessment?
Just to be clear: you’re offering me (300BCUs if P1, −100BCUs if P2)? And you’re suggesting I shouldn’t take that bet, because P(P2) >> P(P1)?
It seems to follow from that reasoning that I ought to take (300BCUs if P2, −100BCUs if P1). Would you suggest I take that bet?
Anyway, to answer your question: I wouldn’t take either bet if offered, because of game-theoretical considerations… that is, the moment you offer me the bet, that’s evidence that you expect to gain by the bet, which given my ignorance is enough to make me confident I’ll lose by accepting it. But if I eliminate those concerns, and I am confident in NOT(P3), then I’ll take either bet if offered. (Better yet, I’ll take both bets, and walk away with 200 BCUs.)
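For concreteness, a quick sketch of the arithmetic behind “I’ll take both bets”, assuming the stakes really are win 300 BCU / lose 100 BCU as described, and the contested flat 50% assessment:

```python
# Sketch of the expected-value reasoning above; stakes and probability are as assumed in the thread.
p = 0.5  # the contested P("my monitor is bamboozled")

ev_bet_on_bamboozled = p * 300 + (1 - p) * (-100)        # = 100.0
ev_bet_against_bamboozled = (1 - p) * 300 + p * (-100)   # = 100.0

# Taking both bets at once: one pays 300, the other costs 100, whichever way it goes.
payoff_if_bamboozled = 300 - 100       # = 200
payoff_if_not_bamboozled = -100 + 300  # = 200
```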
Let’s lift the veil: “bamboozledness” is a placeholder for … phlogiston (a la “contains more than 30ppm phlogiston” = “bamboozled”).
Looks like you now assign a probability of 0.5 to phlogiston, in your monitor, no less. (No fair? It could also have been something meaningful, but in the ‘blue balls’ scenario we’re asking for the prior of a concept which you’ve never even seen mentioned as such (and hopefully never experienced); what are the chances that a randomly picked concept is a sensible addition to your current world view?)
That’s the missing ingredient, the improbability of a hitherto unknown concept belonging to a sensible model of reality:
P(“Monitor contains phlogiston” | “phlogiston is the correct theory” ∧ “I have no clue about the theory other than it being correct and wouldn’t know the first thing about how to guess what contains phlogiston”) could be around 0.5 (although not necessarily exactly 0.5, based on complexity considerations).
However, what you’re faced with isn’t ”… given that colors exist”, ”… given that bamboozledness exists”, ”… given that phlogiston exists” (in each case, ‘that the model which contains concepts corresponding to the aforementioned corresponds to reality’), it is simply “what is the chance that there is phlogiston in your computer?” (Wait, now it’s in my computer too! Not only my monitor?)
Since you have no (or at most little: you ‘read about it on the internet’) reason to assume that phlogiston / blue is anything meaningful, and especially given that in the scenario you aren’t even asked about the color of a ball, but simply for a prior which relies upon the unknown concept of ‘blue’, a concept corresponding to some physical property which isn’t part of your current model, any option which contains “phlogiston is nonsense”/”blue is nonsense”, in the form of “monitor does not contain phlogiston”, “ball is not blue”, is vastly favored.
I posed the bet to show that you wouldn’t actually assign a 0.5 probability to a randomly picked concept being part of your standard model. Heads says this concept called “blue” exists, tails it doesn’t. Since you like memes. Maybe it helps not to think about the ball, but to think about what it would mean for the ball to be “blue”. Instead of “Is the ball blue?”, think “does blue extend my current model of reality in a meaningful way”, then replace blue with bamboozled.
But I guess I do see where you’re coming from, more so than I did before. The all-important question is, “does that new attribute you know nothing about have to correspond to any physically existing quantity; can you assume that it extends/replaces your current model of the world; and do you thus need to factor in the improbability of invalidating your current model into assigning the probabilities of the new attribute”. Would that be accurate?
Anyway, to answer your question: I wouldn’t take either bet if offered, because of game-theoretical considerations…
Enter Psi, Omega’s retarded, ahem, special little brother. He just goes around offering random bets, with no background knowledge whatsoever, so you’re free to disregard the “why is he offering a bet in the first place” reservations.
you have no (or at most little: you ‘read about it on the internet’) reason to assume that phlogiston / blue is anything meaningful,
Well, meaningfulness is the crux, yes.
As I said initially, when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence seems to grow pretty quickly that the word isn’t just gibberish… that there is some attribute to which the word refers, such that (P1 XOR P2) is true. When I listen to a conversation about bamboozled computers, I seem to generally accept the premise that bamboozled computers are possible pretty quickly, even if I haven’t the foggiest clue what a bamboozled computer (or monitor, or ball, or hot thing, or whatever) is. It would surprise me if this were uncommon.
And, sure, perhaps I ought to be more skeptical about the premise that people are talking about anything meaningful at all. (I’m not certain of this, but there’s certainly precedent for it.)
any option which contains “phlogiston is nonsense”/”blue is nonsense”, in the form of “monitor does not contain phlogiston”, “ball is not blue”
Here’s where you lose me. I don’t see how an option can contain “X is nonsense” in the form of “monitor does not contain X”. If X is nonsense, “monitor does not contain X” isn’t true. “monitor contains X” isn’t true either. That’s kind of what it means for X to be nonsense.
The all-important question is, “does that new attribute you know nothing about have to correspond to any physically existing quantity; can you assume that it extends/replaces your current model of the world; and do you thus need to factor in the improbability of invalidating your current model into assigning the probabilities of the new attribute”. Would that be accurate?
I’m not sure. The question that seems important here is “how confident am I, about that new attribute X, that a system either has X or lacks X but doesn’t do both or neither?” Which seems to map pretty closely to “how confident am I that ‘X’ is meaningful?” Which may be equivalent to your formulation, but if so I don’t follow the equivalence.
Enter Psi, Omega’s retarded, ahem, special little brother.
(nods) As I said in the first place, if I eliminate the game-theoretical concerns, and I am confident that “bamboozled” isn’t just meaningless gibberish, then I’ll take either bet if offered.
You’re just trying to find out whether X is binary, then—if it is binary—you’d assign even odds, in the absence of any other information.
However, it’s not enough for “blue” / “not blue” to be established as a binary attribute; we also need to weigh in the chances of the semantic content (the definition of ‘blue’, unknown to us at that time) corresponding to any physical attributes.
Binarity isn’t the same as “describes a concept which translates to reality”. When you say meaningful, you (I think) refer to the former, while I refer to the latter. With ‘nonsense’ I didn’t mean ‘non-binary’, but instead ‘if you had the actual definition of the color attribute, you’d find that it probably doesn’t correspond to any meaningful property of the world, and as such that not having the property is vastly more likely’, which would be “the ball isn’t blue (because nothing is blue; blue is e.g. about having blue-quarks, which don’t model reality)”.
Binarity isn’t the same as “describes a concept which translates to reality”.
I’ll accept that in general.
When you say meaningful, you (I think) refer to the former, while I refer to the latter.
In this context, I fail to understand what is entailed by that supposed difference.
Put another way: I fail to understand how “X”/”not X” can be a binary attribute of a physical system (a ball, a monitor, whatever) if X doesn’t correspond to a physical attribute, or a “concept which translates to reality”. Can you give me an example of such an X?
Put yet another way: if there’s no translation of X to reality, if there’s no physical attribute to which X corresponds, then it seems to me neither “X” nor “not X” can be true or meaningful. What in the world could they possibly mean? What evidence would compel confidence in one proposition or the other?
Looked at yet a different way...
case 1: I am confident phlogiston doesn’t exist.
I am confident of this because of evidence related to how friction works, how combustion works, because burning things can cause their mass to increase, for various other reasons. (P1) “My stove has phlogiston” is meaningful—for example, I know what it would be to test for its truth or falsehood—and based on other evidence I am confident it’s false. (P2) “My stove has no phlogiston” is meaningful, and based on other evidence I am confident it’s true.
If you remove all my evidence for the truth or falsehood of P1/P2, but somehow preserve my confidence in the meaningfulness of “phlogiston”, you seem to be saying that my P(P1) << P(P2).
case 2: I am confident photons exist. Similarly to P1/P2, I’m confident that P3 (“My lightbulb generates photons”) is true, and P4 (“My lightbulb generates no photons”) is false, and “photon” is meaningful. Remove my evidence for P3/P4 but preserve my confidence in the meaningfulness of “photon”, should my P(P3) << P(P4)? Or should my P(P3) >> P(P4)?
I don’t see any grounds for justifying either. Do you?
I don’t see any grounds for justifying either. Do you?
Yes. P1 also entails that phlogiston theory is an accurate descriptor of reality—after all, it is saying your stove has phlogiston. P2 does not entail that phlogiston theory is an accurate descriptor of reality. Rejecting that your stove contains phlogiston can be done on the basis of “chances are nothing contains phlogiston, not knowing anything about phlogiston theory, it’s probably not real, duh”, which is why P(P2)>>P(P1).
The same applies to case 2: knowing nothing about photons, you should always go with the proposition (in this case P4) which is also supported by “photons are an imaginary concept with no equivalent in reality”. For P3 to be correct, photons must have some physical equivalent on the territory level, so that anything (e.g. your lightbulb) can produce photons in the first place. For a randomly picked concept (one not picked out of a physics textbook), the chances of that are negligible.
Take some random concept, such as “there are 17 kinds of quark, and if something contains the 13th kind (the blue quark) we call it ‘blue’”. Then affirming that it is blue entails affirming the 17-kinds-of-quark theory (quite the burden, knowing nothing about its veracity), while saying “it is not blue, i.e. it does not contain the 13th quark, because the 17-kinds-of-quark theory does not describe our reality” is the much-favored default case.
A not-yet-considered, randomly chosen concept (phlogiston, photons) does not have 50-50 odds of accurately describing reality; its odds of doing so, given no evidence, are vanishingly small. That translates to
P(“stove contains phlogiston”) being much smaller than P(“stove does not contain phlogiston”). Reason (rephrasing the above argument): rejecting phlogiston theory as an accurate map of the territory strengthens your “stove does not contain phlogiston (… because phlogiston theory is probably not an accurate map, knowing nothing about it)”
even if
P(“stove contains phlogiston” | “phlogiston theory describes reality”) = P(“stove does not contain phlogiston” | “phlogiston theory describes reality”) = 0.5
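A worked version of that claim, with an arbitrary small prior standing in for “vanishingly small” (the exact number is not the point):

```python
# Sketch of the marginalization argued above; the prior on phlogiston theory is an
# arbitrary illustrative value.
p_theory = 0.001               # P("phlogiston theory describes reality"), knowing nothing about it
p_contains_given_theory = 0.5  # the conceded 50-50, conditional on the theory being correct

p_contains = p_theory * p_contains_given_theory
# Counting "phlogiston theory is not an accurate map" as a world in which the stove
# contains no phlogiston (the reading disputed in the reply below):
p_not_contains = p_theory * (1 - p_contains_given_theory) + (1 - p_theory)

print(p_contains)      # 0.0005
print(p_not_contains)  # 0.9995
```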
I agree that if “my stove does not contain X” is a meaningful and accurate thing to say even when X has no extension into the real world at all, then P(“my stove does not contain X”) >>> P(“my stove contains X”) for an arbitrarily selected concept X, since most arbitrarily selected concepts have no extension into the real world.
I am not nearly as convinced as you sound that “my stove does not contain X” is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I’m not sure there’s anything more to say about that than we’ve already said.
Also, thinking about it, I suspect I’m overly prone to assuming that X has some extension into the real world when I hear people talking about X.
I am not nearly as convinced as you sound that “my stove does not contain X” is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I’m not sure there’s anything more to say about that than we’ve already said.
I’m glad we found common ground.
Consider e.g. “There is no magical garden gnome living under my floor”, “I don’t emit telepathic brain waves” or “There is no Superman-like alien on our planet”, which to me are all meaningful and accurate, even though they all contain concepts which do not (as far as we know) extend into the real world. Can an atheist not meaningfully say “I don’t have a soul”?
If I adopted your point of view (i.e. that talking about magical garden gnomes living or not living under my floor makes no, or very little, sense either way, since they (probably) cannot exist), then my confidence for or against such a proposition would be equal but very low (no 50% in that case either). Except if, as you say, you’re assigning a very high degree of belief in “concept extends into the real world” as soon as you hear someone talk about it.
“This is a property which I know nothing about but of which I am certain that it can apply to reality” is the only scenario in which you could argue for a belief of 0.5. It is not the scenario of the original post.
The more I think about this, the clearer it becomes that I’m getting my labels confused with my referents and consequently taking it way too much for granted that anything real is being talked about at all.
“Given that some monitors are bamboozled (and no other knowledge), is my monitor bamboozled?” isn’t the same question as “Given that “bamboozled” is a set of phonemes (and no other knowledge), is “my monitor is bamboozled” true?” or even “Given that English speakers sometimes talk about monitors being bamboozled (ibid), is my monitor bamboozled?” and, as you say, neither the original blue-ball case nor the bamboozled-computer case is remotely like the first question.
So, yeah: you’re right, I’m wrong. Thanks for your patience.
I ought to assign high probability to P3 (since it’s true of most statements that it’s possible to construct) and consequently low probabilities to P1 and P2.
I don’t think the logic in this part follows. Some of it looks like a matter of precision: it’s not clear to me that P1, P2, and P3 are mutually exclusive. What about cases where ‘my monitor is bamboozled’ and ‘my monitor is not bamboozled’ are both true, like sets that are both closed and open? Later, it looks like you want P3 to be the reverse of how you have it written; there it looks like you want P3 to be the proposition that it is a well-formed statement with a binary truth value.
Blech; you’re right, I incompletely transitioned from an earlier formulation and didn’t shift signs all the way through. I think I fixed it now.
Your larger point about (P1 AND P2) being just as plausible a priori is certainly true, and you’re right that that makes “and consequently low probabilities to P1 and P2” not follow from a properly constructed version of P3.
I’m not sure that makes a difference, though perhaps it does. It still seems that P(P1) > P(P2) is no more likely, given complete ignorance of the referent for “bamboozle”, than P(P1) < P(P2)… and it still seems that knowing that otherwise sane people talk about whether monitors are bamboozled or not quickly makes me confident that P(P1 XOR P2) >> P((P1 AND P2) OR NOT(P1 OR P2))… though perhaps it ought not do so.