But this is false. That was Eliezer’s whole point.
When B is not known or is known to be false, A implies C; and when B is known to be true, A & B does not imply C. Surely we have no actual disagreement here, and I only somehow failed to make clear that before I introduced B, it wasn’t known?
No. This is wrong. This is what I am saying: when B is not known, A does not imply C. A can only imply C if B is known to be false.
Edit: In other words, A → (B ∨ C).
Edit 2: Spelling it out in more detail:
A ⇒ (B ∨ C)
((A ⇒ (B ∨ C)) ∧ A) ⇏ C
((A ⇒ (B ∨ C)) ∧ (A ∧ ¬B)) ⇒ C
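These three claims can be checked mechanically by enumerating all truth assignments, e.g. in Python (a quick sketch, not from the original thread):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# All eight truth assignments for (A, B, C).
rows = list(product([False, True], repeat=3))

# With only the premise "A -> (B or C)" plus A, C does not always follow:
print(all(c for a, b, c in rows if implies(a, b or c) and a))   # False

# But adding "not B" makes C follow in every remaining row:
print(all(c for a, b, c in rows if implies(a, b or c) and a and not b))   # True
```

So under the stated premise, A alone does not entail C; A together with ¬B does.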
Zulupineapple I feel like Said is trying to give a first lesson in propositional logic, a setting where all his statements are true. Were you trying to use the colloquial/conversational meaning of the word ‘implies’?
Yes, I explicitly said so earlier. And propositional logic makes no sense in this context. So I don’t understand where the confusion is coming from. But if you have advice on how I could have prevented that, I’d appreciate it. Is there a better word for “implies” maybe?
Maybe you’re talking about formal logic? I explained in the very comment you first responded to that by “X implies Y” I mean “observing X leads us to believe Y”. This is a common usage, I assume, and I can’t think of a better word.
And, if you see a wet sidewalk and know nothing about any sprinklers, then “rain” is the correct inference to make (depending on your priors). Surely we actually agree on that?
Yes, I saw your definition. The standard sort of generalization of propositional logic to probabilistic beliefs does not rescue your claims.
No. If you’re leaving propositional logic behind and moving into the realm of probabilistic beliefs, then the correct inference to make is to use the information you’ve got to update from your priors to a posterior probability distribution over the possible states of the world. This is all standard stuff and I’m sure you know it as well as I do.
The outcome of this update may well be “P(rain) = $large_number; P(other things, such as sprinklers, etc.) = $smaller_number”. You would, of course, then behave as if you believed it rained (more or less). (I am glossing over details, such as the overlap in P(sprinkler, etc.) and P(rain), as well as the possibility of “hybrid” behaviors that make sense if you are uncertain between two similarly likely possibilities, etc.; these details do not change the calculus.)
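The update described here can be sketched numerically. All numbers below (the priors, and a deterministic “wet iff rain or sprinkler” model with independent causes) are made-up illustrative assumptions, not anything from the thread:

```python
# Illustrative model: rain and sprinkler are independent causes, and the
# sidewalk is wet iff at least one of them occurred. All numbers are
# made-up assumptions for the sketch.
p_rain = 0.3       # prior P(rain)
p_sprinkler = 0.1  # prior P(sprinkler)

def joint(rain, sprinkler):
    """P(rain, sprinkler) under independence."""
    pr = p_rain if rain else 1 - p_rain
    ps = p_sprinkler if sprinkler else 1 - p_sprinkler
    return pr * ps

# Condition on "wet", i.e. on the event (rain or sprinkler).
p_wet = sum(joint(r, s) for r in (0, 1) for s in (0, 1) if r or s)
p_rain_given_wet = sum(joint(1, s) for s in (0, 1)) / p_wet
p_sprinkler_given_wet = sum(joint(r, 1) for r in (0, 1)) / p_wet

print(p_rain_given_wet)       # ~0.81, up from the 0.3 prior
print(p_sprinkler_given_wet)  # ~0.27, also up from the 0.1 prior
```

Note that both posteriors rise above their priors; rain merely ends up as the dominant hypothesis.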
Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as a coherent view).
Now, you might also be claiming something like “seeing a wet sidewalk does increase P(rain), but does not increase P(sprinkler)”. The characterization quoted in the above paragraph would be consistent with this claim. However, this claim is obviously wrong, so I assumed this wasn’t what you meant.
So when I said “rain is the correct inference to make”, you somehow read that as “P(rain) = 1”? Because I see no other explanation why you felt the need to write entire paragraphs about what probabilities and priors are. I even explicitly mentioned priors in my comment, just to prevent a reply just like yours, but apparently that wasn’t enough.
Ok. How do you think I should have explained the situation? Preferably, in less than four paragraphs?
I personally find my explanation completely clear, especially since I expected most people to be familiar with the sidewalk/rain/sprinkler example, or something similar. But then I’m aware that my judgements about clarity don’t always match other people’s, so I’ll try to take your advice seriously.
Assuming that “the situation” in question is this, from upthread—
We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C.
I would state the nearest-true-claim thus:
“Sometimes P(C|A) is very low but P(C|A,B) is much higher, enough to make it the dominant conclusion.”
Edit: Er, I got that backwards, obviously. Corrected version:
“Sometimes P(C|A) is very high, enough to make it the dominant conclusion, but P(C|A,B) is much lower [this is due to the low prior probability of B but the high conditional probability P(C|B)]”.
Ok, that’s reasonable. At least I understand why you would find such an explanation better.
One issue is that I worry about using conditional-probability notation; I suspect that some people are unwilling to parse it. Also, “very low” and “much higher” are awkward to say. I’d much prefer something in colloquial terms.
Another issue: I worry that this is not less confusing. This is evidenced by you confusing yourself about it, twice (no, P(C|B), i.e. P(rain|sprinkler), is not high, and it doesn’t even have to be that low). I think, ultimately, listing which probabilities are “high” and which are “low” is not helpful; there should be a more general way to express the idea.
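One way to state the pattern without arguing over which probabilities are “high” or “low” is to compute the two conditionals directly. A minimal sketch with made-up numbers (A = wet sidewalk, B = sprinkler ran, C = rain; independence of the causes and “wet iff rain or sprinkler” are assumptions of the example, not claims from the thread):

```python
# Made-up illustrative model: rain and sprinkler are independent, and the
# sidewalk is wet iff it rained or the sprinkler ran.
p_rain, p_spr = 0.3, 0.1

def joint(r, s):
    return (p_rain if r else 1 - p_rain) * (p_spr if s else 1 - p_spr)

# P(C | A): P(rain | wet).
p_wet = sum(joint(r, s) for r in (0, 1) for s in (0, 1) if r or s)
p_c_given_a = sum(joint(1, s) for s in (0, 1)) / p_wet

# P(C | A, B): P(rain | wet, sprinkler). Once the sprinkler is known to
# have run, the wet sidewalk is fully explained away, and this collapses
# back to the prior P(rain).
p_c_given_ab = joint(1, 1) / sum(joint(r, 1) for r in (0, 1))

print(p_c_given_a > p_c_given_ab)  # True: learning B lowers P(C | A)
```

Here P(C|A) ≈ 0.81 while P(C|A,B) = 0.3, exhibiting exactly the “A supports C, but A together with B does not” pattern in probabilistic terms.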
Zulupineapple, do you realise you have been engaged in a highly decoupled argument? Your point (made way back) about valuing contextual conclusions is valid, but decoupling is needed to enhance those conclusions, and since it is harder by an order of magnitude, it requires more practice and knowledge.
Personally, I feel the terms “abstract” and “concrete” are more useful: alternating between the two, refining the abstract ideas before applying them to concrete examples.