Since this comes from the Harris-Klein debate, I should point to this recent post and especially the comments underneath it. To summarize, the “high decoupler” Harris is making errors. This is, of course, what happens when you ignore the context of the real world.
Now, there are perhaps better examples of disagreement between “high decouplers” and “low decouplers”, and perhaps those are still meaningful categories. But I’d be wary of conclusions made with high decoupling.
I propose an alternative view, where “low decoupling” is the objectively correct way to look at the world, and “high decoupling” is something you do because you’re lazy and unwilling to deal with all the couplings of the real world.
“‘high decoupling’ is something you do because you’re lazy and unwilling to deal with all the couplings of the real world”—I suspect you don’t quite understand what high decoupling is. Have you read Local Validity as a Key to Sanity and Civilization? High decoupling conversations allow people to focus on checking the local validity of their arguments.
We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you’re talking about A and C, and I bring up B, but you ignore it because that’s “sloppy thinking”, then that’s your problem. There is nothing valid about it.
High decoupling is what Harris is doing in that debate. What he is doing is wrong. Therefore high decoupling is wrong (or at least unreliable).
I get the feeling that maybe you don’t quite understand what low decoupling is? You didn’t say anything explicitly negative about it, but I get the feeling that you don’t really consider it a reasonable perspective. E.g. what is the word “empathy” doing in your post? It might be pointing to some straw man.
Upvoted for going all-in on a low-decoupling norm—I can’t tell whether that was intentionally funny or you’re genuinely living life by low-decoupling norms.
(Either way I think you’re passing the ITT for low-decoupling, so thanks.)
If you think there is something funny about low decoupling, then you’re probably strawmanning it. Or maybe it was a straw man all along, and I’m erroneously using that term to refer to something real.
I can’t say that I do. But I try to. Because high decoupling leads to being wrong.
“We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you’re talking about A and C, and I bring up B, but you ignore it because that’s “sloppy thinking”, then that’s your problem. There is nothing valid about it.”—What kind of “implies” are you talking about? Surely not logical implications, but rather the connotations of words? If so, I think I know what I need to clarify.
I didn’t comment on what norms should be in wider society, just that low decoupling spaces are vital. I was going to write this in my previous comment, but I had to run out the door. John Nerst explains “empathy” much more in his post.
I’m talking about the kind of “X implies Y” where observing X leads us to believe that Y is also likely true. For example, take A=”wet sidewalk” and C=”rain”. Then A implies C. But if B=”sprinkler”, then A&B no longer imply C. You may read this, also by Eliezer and somewhat relevant.
Yes, I’ve read that, and he is also strawmanning. Lack of empathy is not the problem with what Harris is saying. Did you read the comments I linked to? Or should I have quoted them here?
But this is false. That was Eliezer’s whole point.
When B is not known or is known to be false, A implies C, and, when it is known to be true, A&B do not imply C. Surely we have no actual disagreement here, and I only somehow managed to be unclear that, before I introduced B, it wasn’t known?
No. This is wrong. This is what I am saying: when B is not known, A does not imply C. A can only imply C if B is known to be false.
Edit: In other words, A → (B ∨ C).
Edit 2: Spelling it out in more detail:
A ⇒ (B ∨ C)
((A ⇒ (B ∨ C)) ∧ A) ⇏ C
((A ⇒ (B ∨ C)) ∧ (A ∧ ¬B)) ⇒ C
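The two propositional claims above can be checked mechanically by enumerating all truth assignments. This brute-force sketch (not from the thread, just an illustration) confirms that the only countermodel to the first entailment is exactly the “sprinkler” case A ∧ B ∧ ¬C:

```python
from itertools import product

def implies(p, q):
    # material implication: p => q
    return (not p) or q

# Claim: ((A => (B or C)) and A) does NOT entail C -- collect countermodels.
countermodels = []
# Claim: ((A => (B or C)) and (A and not B)) DOES entail C -- look for violations.
entailment_holds = True

for A, B, C in product([False, True], repeat=3):
    if implies(A, B or C) and A and not C:
        countermodels.append((A, B, C))
    if implies(A, B or C) and (A and not B) and not C:
        entailment_holds = False

print(countermodels)     # [(True, True, False)]: wet sidewalk, sprinkler on, no rain
print(entailment_holds)  # True
```

With A = wet sidewalk, B = sprinkler, C = rain, the single countermodel is precisely the sprinkler scenario.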
Zulupineapple I feel like Said is trying to give a first lesson in propositional logic, a setting where all his statements are true. Were you trying to use the colloquial/conversational meaning of the word ‘implies’?
Yes, I explicitly said so earlier. And propositional logic makes no sense in this context. So I don’t understand where the confusion is coming from. But if you have advice on how I could have prevented that, I’d appreciate it. Is there a better word for “implies” maybe?
Maybe you’re talking about the usual logic? I explained in the very comment you first responded to that by “X implies Y” I mean that “observing X leads us to believe that Y”. This is a common usage, I assume, and I can’t think of a better word.
And, if you see a wet sidewalk and know nothing about any sprinklers, then “rain” is the correct inference to make (depending on your priors). Surely we actually agree on that?
Yes, I saw your definition. The standard sort of generalization of propositional logic to probabilistic beliefs does not rescue your claims.
No. If you’re leaving propositional logic behind and moving into the realm of probabilistic beliefs, then the correct inference to make is to use the information you’ve got to update from your priors to a posterior probability distribution over the possible states of the world. This is all standard stuff and I’m sure you know it as well as I do.
The outcome of this update may well be “P(rain) = $large_number; P(other things, such as sprinklers, etc.) = $smaller number”. You would, of course, then behave as if you believed it rained (more or less). (I am glossing over details, such as the overlap in P(sprinkler, etc.) and P(rain), as well as the possibility of “hybrid” behaviors that make sense if you are uncertain between two similarly likely possibilities, etc.; these details do not change the calculus.)
Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as a coherent view).
Now, you might also be claiming something like “seeing a wet sidewalk does increase P(rain), but does not increase P(sprinkler)”. The characterization quoted in the above paragraph would be consistent with this claim. However, this claim is obviously wrong, so I assumed this wasn’t what you meant.
So when I said “rain is the correct inference to make”, you somehow read that as “P(rain) = 1”? Because I see no other explanation why you felt the need to write entire paragraphs about what probabilities and priors are. I even explicitly mentioned priors in my comment, just to prevent a reply just like yours, but apparently that wasn’t enough.
Ok. How do you think I should have explained the situation? Preferably, in less than four paragraphs?
I personally find my explanation completely clear, especially since I expected most people to be familiar with the sidewalk/rain/sprinkler example, or something similar. But then I’m aware that my judgements about clarity don’t always match other people’s, so I’ll try to take your advice seriously.
Assuming that “the situation” in question is this, from upthread—“We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C.”
I would state the nearest-true-claim thus:
“Sometimes P(C|A) is very low but P(C|A,B) is much higher, enough to make it the dominant conclusion.”
Edit: Er, I got that backwards, obviously. Corrected version:
“Sometimes P(C|A) is very high, enough to make it the dominant conclusion, but P(C|A,B) is much lower [this is due to the low prior probability of B but the high conditional probability P(C|B)]”.
Ok, that’s reasonable. At least I understand why you would find such an explanation better.
One issue is that I worry about using the conditional probability notation. I suspect that sometimes people are unwilling to parse it. Also the “very low” and “much higher” are awkward to say. I’d much prefer something in colloquial terms.
Another issue: I worry that this is not less confusing. This is evidenced by you confusing yourself about it, twice (no, P(C|B), i.e. P(rain|sprinkler), is not high, and it doesn’t even have to be that low). I think, ultimately, listing which probabilities are “high” and which are “low” is not helpful; there should be a more general way to express the idea.
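The pattern both commenters are circling here is “explaining away” in a two-cause model. A minimal numeric sketch (all numbers below are made-up assumptions, not from the thread) shows P(rain | wet) coming out high while P(rain | wet, sprinkler) falls back to the prior:

```python
# Two independent causes of a wet sidewalk (all numbers are illustrative).
p_rain = 0.3
p_sprinkler = 0.05  # low prior for the alternative cause

def p_wet(rain, sprinkler):
    # the sidewalk is almost surely wet given either cause, rarely otherwise
    return 0.99 if (rain or sprinkler) else 0.01

def posterior_rain(observe_sprinkler=None):
    # P(rain | wet sidewalk [, sprinkler state]) by enumerating the joint
    num = den = 0.0
    for rain in (True, False):
        for spr in (True, False):
            if observe_sprinkler is not None and spr != observe_sprinkler:
                continue
            joint = ((p_rain if rain else 1 - p_rain)
                     * (p_sprinkler if spr else 1 - p_sprinkler)
                     * p_wet(rain, spr))
            den += joint
            if rain:
                num += joint
    return num / den

print(round(posterior_rain(), 3))                        # 0.878: wet alone makes rain likely
print(round(posterior_rain(observe_sprinkler=True), 3))  # 0.3: sprinkler explains it away
```

Note that the posterior given the sprinkler drops back to the 0.3 prior here only because, with these made-up numbers, the sprinkler makes the sidewalk wet regardless of rain; the general point is just that conditioning on B can sharply lower the probability of C.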
Zulupineapple, do you realise you have been engaged in a highly decoupled argument? Your point (made way back) that low decoupling values contextual conclusions is valid, but decoupling is needed to enhance those conclusions, and, as it is harder by an order of magnitude, it requires more practice and knowledge.
Personally I feel the terms abstract and concrete are more useful: alternating between the two, and refining the abstract ideas before applying them to concrete examples.