Correct me if I’m wrong. You are searching for a sentence B such that:
1) if B then A
2) if not B, then not A. Which implies if A then B.
Which implies that you are searching for an equivalent argument. How can an equivalent argument have explanatory power?
“Aluminium is better than steel!” cries Alice.
“Steel is better than aluminium!” counters Bob. Both of them continue to stubbornly hold these opinions, even in the face of vehement denials from the other.
It is not at once clear how to resolve this issue. However, both Alice and Bob have recently read the above article, and attempt to apply it to their disagreement.
“Aluminium is better than steel because aluminium does not rust,” says Alice. “The statement ‘aluminium does not rust, but steel does’ is an equivalent argument to ‘aluminium is better than steel’”.
“Steel is better than aluminium because steel is stronger than aluminium,” counters Bob. “Steel can hold more weight than aluminium without bending, which makes it a superior metal.”
“So the crux of our argument,” concludes Alice, “is really that we are disagreeing on what it is that makes a metal better; I am placing more importance on rustproofing, while you are showing a preference for strength?”
For this example you don’t need any double cruxes. Alice and Bob should have just defined their terms, specifically the word “better” to which they attach different meanings.
True, but they then could easily have gone on to do a meaningful double crux about why their chosen quality is the most important one to attend to.
That is true. In a disagreement where the root of the disagreement is applying different meanings to the word ‘better’, properly defining that term would identify the true disagreement straight away. The double crux method, by seeking equivalent statements for each position, brings that disagreement in terminology to light almost immediately (where a word-by-word process of definitions might well get mired down in the definition of ‘steel’ and whether or not it includes small amounts of chromium—which might be interesting and informative on its own, but does nothing to resolve the disagreement).
This appears to suggest that double crux, applied properly, will work in every case where the true disagreement is a matter of inconsistent definitions of terms (as above). I’d go further, and say that the double crux method will also work in cases where the disagreement is due to one of the debaters having made an error in a mathematical equation that he believes supports his argument. So, when you don’t know the root cause of the argument, double crux is probably at least as fast a route to finding that cause as a careful definition of all terms, and probably faster.
Let me rephrase: does the double crux method contain any improvement that is not already covered by tabooing terms? Or by simply asking “why do you think this is the case?”
“Steel is better than aluminium because aluminium is worse than steel” is also an equivalent statement; it adheres to the letter of the prescription, but does not move the discussion forward.
What I’m trying to show is that the logical prescriptions, given as an explanation of what a crux is, do not really capture anything substantial.
In this particular argument, no. (In fact, if both participants are willing to examine their own chain of reasoning and consider that they might be wrong, then asking “why do you think this is the case?” sounds like a perfect first step in the double crux method to me)
In cases where the disagreement is due to (say) Bob making a mathematical error, tabooing terms is unlikely to reveal the error, while double crux seems likely to do so. So, as a general disagreement-solving technique, it seems powerful as it can be applied to a wide variety of causes of disagreement, even without knowing what the cause of the disagreement actually is.
If I’m understanding correctly, I think you’ve made a mistake in your formal logic above—you equated “If B, then A” with “If A, then B” which is not at all the same.
The search for a double crux encourages each side to adopt the causal model of the other (or, in other words, to search through the other’s causal models until they find one they can agree is true). I believe “If B, then A,” which is meaningfully different from your belief “If ¬B then ¬A.” If each of us comes around to saying, “Yeah, I buy your similar-but-different causal model, too,” then we’ve converged in an often-significant way, and have almost always CLARIFIED the underlying belief structure.
No, he only inferred “If A, then B” from “If not B, then not A” which is a valid inference.
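That inference is contraposition, and it can be checked mechanically. A minimal Python sketch (an illustration, not anything from the original post), enumerating every truth assignment:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Contraposition: "if not B, then not A" is equivalent to "if A, then B"
# under every one of the four truth assignments.
for a, b in product([True, False], repeat=2):
    assert implies(not b, not a) == implies(a, b)
```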
… but then he went on to say “How can an equivalent argument have explanatory power?” which seemed, to me, to assume that “if B then A” and “if A then B” are equivalent (which they are not).
I read that statement as implying that argument A is equivalent to argument B. (Not (1) and (2), which are statements about arguments A and B)
And, if A implies B and B implies A, then it seems to me that A and B have to be equivalent to each other.
(This is Dan from CFAR)
This is easier to think about in the context of specific examples, rather than as abstract logical propositions. You can generally tell when statement B is progress towards making the disagreement about A more concrete / more tractable / closer to the underlying source of disagreement.
I typically think of the arrows as causal implication between beliefs. For example, my belief that school uniforms reduce bullying causes me to believe that students should wear uniforms. With logical implication the contrapositive is equivalent to the original statement (as you say). With causal implication, trying to do the contrapositive would give us something like “If I believed that students should not wear uniforms, that would cause me to believe that uniforms don’t reduce bullying” which is not the sort of move that I want to make in my reasoning.
Another way to look at this, while sticking to logical implication, is that we don’t actually have B-->A. Instead we have (B & Q & R & S & T … & Z) --> A. For example, I believe that students should wear uniforms because uniforms reduce bullying, and uniforms are not too expensive, and uniforms do not reduce learning, and uniforms do not cause Ebola, etc. If you take the contrapositive, you get ~A --> (~B or ~Q or ~R or ~S or ~T … or ~Z). Or, in English, I have many cruxes for my belief that students should wear uniforms, and changing my mind about that belief could involve changing my mind about any one of those cruxes.
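The multi-crux contrapositive above can also be verified by brute force. A small Python sketch (B, Q, R are stand-ins for the three cruxes; this is an illustration, not part of the original post):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Check that (B & Q & R) --> A is equivalent to ~A --> (~B | ~Q | ~R)
# on every one of the 16 truth assignments.
for a, b, q, r in product([True, False], repeat=4):
    conjunction_form = implies(b and q and r, a)
    contrapositive = implies(not a, (not b) or (not q) or (not r))
    assert conjunction_form == contrapositive
```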
The requirement that B is crucial for A is equivalent to “If A then B”, not “if B then A”.
For example:
A = self-driving cars would be safer
B = the chances that a bug would cause all self-driving cars to crash spectacularly on the same day are small.
If A is true, then B must be true, because if B is false, A is clearly false. But B does not imply A: even if B is true (there could be zero chance of such a bug), A could still be false (self-driving cars might be bad at avoiding crashes due to, say, object recognition being too slow to react in time). So B being crucial for A means that A implies B.
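One way to sketch this in Python is to model “B is necessary for A” by excluding the world where A holds but B does not, and then observe that B --> A still fails (variable names are illustrative):

```python
def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Possible worlds as (A, B) pairs. The world (A=True, B=False) is excluded
# because B is necessary for A: "cars are safer" requires "no catastrophic bug".
# The world (A=False, B=True) remains: no such bug, yet cars unsafe anyway
# (e.g. object recognition too slow).
worlds = [(True, True), (False, True), (False, False)]

assert all(implies(a, b) for a, b in worlds)      # A --> B holds in every world
assert not all(implies(b, a) for a, b in worlds)  # B --> A fails at (A=False, B=True)
```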
I hear what you’re saying, which is what I hinted at with point 2, but “if B then A” is explicitly written in the post: last paragraph in the “How to play” section.
It seems to me you’re arguing against the original poster about what “being crucial” means logically, and although I do not agree on the conclusion you reach, I do agree that the formulation is wrong.
I’m quite confident that my formulation isn’t wrong, and that we’re talking past each other (specifically, that you’re missing something important that I’m apparently not saying well).
What was explicitly written in the post was “If B then A. Furthermore, if not B then not A.” Those are two different statements, and you need both of them. The former is an expression of the belief structure of the person on the left. The latter is an expression of the belief structure of the person on the right. They are NOT logically equivalent. They are BOTH required for a “double crux,” because the whole point is for the two people to converge—to zero in on the places where they are not in disagreement, or where one can persuade the other of a causal model.
It’s a crux that cuts both ways—B’s true state implies A’s trueness, but B’s false state is not irrelevant in the usual way. Speaking strictly logically, if all we know is that B implies A, not-B doesn’t have any impact at all on A. But when we’re searching for a double crux, we’re searching for something where not-B does have causal impact on A—something where not-B implies not-A. That’s a meaningfully different and valuable situation, and finding it (particularly, assuming that it exists and can be found, and then going looking for it) is the key ingredient in this particular method of transforming argument into collaborative truth-seeking.
Ah, I think I’ve understood where the problem lies.
See, we both agree that B --> A and ~B --> ~A. This second statement, as we know from logic, is equivalent to A --> B. So we both agree that B --> A and A --> B. Which yields that A is equivalent to B, or in symbols A <--> B.
This is what I was referring to: the crux being equivalent to the original statement, not that B --> A is logically equivalent to ~B --> ~A.
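The step from “B --> A together with ~B --> ~A” to “A <--> B” can likewise be checked by enumeration. A minimal Python sketch (illustrative only):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Granting both B --> A and ~B --> ~A forces A and B to agree
# in every truth assignment, i.e. A <--> B.
for a, b in product([True, False], repeat=2):
    both_directions = implies(b, a) and implies(not b, not a)
    assert both_directions == (a == b)
```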
I’m probably rustier on my formal logic than you. But I think what’s going on here is fuzz around the boundaries where reality gets boiled down and translated to symbolic logic.
“Uniforms are good because they’ll reduce bullying.” (A because B, B --> A) “Uniforms are bad, because along with all their costs they fail to reduce bullying.” (~A because ~B, ~B --> ~A)
Whether this is a minor abuse of language or a minor abuse of logic, I think it’s a mistake to go from that to “Uniforms are equivalent to bullying reduction” or “Bullying reductions result in uniforms.” I thought that was what you were claiming, and it seems nonsensical to me. I note that I’m confused, and therefore that this is probably not what you were implying, and I’ve made some silly mistake, but that leaves me little closer to understanding what you were saying.
A: “Uniforms are good”
B: “Uniforms reduce bullying”
B->A: “If uniforms reduce bullying, then uniforms are good.”
~B->~A: “If uniforms do not reduce bullying, then uniforms are not good.”
“A is equivalent to B”: “The statement ‘uniforms are good’ is exactly as true as the statement ‘uniforms reduce bullying’.”
A->B: “If uniforms are good, then it is possible to deduce that uniforms reduce bullying.”
...does that help?
Yep. Thanks. =)
I was misunderstanding “equivalency” as “identical in all respects to,” rather than seeing equivalency as “exactly as true as.”
There’s confusion here between logical implication and reason for belief.
Duncan, I believe, was expressing belief causality—not logical implication—when he wrote “If B, then A.” This was confusing because “if, then” is the traditional language for logical implication.
With logical implication, it might make sense to translate “A because B” as “B implies A”. However, with belief causality, “I believe A because I believe B” is very different from “B implies A”.
For example:
A: Uniforms are good.
B: Uniforms reduce bullying.
C: Uniforms cause death.
Let’s assume that you believe A because you believe B, and also that you would absolutely not believe A if it turned out that C were true. (That is, ~C is another crux of your belief in A.)
Now look what happens if B and C are both true. (Uniforms reduce bullying to zero because anyone who wears a uniform dies and therefore cannot bully or be bullied.)
C is true, therefore A is false even though B is true. So B can’t imply A.
B is only one reason for your belief in A, but other factors could override B and make A false for you in spite of B being true. That’s why you can have multiple independent cruxes. If any one of your cruxes for A turns out to be false, then you would have to conclude that A is false. But any one crux being true doesn’t by itself imply that A is true, because some other crux could be false, which would make A false.
So with belief causality, “A because B” does not mean that B implies A. What it actually means is that ~B implies ~A, or equivalently, that A implies B—which in that form sounds counter-intuitive even though it’s right.
So for B to be a crux of A means only (in formal logical implication) that A → B, and definitely not that B → A. In fact, for a crux to be interesting/useful, you don’t want a logical implication of B → A, because then you’ve effectively made no progress toward identifying the source of disagreement. To make progress, you want each crux to be “saying less than” A.
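The override example above can be written out directly. A sketch in Python, where `believes_a` is a made-up helper encoding “A rests on crux B and crux ~C”:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

def believes_a(b, c):
    """A ('uniforms are good') held iff B ('reduce bullying') and not C ('cause death')."""
    return b and not c

# B true and C true: the bullying crux holds, yet A is false,
# so B alone cannot imply A.
assert believes_a(b=True, c=True) is False

# But A --> B holds in every case: whenever A is believed, B must be true.
assert all(implies(believes_a(b, c), b) for b, c in product([True, False], repeat=2))
```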
In a pure-logic kind of way, finding B where B is exactly equivalent to A means nothing, yes. However, in a human-communication kind of way, it’s often useful to stop and rephrase your argument in different words. (You’ll recognise when this is helpful if your debate partner says something along the lines of “Wait, is that what you meant? I had it all wrong!”)
This has nothing to do with formal logic; it’s merely a means of reducing the probability that your axioms have been misunderstood (which is a distressingly common problem).
I guess we can now get rid of http://lesswrong.com/lw/wj/is_that_your_true_rejection/.