Hello Dan,

I’m not sure whether these remarks are addressed ‘as a reply’ to me in particular. That you use the ‘marginal tax rate in the UK’ example, as I did, suggests this might be meant as a response. On the other hand, I struggle to locate the particular loci of disagreement. Or rather, I see in your remarks an explanation of double crux which includes various elements I believe I both understand and object to, but not reasons that suggest this belief of mine is mistaken (e.g. “you think double crux involves X, but actually it is X*, and thus your objection vanishes when this misunderstanding is resolved”, or “your objection to X is mistaken because Y”). If this is a reply, I apologise for not getting it; if it is not, I apologise for my mistake.
In any case, I’ll take the opportunity to try to concretely identify one aspect of my disagreement:
A typical belief has many cruxes. For example, if Ron is in favor of a proposal to increase the top marginal tax rate in the UK by 5 percentage points, his cruxes might include “There is too much inequality in the UK”, “Increasing the top marginal rate by a few percentage points would not have much negative effect on the economy”, and “Spending by the UK government, at the margin, produces value”. If he thought that more inequality would be good for society then he would no longer favor increasing the top marginal rate. If he thought that increasing the top marginal rate would be disastrous for the UK economy then he would no longer favor increasing it (even if he didn’t change his mind about there being too much inequality). If he thought that marginal government spending was worthless or harmful then he would no longer favor increasing taxes.
This seems to imply agreement with my take that cruxes (per how CFAR sees them) have the ‘if you change your mind about this, you should change your mind about that’ character, and so this example has the sequence-thinking-esque feature that these cruxes are jointly necessary for Ron’s belief (i.e. if Ron comes to think ¬A, ¬B, or ¬C, he should change his mind about the marginal tax rate). Yet by my lights it seems more typical that considerations like these exert weight upon the balance of reason, but not with such strength that their negation provides a decisive consideration against increasing taxes (e.g. it doesn’t seem crazy for Ron to think “Well, I don’t think inequality is a big deal, but other reasons nonetheless favour raising taxes”, or “Even though I think marginal spending by the UK government is harmful, this negative externality could be outweighed by other considerations”).
I think some harder data can provide better information than litigating hypothetical cases. If the claim that a typical belief has many cruxes is right, one should see that if one asks elite cognisers to state their credence for a belief, and then to state their credences for the most crucial few considerations regarding it, the credence for the belief should only very rarely be higher than the lowest credence among those considerations. This is because if most beliefs have many (jointly necessary) cruxes, which should usually comprise at least the top few considerations, then this conjunction is necessary (but not sufficient) for believing B, and so P(any one crux) >= P(conjunction of cruxes) >= P(B). In essence, one’s credence in a belief should be no greater than one’s credence in one’s weakest crux. (I’d guess the credence in the conclusion of a sequence-thinking argument should generally approximate the lower value P(crux1) × P(crux2) × …, as the cruxes are usually fairly independent.)
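(To make the test concrete, here is a minimal sketch of the check I have in mind, in Python with invented credences; the function and the numbers are purely illustrative, not data:)

```python
# Toy check of the 'many jointly necessary cruxes' prediction.
# If belief B requires the conjunction of its cruxes, then
#   P(B) <= P(conjunction of cruxes) <= min over i of P(crux_i),
# and if the cruxes are roughly independent,
#   P(conjunction) ~= product of the P(crux_i).

def conjunctive_bounds(crux_credences):
    """Return (upper bound on P(B), approximate P(B) under independence)."""
    upper = min(crux_credences)
    product = 1.0
    for p in crux_credences:
        product *= p
    return upper, product

# Hypothetical elicited credences for Ron's three cruxes
# (inequality too high, little economic harm, marginal spending valuable):
upper, approx = conjunctive_bounds([0.8, 0.7, 0.9])
print(upper, approx)  # 0.7 and ~0.50

# On the sequence-thinking picture, Ron's credence that raising the rate
# is good should not exceed 0.7, and plausibly sits near 0.5.
```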
In contrast, if I am closer to the mark, one should fairly commonly see the credence for the belief be higher than the lowest credence among the set of important considerations. If each consideration offers a Bayesian update favouring B, a set of important considerations that support B may act together (along with other, less important considerations) to increase its credence such that one is more confident of B than of some (or all) of the important considerations that support it.
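(For contrast, a minimal sketch of the weighing picture I have in mind, again with invented numbers; the discounting rule, treating a consideration as neutral if it turns out false, is a toy assumption of mine rather than a claim about how anyone actually aggregates:)

```python
# Toy model of considerations as (uncertain) weights rather than necessary conditions.
# Each consideration, if true, multiplies the odds on B by some likelihood ratio;
# if false, it is treated as neutral (ratio 1). Discounting by the probability the
# consideration holds is a rough heuristic, not a full Bayesian treatment.

def discounted_ratio(p_consideration, ratio_if_true):
    return p_consideration * ratio_if_true + (1.0 - p_consideration) * 1.0

def posterior(prior, considerations):
    odds = prior / (1.0 - prior)
    for p_c, ratio in considerations:
        odds *= discounted_ratio(p_c, ratio)
    return odds / (1.0 + odds)

# Three supporting considerations, each only 70% likely to be true,
# each worth a 3:1 update on B if it is true:
p_b = posterior(prior=0.5, considerations=[(0.7, 3.0)] * 3)
print(round(p_b, 2))  # ~0.93: more confident of B than of any single consideration
```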
I aver relevant elite cognisers (e.g. superforecasters, the philosophers I point to) will exhibit the property I suggest. I would also venture that when reasonable cognisers attempt to double crux, their credences will also behave in the way I predict.
I agree that it would be good to look at some real examples of beliefs rather than continuing with hypothetical examples and abstract arguments.
Your suggestion for what hard data to get isn’t something that we can do right now (and I’m also not sure whether I disagree with your prediction). We do have some real examples of beliefs and (first attempts at stating) cruxes near at hand, in this comment from Duncan and in this post from gjm (under the heading “So, what would change my mind?”) and Raemon (under the heading “So: My Actual Cruxes”). And I’d recommend that anyone who cares about cruxes or double crux enough to be reading this three-layers-deep comment, and who has never had a double crux conversation, pick a belief of their own, set a 5-minute timer, and spend that time looking for cruxes. (I recommend picking a belief that is near the level of actions, not something on the level of a philosophical doctrine.)
In response to your question about whether my comments were aimed at you:
They were partly aimed at you, partly aimed at other LWers (taking you as one data point of how LWers are thinking about cruxes). My impression is that your model of cruxes and double crux is different from the thing that folks around CFAR actually do, and I was trying to close that gap for you and for other folks who don’t have direct experience with double crux at CFAR.
For my first comment: the OP had several phrases like “traced to a single underlying consideration” which I would not use when talking about cruxes. Melissa’s current belief that she should start a gizmo company isn’t based on a single consideration; it’s a result of the fact that several different factors line up in a way that makes that specific plan look like an especially good idea. So of course she has several different cruxes. Similarly with views on marginal tax rates.
For my second comment: ‘Primarily look for things that would change your own views, not for things that would change the other person’s views’ is one of the core advantages of focusing on cruxes, in my opinion, and it didn’t seem to be a focus of the OP. It’s something that’s missing from your suggested substitute (“Look for key considerations”) and from your discussion of the example of how expert philosophers handle disagreements. e.g., if Theist is the one pressing the moral argument for the existence of God, because Theist guesses that it might shift Atheist’s views, then that is not a conversation based on cruxes. Whereas if Atheist is choosing to focus the discussion on that argument because Atheist thinks it might shift their own views, then it sounds very similar to a conversation based on cruxes.
On the question of whether cruxes are all-or-nothing or a matter of degree: I think of “crux” as a term similar to “belief”. It suggests sharp category boundaries when in fact things are a matter of degree, but it’s often a good enough approximation, and it’s easier for a person to think about, learn, and use the rest of the framework if they can fall back on the categorical concept. Replacing “look for cruxes” with “look for considerations to which your beliefs have relatively high credence sensitivity” also seems like a decent approximation. Doing a Value of Information calculation also seems like a decent approximation, at least for the subset of considerations that are within the model. I could say more to try to elaborate on all of this, but it feels like it really needs some concrete examples to point at. If a discussion like this were happening at a workshop, I’d elaborate by looking at the person’s attempts to come up with cruxes and giving them feedback.
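(As a rough illustration of the “credence sensitivity” reading, here is a toy formalisation I’m making up for this comment, not an official definition, with invented numbers:)

```python
# Toy 'credence sensitivity': how far your credence in B would move
# depending on whether consideration C turns out to be true or false.

def credence_sensitivity(p_b_given_c, p_b_given_not_c):
    return abs(p_b_given_c - p_b_given_not_c)

# Invented numbers for B = "we should raise the top marginal rate":
print(round(credence_sensitivity(0.85, 0.30), 2))  # a crux-like consideration: 0.55
print(round(credence_sensitivity(0.80, 0.70), 2))  # a minor consideration: 0.1

# "Look for cruxes" then roughly means "look for the considerations with the
# highest sensitivity", rather than treating crux-hood as a strictly binary category.
```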
(I’ll repeat here: this comment is about cruxes, not about double crux in particular.)