If double crux felt like the Inevitable Correct Thing, what other things would we most likely believe about rationality in order for that to be the case?
I think this is a potentially useful question to ask for three reasons. One, it can be a way to install double crux as a mental habit—figure out ways of thinking which make it seem inevitable. Two, to the extent that we think double crux really is quite useful, but don’t know exactly why, that’s Bayesian evidence for whatever we come up with as potential justification for it. But, three, pinning down sufficient conditions for double crux can also help us see limitations in its applicability (i.e., point toward necessary conditions).
I like the four preconditions Duncan listed:
Epistemic humility.
Good faith.
Confidence in the existence of objective truth.
Curiosity.
I made my list mostly by moving through the stages of the algorithm and trying to justify each one. Again, these are things which I think might or might not be true, but which I think would help motivate one step or another of the double crux algorithm if they were true.
A mindset of gathering information from people (that is, a mindset of honest curiosity) is a good way to combat certain biases (“arguments are soldiers” and all that).
Finding disagreements with others and finding out why they believe what they believe is a good way to gather information from them.
Most people (or perhaps, most people in the intended audience) are biased to argue for their own points as a kind of dominance game / intelligence signaling. This reduces their ability to learn things from each other.
Telling people not to do that, in some appropriate way, can actually improve the situation—perhaps by subverting the signaling game, making things other than winning arguments get you intelligence-signaling-points.
Illusion of transparency is a common problem, and operationalizing disagreements is a good way to fight against the illusion of transparency.
Or: Free-floating beliefs are a common problem, and operationalization is a good way to fight free-floating beliefs.
Or: operationalizing / discussing examples is a good way to make things easier to reason about, which people often don’t take enough advantage of.
Seeking your cruxes helps ensure your belief isn’t free-floating: if the belief is doing any work, it must make some predictions (which means it could potentially be falsified). So, in looking for your cruxes, you’re doing yourself a service, not just the other person.
Giving your cruxes to the other person helps them disprove your beliefs, which is a good thing: it means you’re providing them with the tools to help you learn. You have reason to think they know something you don’t. (Just be sure that your conditions for switching beliefs are good!)
Seeking out cruxes shows the other person that you believe things for reasons: your beliefs could be different if things were different, so they are entangled with reality.
In ordinary conversations, people try to have modus ponens without modus tollens: they want a belief that implies lots of things very strongly, but which is immune to attack. Bayesian evidence doesn’t work this way; a hypothesis which makes a sharp prediction is necessarily sticking its neck out for the chopping block if the prediction turns out false. So, asking what would change your mind (asking for cruxes) is in a way equivalent to asking for implications of your belief. However, it does so in a way which enforces the equivalence of implication and potential falsifier.
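A rough way to make this precise in Bayesian terms (a sketch, using standard probability notation rather than anything specific to the double crux write-up):

```latex
% Conservation of expected evidence: the prior is a weighted
% average of the two possible posteriors.
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)

% So if H sharply predicts E, i.e. P(E \mid H) \approx 1,
% then P(\neg E \mid H) \approx 0, and by Bayes' theorem:
P(H \mid \neg E) = \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} \approx 0
```

That is, the strength with which a belief implies a prediction is exactly the amount of neck it sticks out: you cannot have a belief that is strongly confirmed when the prediction comes true but unmoved when it fails. A belief with no potential falsifiers (P(H|¬E) ≈ P(H) for every E) made no real predictions in the first place.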
Asking for cruxes from them is a good way to avoid wasting time in a conversation. You don’t want to spend time explaining something only to find that it doesn’t change their mind on the issue at hand. (But, you have to believe that they give honest cruxes, and also that they are working to give you cruxes which could plausibly lead to progress rather than ones which will just be impossible to decide one way or the other.)
It’s good to focus on why you believe what you believe, and why they believe what they believe. The most productive conversations will tend to concentrate on the sources of beliefs rather than the after-the-fact reasoning, because this is often where the most evidence lies.
If you disagree with their crux but it isn’t a crux for you, then you may have info for them, but the discussion won’t be very informative for your belief. Also, the weight of the information you have is less likely to be large. Perhaps discuss it, but look for a double crux.
If they disagree with your crux but it isn’t a crux for them, then there may be information for you to extract from them, but you’re allowing the conversation to be biased toward cherry-picking disproof of your belief; perhaps discuss, but try to get them to stick their neck out more so that you’re mutually testing your beliefs.
Of all of this, my attempt to justify looking for a double crux rather than accepting single-person cruxes sticks out to me as especially weak. Also, I think a lot of the above points get something wrong with respect to good faith, but I’m not quite sure how to articulate my confusion on that.