Anecdotal data time! We tried this at last week’s Chicago rationality meetup, with moderate success. Here’s a rundown of how we approached the activity, and some difficulties and confusion we encountered.
Approach:
Before the meeting, some of us came up with lists of possibly contentious topics and/or strongly held opinions, and we used those as starting points by just listing them off to the group and seeing if anyone held the opposite view. Some of the assertions on which we disagreed were:
Cryonic preservation should be standard medical procedure upon death, on an opt-out basis
For the average person, reading the news has no practical value beyond social signalling
Public schools should focus on providing some minimum quality of education to all students before allocating resources to programs for gifted students
The rationality movement focuses too much of its energy on AI safety
We should expend more effort to make rationality more accessible to ‘normal people’
We paired off, with each pair in front of a blackboard, and spent about 15 minutes on our first double crux; once that crux was resolved, the conversations mostly devolved. We then came together, gave feedback, switched partners, and tried again.
Difficulties/confusion:
For the purposes of practice, we had trouble finding points of genuine disagreement – in some cases the argument dissolved once we clarified minor semantic points in the assertion, and in other cases a pair would just sit there and agree on assertion after assertion (though the latter is more a flaw in the way I designed the activity than in the technique itself). However, we all agreed that this technique will be useful when we encounter disagreements in future meetings, and even in the absence of disagreement, the activity of finding cruxes was a useful way of examining the structure of our beliefs.
We were a little confused as to whether coming up with an empirical test to resolve the issue was a satisfactory endpoint, or if we actually needed to seek out the results in order to consider the disagreement resolved.
In one case, when we were debating the cryonics assertion, my interlocutor managed to convince me on all the factual questions on which I thought my disagreement rested, but I still had some lingering doubt – even though I was convinced of the conclusion on an intellectual level, I didn’t grok it. When we learned goal factoring, we were taught not to dismiss fuzzy, difficult-to-define feelings, since they can be genuinely important reasons for our thoughts and behavior. Given its reliance on empiricism, how does Double Crux deal with these feelings, if at all? (Disclaimer: it’s been two years since we learned goal factoring, so maybe we were taught how to deal with this and I just forgot.)
In another case, my interlocutor changed his mind on the question of public schools, but when asked to explain the line of argument that led him to change his mind, he wasn’t able to construct an argument that sounded convincing to him. I’m not sure what happened here, but in the future I would place more emphasis on writing down the key points of the discussion as it unfolds. We did make some use of the blackboards, but it wasn’t very systematic.
Overall it wasn’t as structured as I expected it to be. People didn’t reference the write-up when immersed in their discussions, and didn’t make use of any of the tips you gave. I know you said we shouldn’t be preoccupied with executing “the ideal double crux,” but I somehow still have the feeling that we didn’t quite do it right. For example, I don’t think we focused enough on falsifiability and we didn’t resonate after reaching our conclusions, which seem like key points. But ultimately the model was still useful, no matter how loosely we adhered to it.
I hope some of that was helpful to you! Also, tell Eli Tyre we miss him!
Very useful. I don’t have the time to give you the detailed response you deserve, but I deeply appreciate the data (and Eli says hi).