Note: I’m pretty behind on sleep. This writing quality is likely sub-par for me.
Yeah, that picture seems correct. I think you do need to do a bunch of model sharing as the first stage in a double-crux like that, and, relatedly, you can’t have your cruxes (your **substantive** cruxes) prepared in advance and likely can’t even generate them on your own, even given hours to prepare (or days or weeks or months, depending on the thing).
It’s not obvious that, if I see no reason for the improbable position, I will get much from reflecting on my own “single-player” cruxes before the model sharing. For the balloons example, sure, I could try to imagine cruxes from scratch (but I don’t know much about the person, Guatemala, etc., so I might not do a very good job without more research). It certainly seems unlikely that any cruxes I might generate will coincide with theirs and end up being double-cruxes. It seems best to get quickly to the part where they explain their reasoning/models and I see whether I’m convinced / we talk about what would convince me that their reasoning is or isn’t correct.
If the majority of double-cruxes are actually like this (that is, I see a reason for something that to you seems low-probability), then for those disagreements you won’t be able to prepare cruxes in advance beyond the generic “I don’t have models that would suggest this at all, perhaps because I have little info on the target domain.” It perhaps means that whenever you believe something other people reasonably assign a low prior to, you alone should come prepared with your cruxes, which makes preparedness asymmetric except when people are arguing about two specific “low-prior/complex” beliefs, e.g. minimalism vs. information-density.
In the double-crux workshops I’ve attended, there were attempts to pair up people who had strong disagreements (clashes of models) about given topics, but this often failed because one person would believe a strange/low-prior thing that others hadn’t thought about, so basically no one had pre-existing models to argue back with.