Note: it seems important to notice if it turns out double-crux stops being the best frame. Currently I don’t have strong reason to suspect that, but it seems like a good hypothesis to note.
So, sometimes it seems like my reason for not believing something is “I dunno, that seems like a randomly privileged hypothesis, and I don’t have a precise reason to disbelieve it other than base rates or something.” So the crux is “I don’t see a good reason to privilege it”, and it’s hard to pluck a reason to privilege the hypothesis out of nowhere.
Not saying this is necessarily a good metaphor for the party conversation, but for a different illustration of the point I’m mulling over: if you say ‘Do you think I should quit my job and start selling balloons on the streets of Guatemala?’ I might say ‘um, probably not?’, and if you ask for my crux there it’s like ‘I dunno man, it just seems like a kinda weird thing to do, why would you want to do that?’. It just feels like there’s more model-sharing that needs to be done before I have anything concrete to consider.
And maybe it turns out a) you hate your job, b) you like balloons, c) you like the streets of Guatemala. In which case, okay cool, I guess it is now a reasonable hypothesis to consider (maybe weighed against your current social life and other goals). But if those are all new facts to me, it’s not something I could have generated in advance.
Note: I’m pretty behind on sleep. This writing quality is likely sub-par for me.
Yeah, that picture seems correct. I think you do need to do a bunch of model sharing as the first stage in a double-crux like that, and, relatedly, that you can’t have your cruxes (your **substantive** cruxes) prepared in advance and likely can’t even generate them on your own, even given hours to prepare (or days or weeks or months, depending on the thing).
It’s not obvious that, if I see no reason for the improbable position, I will get much from reflecting on my own “single-player” cruxes before the model sharing. For the balloons example, sure, I could try to imagine reasons from scratch (but I don’t know much about the person, Guatemala, etc., so I might not do a very good job without more research). It certainly seems unlikely that any cruxes I might generate will coincide with theirs and end up being double-cruxes. Seems best to quickly get to the part where they explain their reasoning/models and I see whether I’m convinced / we talk about what would convince me that their reasoning is or isn’t correct.
If the majority of double-cruxes are actually like this (that is, I see a reason for something that to you seems low-probability), then for those disagreements you won’t be able to have cruxes prepared in advance beyond the generic “I don’t have models that would suggest this at all, perhaps because I have little info on the target domain”. It perhaps means that for all instances where you believe something that other people reasonably have a low prior on, you alone should come prepared with your cruxes. This basically means there’s asymmetrical preparedness, except when people are arguing about two specific “low prior probability/complex” beliefs, e.g. minimalism vs. information-density.
In the double-crux workshops I’ve attended, there were attempts to find people who had strong disagreements (clashes of models) about given topics, but this struggled because often one person would believe a strange, low-prior-probability thing which others hadn’t thought about, and so basically no one had pre-existing models to argue back with.