Something I noticed while in a recent argument – it’s hard to find cruxes for… negative or null options? Or something? (I don’t know what the right words to use here)
I was debating with a friend whether we should host a party soon. My take was “I dunno, I’m not excited for a party right now.” We actually agreed at the meta level that we wanted to hold some kind of event. We had a product to build together. But when I queried “what would make me excited about holding a party?” the answer was “I dunno, a vision for a particular party that felt exciting”, which wasn’t a very double-cruxy answer.
As it turns out, a short while later I came up with a spin on the potential party that I was excited about. It came to me fairly abruptly, without much deliberateness. This was great for the purpose of resolving the object-level conversation, but dissatisfying from the standpoint of “right now I’m very interested in building deep knowledge of how to resolve disagreements, and I don’t feel like I figured anything out as a result of this.”
I’m noting this here to help flesh out some of the open problems in disagreement resolution.
I was the friend in the story. Here’s something which might be part of the picture of resolving this. I had an argument for wanting a party with the structure:
A → B; I also believed A, so I concluded B was true.
We could have double-cruxed over the entailment A → B, or, if you already agreed to that, over the truth of A, where A can be made up of many propositions.
The shift is perhaps going from the double-crux being about a conclusion about what action to take, to being about whether an argument (entailment) plus its premises are true. These, of course, are my cruxes once/if I can unpack my beliefs into this structure (which it seems like you should be able to do).
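As a rough sketch of that structure (with placeholder labels, just to make the decomposition explicit), the argument is modus ponens, and the candidate cruxes are either the entailment itself or any of the premises bundled into A:

```latex
% Sketch only: B stands for the conclusion ("we should hold the party"),
% A for the conjunction of premises supporting it.
\[
  A \;=\; A_1 \wedge A_2 \wedge \dots \wedge A_n,
  \qquad
  \frac{A \to B \qquad A}{B}
\]
% Candidate cruxes: the entailment A -> B, or any individual premise A_i.
```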
Overall though, I’m a bit confused by “negatives” and “null options”. In all cases you’re arguing about how the world is. You can always ask “what is my crux that the world is not that way?”, “what are the positive beliefs that cause me to think the world is not the way my interlocutor does?”
Note: it seems important to notice if it turns out double-crux stops being the best frame. Currently I don’t have strong reason to suspect that, but it seems like a good hypothesis to note.
So, sometimes it seems like my reason for not believing something is “I dunno, that seems like privileging a random hypothesis, and I don’t precisely have a reason to disbelieve it other than base rates or something.” So the crux is “I don’t see a good reason to privilege it”, and it’s hard to pluck a reason to privilege the hypothesis out of nowhere.
Not saying this is necessarily a good metaphor for the party conversation, but for a different illustration of the point I’m mulling over: if you say ‘Do you think I should quit my job and start selling balloons on the streets of Guatemala?’ I might say ‘um, probably not?’, and if you ask for my crux there it’s like ‘I dunno man, it just seems like a kinda weird thing to do, why would you want to do that?’. It just feels like there’s more model-sharing that needs to be done before I have anything concrete to consider.
And maybe it turns out a) you hate your job, b) you like balloons, c) you like the streets of Guatemala. In which case, okay cool, I guess it is now a reasonable hypothesis to consider (maybe weighed against your current social life and other goals). But if those are all new facts for me, it’s not something I could have generated in advance.
Note: I’m pretty behind on sleep. This writing quality is likely sub-par for me.
Yeah, that picture seems correct. I think you do need to do a bunch of model sharing as the first stage in a double-crux like that, and, relatedly, that you can’t have your cruxes (your **substantive** cruxes) prepared in advance, and likely can’t even generate them on your own, even given hours to prepare (or days or weeks or months, depending on the thing).
It’s not obvious that, if I see no reason for the improbable position, I will get much from reflecting on my own “single-player” cruxes before the model sharing. For the balloons example, sure, I could try to imagine reasons from scratch (but I don’t know much about the person, Guatemala, etc., so I might not do a very good job without more research). It certainly seems unlikely that any cruxes I might generate will coincide with theirs and end up being double-cruxes. Seems best to quickly get to the part where they explain their reasoning/models and I see whether I’m convinced / we talk about what would convince me that their reasoning is or isn’t correct.
If the majority of double-cruxes are actually like this (that is, I see a reason for something that to you seems low-probability), then for those disagreements you won’t be able to have cruxes prepared in advance beyond the generic “I don’t have models that would suggest this at all, perhaps because I have little info on the target domain”. It perhaps means that in all instances where you believe something other people reasonably have a low prior on, you alone should be prepared with your cruxes, which basically means there’s asymmetric preparedness except when people are arguing about two specific “low prior probability/complex” beliefs, e.g. minimalism vs information-density.
In the double-crux workshops I’ve attended, there were attempts to find people who had strong disagreements (clashes of models) about given topics, but this struggled because often one person would believe a strange/low-prior-probability thing which others hadn’t thought about, and so basically no one had pre-existing models to argue back with.