I’d been working on a sequence explaining this all in more detail (I think there are a lot of moving parts and a fair amount of inferential distance to cover here). I’ll mostly respond in the form of “finish that sequence.”
But here’s a quick paragraph that more fully expands what I actually believe:
If you’re building a product with someone (metaphorical product or literal product), and you find yourself disagreeing, and you explain “This is important because X, which implies Y,” and they say “What!? But, A, therefore B!”, and then you both keep repeating those points over and over… you’re going to waste a lot of time, and possibly build a confused Frankenstein product that’s less effective than if you could figure out how to successfully communicate.
In that situation, I claim you should be doing something different, if you want to build a product that’s actually good.
If you’re not building a product, this is less obviously important. If you’re just arguing for fun, I dunno, keep at it I guess.
A separate, further claim is that the reason you’re miscommunicating is that you have a bunch of hidden assumptions in your belief-network, or in the frames that underlie your belief network. I think you will continue to disagree and waste effort until you figure out how to make those hidden assumptions explicit.
You don’t have to rush that process. Take your time to mull over your beliefs, do Focusing, or do whatever else helps you tease out the hidden assumptions without accidentally crystallizing them wrong.
Meanwhile, you can reference the fact that the differing assumptions exist by giving them placeholder names like “the sparkly pink purple ball thing”.
This isn’t an “obligation” I think people should have. But I think it’s a law-of-the-universe that if you don’t do this, your group will waste time and/or your product will be worse.
(Lots of companies successfully build products without dealing with this, so I’m not at all claiming you’ll fail. And meanwhile there are lots of other tradeoffs your company might be making that are bad and should be improved, and I’m not confident this is the most important thing to be working on.)
But among rationalists, who are trying to improve their rationality while building products together, I think resolving this issue should be a high priority, which will pay for itself pretty quickly.
Thirdly: I claim there is a skill to building up a model of your beliefs, your cruxes for those beliefs, and the frames that underlie your beliefs… such that you can make normally implicit things explicit in advance. (Or, at least, every time you disagree with someone about one of your beliefs, you automatically flag what the crux for that belief was, and then keep track of it for future reference.) So, by the time you get to a heated disagreement, you already have some sense of what sort of things would change your mind, and why you formed the beliefs you did.
You don’t have to share this with others, esp. if they seem to be adversarial. But understanding it for yourself can still help you make sense of the conversation.
Relatedly, there’s a skill to detecting when other people are in a different frame from you, and helping them to articulate their frame.
Literal companies building literal products can alleviate this problem by only hiring people with similar frames and beliefs, so they have an easier time communicating. But that’s not always an option.
This seems important because weird, intractable conversations have shown up repeatedly...
- in the EA ecosystem (where, even though people are mostly building different products, there is a shared commons that is something of a “collectively built product” everyone has a stake in, and where billions of dollars, and billions of dollars worth of reputation, are at stake)
- on LessWrong the website (where everyone has a stake in a shared product of “how we have conversations together” and what truthseeking means)
- on the LessWrong development team (where we are literally building a product, a website, and often have persistent, intractable disagreements about UI, minimalism, how shortform should work, whether Vulcan is a terrible shitshow of a framework that should be scrapped, etc.)
> every time you disagree with someone about one of your beliefs, you [can] automatically flag what the crux for the belief was
This is the bit that is computationally intractable.
Looking for cruxes is a healthy move, exposing the moving parts of your beliefs in a way that can lead to you learning important new info.
However, there are an incredible number of cruxes for any given belief. If I think a hypothetical project should accelerate its development 2x in the coming month, I could change my mind if I learn some important fact about the long-term benefits of spending the month refactoring the entire codebase; I could change my mind if I learn that the time we currently spend on things is required for models of the code to propagate and become common knowledge among the staff; I could change my mind if my models of geopolitical events suggest that our industry is going to tank next week and we should get out immediately.
I’m not claiming you can literally do this all the time. [Ah, an earlier draft of the previous comment emphasized that this was all “things worth pushing for on the margin,” and explicitly not something you were supposed to sacrifice all other priorities for. I think I then rewrote the post and forgot to re-emphasize that clarification.]
I’ll try to write up better instructions/explanations later, but to give a rough idea of the amount of work I’m talking about: I’m saying “spend a bit more time than you normally do in ‘doublecrux mode.’” [This can be, like, an extra half hour sometimes when having a particularly difficult conversation.]
When someone seems obviously wrong, or you seem obviously right, ask yourself “which cruxes are most load-bearing?”, and then:
- Be mindful as you do it, to notice what mental motions you’re actually performing that help. Basically, apply Tuning Your Cognitive Strategies to the double crux process, to improve your feedback loop.
- When you’re done, cache the results. Maybe by writing them down, or maybe just by thinking a bit harder about them so you remember them better.
The point is not to have fully mapped out cruxes of all your beliefs. The point is that you generally have practiced the skill of noticing what the most important cruxes are, so that a) you can do it easily, and b) you keep the results computed for later.