I think I’m expecting people to understand what “finding cruxes” looks like, but this is probably unreasonable of me. This is an hour-long class at CFAR, before “Double Crux” is actually taught. And even then, I suspect most people do not actually get the finer, deeper points of Finding Cruxes.
My psychology is interacting with this in some unhelpful way.
I’m living happily without that frustration, because for me agreement isn’t a goal. A comment that disagrees with me is valuable if it contains interesting ideas, regardless of the commenter’s private reasons for disagreeing; if it has no interesting ideas, I simply don’t reply. In my own posts and comments I also optimize for value of information (e.g. bringing up ideas that haven’t been mentioned yet), not for changing anyone’s mind. The game is about win-win trade of interesting ideas, not zero-sum tug of war.
I’m surprised to see finding cruxes contrasted with value of information considerations.
To me, much of the value of looking for cruxes is that it can guide the conversation to the most update-rich areas.
I try to optimize my posts and comments for value of information (e.g. bringing up new ideas).
Correct me if I’m wrong, but I would guess that part of your sense of taste about what makes something an interesting new idea is whether it’s relevant to anything else (in addition to maybe how beautiful or whatever it is on its own). And whether it would make anybody change their mind about anything seems like a pretty big part of relevance. So a significant part of what makes an idea interesting is whether it’s related to your or anybody else’s cruxes, no?
The game to me is about win-win trade of interesting ideas.
Setting aside whether debates between people who disagree are themselves win-win, cruxes are interesting (to me) not just in the context of a debate between opposing sides located in two different people, but also when I’m just thinking about my own take on an issue.
Given these considerations, it seems like the best argument for not being explicit about cruxes is that they’re already implicit in your sense of taste about what’s interesting, which is correctly guiding you to ask the right questions and look for the right new pieces of information.
That seems plausible, but I still suspect it’s often helpful to explicitly check what would make you change your mind about something.
I think caring about agreement first vs. VoI first leads to different behavior. Here are two test cases:
1) Someone strongly disagrees with you but doesn’t say anything interesting. Do you ask for their reasons (agreement first) or ignore them and talk to someone else who’s saying interesting but less disagreeable things (VoI first)?
2) You’re one of many people disagreeing with a post. Do you spell out your reasons that are similar to everyone else’s (agreement first) or try to say something new (VoI first)?
The VoI option works better for me. Given the choice between bringing up something abstractly interesting and something I feel strongly about, I’ll choose the interesting idea every time. It’s more fun and more fruitful.

Gotcha, this makes sense to me. I would want to follow the VoI strategy in each of your two test cases.
[ I responded to an older, longer version of cousin_it’s comment here, which was very different from what it looks like at present; right now, my comment doesn’t make a whole lot of sense without that context, but I’ll leave it I guess ]
This is a fascinating, alternative perspective!
If this is what LW is for, then I’ve misjudged it and don’t yet know what to make of it.
To me, the game isn’t about changing minds, but about exchanging interesting ideas to mutual benefit. Zero-sum tugs of war are for political subreddits.
I disagree with the frame.
What I’m into is having a community steered towards seeking truth together. And this is NOT a zero-sum game at all. Changing people’s minds so that we’re all more aligned with truth seems infinite-sum to me.
Why? Because the more groundwork we lay for our foundation, the more we can DO.
Were rockets built by people who just exchanged interesting ideas about rocket-building but never bothered to check each other’s math? We wouldn’t have gotten very far if that’s where we had stayed. Resolving each layer of disagreement is what let people coordinate on how to build rockets, and then actually build them.
Similarly with rationality. I’m interested in changing your mind about a lot of things. I want to convince you that I can see, and am seeing, things in the universe that, if we can agree on them one way or another, would allow us to move to the next step, where we’d unearth a whole NEW set of disagreements to resolve. And so forth. That is progress.
I’m willing to concede that LW might not be for this thing, and that seems maybe fine. It might even be better!
But I’m going to look for the thing somewhere, if not here.

(I had a mathy argument here, pointing to this post as a motivation for exchanging ideas instead of changing minds. It had an error, so retracted.)
Yup! That totally makes sense (the stuff in the link) and the thing about the coins.
Also not what I’m trying to talk about here.
I’m not interested in sharing posteriors. I’m interested in sharing the methods by which people arrive at their posteriors (this is what Double Crux is all about).
So in the fair/unfair coin example in the link, the way I’d “change your mind” about whether a coin flip was fair would be to ask, “You seem to think the coin has a 39% chance of being unfair. What would change your mind about that?”
Suppose the answer is, “Well, it depends on what happens when the coin is flipped,” and let’s say this is also a Double Crux for me.
At this point we’d have to start sharing our evidence or gathering more evidence to actually resolve the disagreement. And once we did, we’d both converge towards one truth.
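To make the “share evidence and converge” step concrete, here’s a minimal sketch in Python. The numbers are my own illustration, not from the linked post: I’m assuming “unfair” means the coin lands heads 75% of the time, and reusing the 39% figure as the shared prior.

```python
# Minimal sketch (illustration only, not from the linked post): we both start
# from the same 39% prior that the coin is unfair, then update on the same
# flips. "Unfair" is assumed here to mean P(heads) = 0.75.

def update(prior_unfair, flip, p_heads_if_unfair=0.75):
    """One Bayesian update of P(unfair) on a single flip ('H' or 'T')."""
    p_flip_if_unfair = p_heads_if_unfair if flip == "H" else 1 - p_heads_if_unfair
    p_flip_if_fair = 0.5
    joint_unfair = p_flip_if_unfair * prior_unfair
    joint_fair = p_flip_if_fair * (1 - prior_unfair)
    return joint_unfair / (joint_unfair + joint_fair)

flips = ["H", "H", "T", "H", "H", "H"]  # the evidence we gather or share

belief_you = belief_me = 0.39  # the shared 39% prior
for flip in flips:
    belief_you = update(belief_you, flip)
    belief_me = update(belief_me, flip)

# Same prior, same evidence, same update rule: the posteriors match exactly.
print(round(belief_you, 3), round(belief_me, 3))
```

Once we’re both looking at the same flips, Bayes does the rest; any remaining disagreement would have to come from different priors or different models of what “unfair” means.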
I think this is a super important perspective. I also think that stating cruxes is a surprisingly good way to find good pieces of information to propagate. My model of this is something like “a lot of topics show up again and again, which suggests that most participants have already heard the standard arguments and standard perspectives. Focusing on people’s cruxes helps the discussion move towards sharing pieces of information that haven’t been shared yet.”