I, for one, am not anti-criticism.
I also suspect Ray isn’t either, and isn’t saying that in his post, but it’s a long post, so I might have missed something.
The thing I find annoying to deal with is when discussion is subtly more about politics than the actual thing, which Ray does mention.
I feel like people get upvoted because:
they voice any dissenting opinion at all
they include evidence for their point, regardless of how relevant the point is to the conversation
they include technical language or references to technical topics
they cheer for the correct tribes and boo the other tribes
etc.
I appreciated the criticisms raised in my Circling post, and I upvoted a number of the comments that raised objections.
But the subsequent “arguments” often spiraled into people talking past each other, wielding arguments as weapons, etc., and not looking for cruxes. I find that last part alarmingly common here, to the degree that I suspect people do not in fact WANT their cruxes to be on the table, and I’ve read multiple comments that support this.
Explicitly chiming in to clarify that yes, this is exactly my concern.
I only dedicated a couple paragraphs to this (search for “Incentivizing Good Ideas and Good Criticism”) because there were a lot of different things to talk about, but a central crux of mine is that, while much of the criticism you’ll see on LW is good, a sizeable chunk of it is just a waste of time and/or actively harmful.
I want better criticism, and I think the central disagreement is something like Said/Cousin_it and a couple others disagreeing strongly with me/Oli/Ben about what makes useful criticism.
(To clarify, I also think that many criticisms in the Circling thread were quite good. For example, it’s very important to determine whether Circling is training introspection/empathy (extrospection?), or ‘just’ inducing hypnosis. This is important both within and without the paradigm that Unreal was describing of using Circling as a tool for epistemic rationality. But, a fair chunk of the comments just seemed to me to express raw bewilderment or hostility in a way that took up a lot of conversational space without moving anything forward.)
Let me confirm your suspicions, then: I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply. There was a good deal of discussion of this in some threads about “Double Crux” a while back (I haven’t the time right now, but later I can dig up the links, if requested). Suffice it to say that there is a deep disagreement here about the nature of disputes, how to resolve them, their causes, etc.
This is surprising to me. A crux is a thing that if you didn’t believe it you’d change your mind on some other point—that seems like a very natural concept!
Is your contention that you usually can’t find any one statement such that if you changed your mind about it, you’d change your mind about the top-level issue? (Interestingly, this is the thrust of the top comment by Robin Hanson under Eliezer’s Is That Your True Rejection? post.)
I do not know how to operationalize this into a bet, but I would if I could.
My bet would be something like…
If a person can Belief Report / do Focusing on their beliefs (this might already eliminate a bunch of people)
Then I bet some lower-level belief-node (a crux) could be found that would alter the upper-level belief-nodes if the value/sign/position/weight of that cruxy node were to be changed.
Note: Belief nodes do not have to be binary (0 or 1). They can be fuzzy (0-1). Belief nodes can also be conjunctive.
If a person doesn’t work this way, I’d love to know.
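(To make the belief-node picture above concrete, here is a minimal Python sketch of one way to read it, not any official CFAR formalism: fuzzy belief values in [0, 1], an upper-level belief combined conjunctively from lower-level nodes, and a crux as any node whose change would flip the conclusion. The function names and the example beliefs are purely illustrative.)

```python
# Illustrative sketch only (made-up names and numbers, not CFAR's actual formalism):
# fuzzy belief nodes valued in [0, 1], an upper-level belief that combines them
# conjunctively, and a "crux" defined as any lower-level node whose flip would
# change the upper-level conclusion.

from typing import Dict, List


def conjunctive_belief(nodes: Dict[str, float]) -> float:
    """Combine lower-level fuzzy belief values conjunctively (their product)."""
    result = 1.0
    for value in nodes.values():
        result *= value
    return result


def find_cruxes(nodes: Dict[str, float], threshold: float = 0.5) -> List[str]:
    """Return the nodes whose flip (value -> 1 - value) would move the
    upper-level belief across the acceptance threshold."""
    baseline = conjunctive_belief(nodes) >= threshold
    cruxes = []
    for name, value in nodes.items():
        flipped = dict(nodes)
        flipped[name] = 1.0 - value
        if (conjunctive_belief(flipped) >= threshold) != baseline:
            cruxes.append(name)
    return cruxes


# Made-up example: an upper-level belief resting on three fuzzy sub-beliefs.
beliefs = {
    "Circling trains introspection rather than just inducing hypnosis": 0.8,
    "participants' reports track real changes": 0.7,
    "those changes transfer outside the Circle": 0.9,
}
print(round(conjunctive_belief(beliefs), 2))  # 0.5
print(find_cruxes(beliefs))  # here, flipping any one node changes the conclusion
```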
There are a lot of rather specific assumptions going into your model, here, and they’re ones that I find to be anywhere from “dubious” to “incomprehensible” to “not really wrong, but thinking of things that way is unhelpful”. (I don’t, to be clear, have any intention of arguing about this here—just pointing it out.) So when you say “If a person doesn’t work this way, I’d love to know.”, I don’t quite know what to say; in my view of things, that question can’t even be asked because many layers of its prerequisites are absent. Does that mean that I “don’t work this way”?
Aw Geez, well if you happen to explain your views somewhere I’d be happy to read them. I can’t find any comments of yours on Sabien’s Double Crux post or on the post called Contra Double Crux.
The moderators moved my comments originally made on the former post… to… this post.
I think I’m expecting people to understand what “finding cruxes” looks like, but this is probably unreasonable of me. This is an hour-long class at CFAR, before “Double Crux” is actually taught. And even then, I suspect most people do not actually get the finer, deeper points of Finding Cruxes.
My psychology is interacting with this in some unhelpful way.
I’m living happily without that frustration, because for me agreement isn’t a goal. A comment that disagrees with me is valuable if it contains interesting ideas, no matter the private reasons; if it has no interesting ideas, I simply don’t reply. In my own posts and comments I also optimize for value of information (e.g. bringing up ideas that haven’t been mentioned yet), not for changing anyone’s mind. The game is about win-win trade of interesting ideas, not zero-sum tug of war.
I’m surprised to see finding cruxes contrasted with value of information considerations.
To me, much of the value of looking for cruxes is that it can guide the conversation to the most update-rich areas.
Correct me if I’m wrong, but I would guess that part of your sense of taste about what makes something an interesting new idea is whether it’s relevant to anything else (in addition to maybe how beautiful or whatever it is on its own). And whether it would make anybody change their mind about anything seems like a pretty big part of relevance. So a significant part of what makes an idea interesting is whether it’s related to your or anybody else’s cruxes, no?
Setting aside whether debates between people who disagree are themselves win-win, cruxes are interesting (to me) not just in the context of a debate between opposing sides located in two different people, but also when I’m just thinking about my own take on an issue.
Given these considerations, it seems like the best argument for not being explicit about cruxes, is if they’re already implicit in your sense of taste about what’s interesting, which is correctly guiding you to ask the right questions and look for the right new pieces of information.
That seems plausible, but I’m skeptical that it’s not often helpful to explicitly check what would make you change your mind about something.
I think caring about agreement first vs VoI first leads to different behavior. Here are two test cases:
1) Someone strongly disagrees with you but doesn’t say anything interesting. Do you ask for their reasons (agreement first) or ignore them and talk to someone else who’s saying interesting but less disagreeable things (VoI first)?
2) You’re one of many people disagreeing with a post. Do you spell out your reasons that are similar to everyone else’s (agreement first) or try to say something new (VoI first)?
The VoI option works better for me. Given the choice whether to bring up something abstractly interesting or something I feel strongly about, I’ll choose the interesting idea every time. It’s more fun and more fruitful.
Gotcha, this makes sense to me. I would want to follow the VoI strategy in each of your two test cases.
[ I responded to an older, longer version of cousin_it’s comment here, which was very different from what it looks like at present; right now, my comment doesn’t make a whole lot of sense without that context, but I’ll leave it I guess ]
This is a fascinating, alternative perspective!
If this is what LW is for, then I’ve misjudged it and don’t yet know what to make of it.
I disagree with the frame.
What I’m into is having a community steered towards seeking truth together. And this is NOT a zero-sum game at all. Changing people’s minds so that we’re all more aligned with truth seems infinite-sum to me.
Why? Because the more groundwork we lay for our foundation, the more we can DO.
Were rockets built by people who just exchanged interesting ideas for rocket-building but never bothered to check each other’s math? We wouldn’t have gotten very far if that’s where we’d stayed. So resolving each layer of disagreement led to being able to coordinate on how to build rockets and then building them.
Similarly with rationality. I’m interested in changing your mind about a lot of things. I want to convince you that I can and am seeing things in the universe that, if we can agree on them one way or another, would then allow us to move to the next step, where we’d unearth a whole NEW set of disagreements to resolve. And so forth. That is progress.
I’m willing to concede that LW might not be for this thing, and that seems maybe fine. It might even be better!
But I’m going to look for the thing somewhere, if not here.
(I had a mathy argument here, pointing to this post as a motivation for exchanging ideas instead of changing minds. It had an error, so retracted.)
Yup! That totally makes sense (the stuff in the link) and the thing about the coins.
Also not what I’m trying to talk about here.
I’m not interested in sharing posteriors. I’m interested in sharing the methods by which people arrive at their posteriors (this is what Double Crux is all about).
So in the fair/unfair coin example in the link, the way I’d “change your mind” about whether a coin flip was fair would be to ask, “You seem to think the coin has a 39% chance of being unfair. What would change your mind about that?”
Suppose the answer is, “Well, it depends on what happens when the coin is flipped,” and let’s say this is also a Double Crux for me.
At this point we’d have to start sharing our evidence or gathering more evidence to actually resolve the disagreement. And once we did, we’d both converge towards one truth.
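(For concreteness, here is a toy Bayesian sketch of that convergence in Python. The model of “unfair”, a coin that lands heads 75% of the time, and the second person’s 10% prior are assumptions made up for the illustration; only the 39% figure comes from the example above.)

```python
# Toy sketch of the coin example above: two people with different priors on
# "the coin is unfair" update on the same shared flips and converge.
# The 75%-heads bias model and the second person's 10% prior are assumptions
# added purely for illustration.

def posterior_unfair(prior_unfair: float, heads: int, tails: int,
                     bias: float = 0.75) -> float:
    """P(unfair | flips), comparing a fair coin to one with P(heads) = bias."""
    like_fair = 0.5 ** (heads + tails)
    like_unfair = (bias ** heads) * ((1 - bias) ** tails)
    return (prior_unfair * like_unfair) / (
        prior_unfair * like_unfair + (1 - prior_unfair) * like_fair
    )


for heads, tails in [(8, 2), (40, 10)]:
    a = posterior_unfair(0.39, heads, tails)  # person who started at 39% unfair
    b = posterior_unfair(0.10, heads, tails)  # person who started at 10% unfair
    print(f"{heads} heads, {tails} tails: {a:.2f} vs {b:.2f}")
# 8 heads, 2 tails: 0.80 vs 0.42   -- still disagreeing after a little evidence
# 40 heads, 10 tails: 1.00 vs 1.00 -- enough shared evidence converges them
```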
I think this is a super important perspective. I also think that stating cruxes is a surprisingly good way to find good pieces of information to propagate. My model of this is something like “a lot of topics show up again and again, which suggests that most participants have already heard the standard arguments and standard perspectives. Focusing on people’s cruxes helps the discussion move towards sharing pieces of information that haven’t been shared yet.”