But the subsequent “arguments” often spiraled into people … not looking for cruxes. I find this to be alarmingly common here, to the degree that I suspect people do not in fact WANT their cruxes to be on the table, and I’ve read multiple comments that support this.
Let me confirm your suspicions, then: I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply. There was a good deal of discussion of this in some threads about “Double Crux” a while back (I haven’t the time right now, but later I can dig up the links, if requested). Suffice it to say that there is a deep disagreement here about the nature of disputes, how to resolve them, their causes, etc.
I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply.
This is surprising to me. A crux is a thing that if you didn’t believe it you’d change your mind on some other point—that seems like a very natural concept!
Is your contention that you usually can’t find any one statement such that, if you changed your mind about it, you’d change your mind about the top-level issue? (Interestingly, this is the thrust of the top comment by Robin Hanson under Eliezer’s Is That Your True Rejection? post.)
I do not know how to operationalize this into a bet, but I would if I could.
My bet would be something like…
If a person can Belief Report / do Focusing on their beliefs (this might already eliminate a bunch of people)
Then I bet some lower-level belief-node (a crux) could be found that would alter the upper-level belief-nodes if the value/sign/position/weight of that cruxy node were to be changed.
Note: Belief nodes do not have to be binary (0 or 1). They can be fuzzy (0-1). Belief nodes can also be conjunctive. (I sketch this picture in code below.)
If a person doesn’t work this way, I’d love to know.
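To make the picture concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption rather than anyone’s formalization of cruxes: the node names are made up, “conjunctive” is modeled as a product of credences, and “changing your mind” is modeled as the upper-level belief crossing a 0.5 threshold.

```python
# Minimal sketch of the belief-node model described above. All names, the
# product rule for conjunction, and the 0.5 threshold are illustrative
# assumptions, not a claim about how CFAR or anyone else defines a crux.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class BeliefNode:
    """A belief with a fuzzy credence in [0, 1]."""
    name: str
    credence: float


def conjunctive(children: Dict[str, BeliefNode]) -> float:
    """One possible rule for an upper-level node: treat it as a conjunction
    of its children, modeled here as the product of their credences."""
    result = 1.0
    for child in children.values():
        result *= child.credence
    return result


def is_crux(children: Dict[str, BeliefNode], candidate: str,
            combine: Callable[[Dict[str, BeliefNode]], float] = conjunctive,
            threshold: float = 0.5) -> bool:
    """Treat a child node as a crux if pushing its credence to the opposite
    extreme moves the upper-level belief across the (assumed) threshold."""
    before = combine(children) >= threshold
    original = children[candidate].credence
    children[candidate].credence = 1.0 - round(original)  # flip toward the opposite pole
    after = combine(children) >= threshold
    children[candidate].credence = original  # restore the original credence
    return before != after


# Hypothetical example: "the project will succeed" as a conjunction of two
# lower-level beliefs.
children = {
    "funding_arrives": BeliefNode("funding_arrives", 0.9),
    "team_stays": BeliefNode("team_stays", 0.8),
}
print(is_crux(children, "funding_arrives"))  # True: flipping it flips the conclusion
```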
There are a lot of rather specific assumptions going into your model, here, and they’re ones that I find to be anywhere from “dubious” to “incomprehensible” to “not really wrong, but thinking of things that way is unhelpful”. (I don’t, to be clear, have any intention of arguing about this here—just pointing it out.) So when you say “If a person doesn’t work this way, I’d love to know.”, I don’t quite know what to say; in my view of things, that question can’t even be asked, because many layers of its prerequisites are absent. Does that mean that I “don’t work this way”?
Aw geez, well if you happen to explain your views somewhere, I’d be happy to read them. I can’t find any comments of yours on Sabien’s Double Crux post or on the post called Contra Double Crux.
The moderators moved my comments, originally made on the former post… to… this post.