I don’t know what you mean by “true reason”. I’ve defined a “crux” as a statement holding probability such that whether it is true or not actually affects the confidence in the belief. Could you define “true reason” at this level of detail?
I’m confused about where we disagree:
1. When someone gives me a reason for why they believe something, I don’t assume that they gave me a crux.
2. When I ask someone “if that reason turned out to not be true, would you still be just as confident in your belief?”, I’ll usually trust them when they say “yes” or “no”.
3. If, after 2, I show them that their reason is actually false and they say “oh, that actually didn’t change my mind like I predicted”, then most people would feel weird/bad about being inconsistent and would try to resolve it. This situation is also good, but I predict it’s unlikely.
I thought that a crux was a statement such that, if belief in it is altered, the conclusion changes. A true reason is something that is actually used in the reasoning. The opposite is a fake reason: it is a reason in that it entails the conclusion, but it is fake because it is not actually used in the reasoning; it has only “plausible affirmability”. A non-crux would be a statement that, if changed, would not move the conclusion. The combinations:

- A true reason that is a non-crux: after the argument you hold the same conclusion with different reasons/groundings.
- A fake reason that is a non-crux: after the argument your belief is implausible (or it’s a talking point that can be repeated, losing rhetorically but with no change in positions).
- A fake reason that is a crux: you will profess a different conclusion after the argument.
- A true reason that is a crux: your position actually changes.
I do find it strange that the line of questioning doesn’t trust that “why do you believe that?” is answered accurately but does trust that “would you change your mind?” is. A common misreading of a question like “Why do you believe that?” is “please give a defence of your opinion”, which would yield an inaccurate answer. The point of a clarifying line of questions would be to disambiguate: “no really, I am interested in the why and would like you to share it”.
Thanks for explaining the differences and going into detail on the combinations. I defined a crux in this post as anything with probability attached to it, such that if it wasn’t true, the confidence of the belief would lower. This is more general and covers cases where multiple reasons lead to a belief.
For example, I believe with 99% confidence that I picked the fair coin after I flipped it 10 times. Each of those 10 flips contributes to the belief, and each is a crux.
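As a sketch of how each flip can move the confidence, here is a minimal Bayesian update of “fair coin” against a hypothetical biased alternative. The 0.8 heads bias and the uniform prior are my assumptions for illustration, not from the post:

```python
# Sketch: fair coin (p = 0.5) vs a hypothetical biased coin
# (assumed p = 0.8 for heads); the 50/50 prior is also assumed.

def posterior_fair(flips, p_biased=0.8, prior_fair=0.5):
    """P(coin is fair | observed flips), by Bayes' rule."""
    heads = flips.count('H')
    tails = len(flips) - heads
    like_fair = 0.5 ** len(flips)
    like_biased = p_biased ** heads * (1 - p_biased) ** tails
    num = like_fair * prior_fair
    return num / (num + like_biased * (1 - prior_fair))

print(posterior_fair('HTHTTHTHHT'))  # 5 heads, 5 tails: ~0.90
print(posterior_fair('HTHTTHTHHH'))  # one flip changed: ~0.70
```

Changing any single flip moves the posterior, so under the “moves the confidence” definition each flip counts as a crux.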
I don’t quite understand the disambiguation in the last line. Some people do interpret it as “give me a defense/good arguments for that belief”, but I don’t see how “no really, …” couldn’t also be misinterpreted the same way.
To clarify why I trust one statement and not the other: I used this technique a couple of days ago with two guys. I asked one of them “why do you believe this?” and didn’t get a crux. I asked if he’d be just as confident if his reason didn’t exist, and he said he’d be just as confident.
After then explaining that I’m looking for reasons that contribute to the confidence of his belief, he said “oh, I get it” and we had a very productive conversation.
I think that drawing the picture, asking for their confidence, asking why, and asking if it’s a crux helps tremendously towards a productive conversation. I think this process (which takes like 1-2 minutes per iteration) disambiguates the “why” question mentioned above. (Though, if you have better phrasings that are clearer, I’d be happy to hear them.)
I had in mind when writing the cases that the confidences could be degrees. I don’t see how realness being graded rather than binary explains any disparities.
Trying to be clear about what I mean and how to effectively communicate it: I do define both “realness” and “cruxiness” as similar continuums rather than binaries, which probably wasn’t apparent in my expression since I was thinking of expressing them through their poles. But I think that a belief can be “half-real”, where you have a mix of reasons that you don’t know whether you state in name only or whether they actually convinced/convince you. Similarly, the other aspect has a slide in it. I will reserve the name “crux” for the concept used in the post and rename my other aspect “stateness”. I think they are the same, but I am not entirely sure.
Someone close to me pointed out that being smart can make you argumentative: non-smartasses might disagree with a claim, but if two smartasses agree with a claim, they can still disagree on why that position should be held. I have found it very helpful to debate whether the reasons I believe something are correct or not. To phrase it differently: if I get something right based on luck, I made an error, because I should have arrived at the answer in a systematic way that can be relied on. In more middling cases, when I apply a heuristic, the heuristic can be more or less applicable to the situation, and applying a heuristic where it is not meant to apply is an error-like thing.
Using probabilities can be very model-ambivalent, which makes different models somewhat interoperable. But it means that model-sensitive aspects of arguing are going to be hidden or very hard to express in that kind of language. I still think there is a danger that any claim with zero probability-moving power would be judged to be a free-spinning wheel. Then there is the issue of whether valuable zero-probability-moving claims exist, and whether it is productive to bring such entities into a cooperative deliberation or argument. The point of the crux method is to identify points that lead to effective resolution. Therefore it depends on whether we use resolution to be more deliberative/communicative, or view agreement as the higher goal, so that deliberation and communication are tools to get to agreement.
In the coin flip example, say that you flip the coins and somebody says “there are as many tails as heads”. When you start to think about what you would believe if one of the flips had a different result, there are multiple paths:

1) Coin #1 is different; the bystander doesn’t say that the amounts are equal.
2) Coin #1 is different; the bystander does say the amounts are equal; coin #2 is different also.
3) All coins have the opposite result; the bystander says the amounts are equal.

It seems strange to me that every single one of your cruxes could be different yet you would still maintain that the coin was fair. That doesn’t sound like a move in confidence. That doesn’t seem to fulfill the definition of a crux.
I kind of get that you want to express that the results of the coin flips are materially connected to the belief that the coin is fair. But the definition of a crux as stated doesn’t really express that kind of thing. Alternatively, if the definition is supposed to cover that kind of case, then the case where the belief is materially connected but doesn’t push the conclusion’s confidence in any direction ought to also be covered.
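The “all cruxes different at once” case can be made concrete under an assumed fair-vs-biased-coin model (the 0.8 heads bias and uniform prior are my illustrative assumptions): each single flip moves the posterior, yet inverting every flip at once can leave it exactly unchanged.

```python
# Sketch: fair coin (p = 0.5) vs an assumed heads-biased coin (p = 0.8),
# with an assumed uniform prior; both numbers are illustrative.

def posterior_fair(flips, p_biased=0.8, prior_fair=0.5):
    """P(coin is fair | observed flips), by Bayes' rule."""
    heads = flips.count('H')
    tails = len(flips) - heads
    like_fair = 0.5 ** len(flips)
    like_biased = p_biased ** heads * (1 - p_biased) ** tails
    num = like_fair * prior_fair
    return num / (num + like_biased * (1 - prior_fair))

seq = 'HTHTTHTHHT'                                   # 5 heads, 5 tails
inverted = ''.join('T' if f == 'H' else 'H' for f in seq)
one_changed = 'T' + seq[1:]                          # a single flip differs

# Each flip individually moves the confidence...
print(posterior_fair(seq) != posterior_fair(one_changed))   # True
# ...yet with every flip inverted the counts (5 heads, 5 tails) are
# identical, so the posterior is exactly the same:
print(posterior_fair(seq) == posterior_fair(inverted))      # True
```

So every flip satisfies the “moves the confidence” definition individually, while the joint change in path 3 moves nothing, which is the gap in the definition being pointed at.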
I furthermore started to doubt whether there are more hidden conceptual disagreements. To me, “why” is about the past and the causal history. However, the “why” asked here seems to concern the future, as in “what keeps you believing that thing” as opposed to “what made you adopt that belief”. I also realised that I think it should be easy for participants to reveal if their beliefs formed under questionable circumstances (and that this is not trivial, and not everybody has policies in this direction). Answering in the attitude of “what keeps you believing” makes one endorse those standards differently, and might lead to endorsements that would not exist without asking.
Disambiguations are hard to make exhaustive. I guess the focus of that disambiguation would be the “why” part. It can just get confusing whether we are talking about the level of what the speaker intends or what is offered for the hearer to interpret. I realised that a big part of the crux approach can be that the YOU is emphasised (there is no crux for objective facts; it requires subjective judgement). It could also make sense to emphasise the WOULD (we are about to actually change opinions and not just wave flags for our sides) or BELIEF (we don’t care about professing or side-picking, just what you think is the case). But it muddies the waters that there are contexts where any of the crucial parts here are de-emphasised in perfectly sensible activities (YOU: why would the general population have sympathy for your views; WOULD: for AIs, where we don’t know whether they exist, or their other attributes, or how they would take that; BELIEF: winning a debate where your position is picked at random at the start; WHY: decision by vote, possibly by irrational or populist voters).