Trying to follow the algorithm would lead to dismissal if the answer to step 2 were negative, which would often be destructive (or more constructive paths would be followed without trying to adhere to the algorithm).
Oftentimes just being curious about how the other thinks gets the ball rolling. The strategy outlined tries to avoid touching the other's mental machinery as much as possible while still changing the stance. It can be a problem if you get bogged down in irrelevant curiosities, but often the sidetracks can be more valuable than the original main objective.
The strategy asks the other to tell a story about how they would arrive at the new stance. But the inferential steps needed to construct that kind of story can be many. It works well when a point change in one belief has clearly visible consequences for other beliefs, but it becomes increasingly inapplicable when the consequences are hard to imagine or hard to predict. The onus of doing the cognitive work and fleshing out the details of adopting a new stance should fall on those who suggest it. Doing work only on the condition that it can be guaranteed beforehand to lead to progress makes people keep their minds far away from fields where guarantees can't be given.
Thanks for trying to repair communications and confirming how much sense I am making.
Of course, I’ve really appreciated your input.
I like using this formula as a guideline for introspection, and the overall purpose is understanding the other person (which is related to curiosity, but not my purpose).
A negative answer after step 2, "no, I would still be just as confident", helps focus the conversation on actual cruxes. However, I did have one person who, having already understood I was asking for cruxes, said a reason was a crux when it wasn't the complete reason ("yes, if God didn't give grace I wouldn't believe in him at all, but the Bible and 2000 years of history are also important").
Maybe if I were talking to someone else, they wouldn't be able to name the extra reasons, being more timid or less introspective. But I'm pretty good at noticing when someone doesn't react like "Oh, I've been 100% convinced and nothing is wrong with this logic." This skill is very useful, and isn't mentioned in the post.
The best use of this method is definitely drawing the picture, then using it to keep track of all the reasons, and the reasons for those reasons. It makes it so much easier for both of us to stay on track and remember what was said.
Okay, so those are the benefits and caveats of the method, though I'm confused by your "the strategy wants the other to tell a story on how they'd arrive at a new stance".

I don't understand this. If I believe in ghosts, and you use this method, the story would be how I would arrive at not believing in ghosts? Like just the negative of the original belief, not anything else new, right?
If so, then I don't think that story is very hard to tell if, after introspecting using this method, I figure out my reasons for believing in ghosts are flawed.
But maybe if the belief is very important, like a religious one, properly setting someone's expectations would be good. Like I might need to tell them "yes, you can still be a good person, be happy, have great friends, find love, etc." if they change their belief.
Was that your point?
You need to answer the question before introspecting so you don't have time to doubt your stance. You would need to assume or guess that the other implications would not be so out of whack as to make adopting the new stance implausible or impossible. If I declare there is no prospect of my position moving, the matter is declared moot and we don't discuss it.
I think there can be a big gap between how embedded a proposition is for each participant. Somebody who doesn't believe in ghosts can treat it like a stand-alone fact. But somebody who does believe will (or might) have it entangled with other beliefs. This effect is more pronounced the less anticipated the question is and the deeper it cuts. A ghost belief can be entangled with memories of fear of death. Those associations can be hard to articulate, yet they can have real effects on the positions held.
It is ambiguous what you refer to as "telling them". Offering reassurances without reasons would amount to a kind of "we are just separately doing intellectual stuff; there won't be any discussion-breaking forces invoked". The other option would be to argue that belief in the important things can still be justified even after changing the stance. That kind of guarantee can probably fail. One could argue that the other person could just adopt your belief system verbatim to be at least as prosperous as you are, but that would come with having to adopt your positions on everything. If holding a different opinion in some other area comes into cognitive dissonance with the new stance, that could be a psychological problem they would have to deal with that you do not. That is, there is a chance of a legitimate crisis of worldview after the discussion.
I guess the contrast in my mind is that argumentation takes the form of very small, very well-founded steps, where all doubt is resolved as soon as there is the smallest hint of it. In a mathematical proof, as you follow along, you should be convinced that each line is warranted by the previous one. Sometimes, when somebody assumes a lot of mathematical competency, they use fewer intermediate steps. Then you can say "I don't see how that follows from that", and the other person can expand the one step into multiple smaller steps. The method here seems such that the question "Would you adopt X if Y were not the case?" is not particularly amenable to examining in closer detail how it gets answered in the positive or negative. But I think there are a lot of hard or laborious cases where a lot of judgement needs to happen, and it happens not in the interactive space but hidden in the private space of one person's head.
I don't feel that was my point, but I think it cuts close to it. I think the method gets lucky in that treating the question as a short story or an isolated fact is commonly easy, but it has no guarantee that it will be easy, and no tools to tackle things when they are hard. Does the method offer any advice when there is no quick or clear "yes" or "no" answer to "would you believe X if Y were the case"?