I don’t understand the point you’re making in your first two paragraphs. Could you explicitly relate it to finding cruxes, and say what you specifically agree or disagree with?
I did understand the Al part though! I never claimed that he was being purposely misleading, but I did want it to come across as “Al is giving a reason for his belief that only accounts for <10% of his confidence”, or “Al isn’t giving the main reason for his belief that accounts for the most probability”.
I agree it can account for a smaller probability, and this is mentioned in Ex. 1 as what’s subtly wrong with my phrasing.
“Do you believe in the Bible now?” asks for a positive belief in the Bible, while “You were wrong to dismiss the Bible as impossible” does not entail that you ought to believe in it.
I guess there are multiple facets to the “process” point. The first is that the situation requires familiarity with the claims to be overwhelming, so that only the judgement outcome is a matter of opinion. But there are many processes where our early opinion shapes how, and how much, information we glean from the object of interest. If you close a book early, you don’t know what the back half contains. If a trial wrongly fails to hear a witness, you cannot effectively cure that mistake without cross-examining the witness.
The second aspect of the “process” point is comparable cognitive work. Agents are not typically logically omniscient, and to the extent they have not processed part of the logical space, they can’t really be blamed for it. Thus if we argue, we want to argue over judgement actually exercised, not judgement that could have been exercised. The situation can be read as implicitly assuming or endorsing that Al should have formed an opinion on the plausibility of every single claim the Bible contains (in order to be sure that the unexamined claims do not contain a point of higher disagreement).
Say there is a freedom-of-speech case, and the court must first decide whether the speech was First Amendment protected speech, and only if it was, whether there is a compelling state interest. If the court finds that the speech was not protected, it doesn’t have to take a position on whether a compelling state interest exists; that question would be moot. If a higher court decides that the speech actually is protected, then the further question is no longer moot. If you asked whether the speech not being protected is a true reason for the verdict, the logic presented here would ask whether the orders of the court would have been different if that facet were different. The answer is not “yes, the court would have provided protection” but rather “it depends on further facts”, and not because the outcome would be up to chance. But just because the decision doesn’t flip on that fact doesn’t make it an untrue reason (and it is in fact 100% dominant, in that it doesn’t “share” the weight with other factors).
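To make the mootness structure concrete, here is a minimal sketch (the function and the case fields are hypothetical names, just to show the early-return shape of the reasoning):

def decide(case):
    # Threshold question first: if the speech is not protected,
    # the compelling-interest question is never reached (it stays moot).
    if not case["is_protected_speech"]:
        return "ruling against the speaker"
    # Only reached once the speech is found to be protected.
    if case["compelling_state_interest"]:
        return "ruling against the speaker"
    return "ruling for the speaker"

Flipping is_protected_speech from False to True does not by itself flip the ruling; the outcome now depends on the previously moot compelling_state_interest field. That is why the honest counterfactual answer is “it depends on further facts” rather than a reversed verdict.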
If Al were following a decision procedure like:
for paragraph in book:
    for sentence in paragraph:
        if sentence is astronomically implausible:
            think the book is false
            close the book
    if paragraph is astronomically implausible:
        think the book is false
        close the book
if book has corroboration from elsewhere:
    think the book is true
else:
    think the book is false
Then if he makes a judgement error in evaluating a text unit’s plausibility, the correct cure would be to unmoot the rest of the sentences rather than to arrive at the opposite conclusion (intelligence is not reversed stupidity).
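A runnable sketch of that cure, with the plausibility judgement as a stand-in function I invented (none of these names are from the post):

def is_astronomically_implausible(sentence):
    # Stand-in for Al's judgement call.
    return "sun stood still" in sentence

def has_corroboration(book):
    # Stand-in for the outside-corroboration check.
    return False

def judge_book(book, corrected=()):
    # book: a list of sentences; corrected: sentences whose
    # implausibility verdict has since been overturned.
    for sentence in book:
        if sentence in corrected:
            continue  # the fixed judgement unmoots what follows
        if is_astronomically_implausible(sentence):
            return False  # close the book; the rest stays unexamined
    return has_corroboration(book)

book = ["the sun stood still", "a census was taken"]
judge_book(book)                                      # False, via the judgement error
judge_book(book, corrected={"the sun stood still"})   # still False, now via corroboration

Correcting the one bad judgement unmoots the rest of the book, but the verdict can still come out false for other, legitimate reasons; it does not automatically flip.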
I think you’re saying:
Someone might not know all the relevant information, or all the logical implications, and it might be good to encourage them to read more information or think through more implications(?)
Regardless, I think using the recursive crux-finding algorithm given in this post solves any of these issues in a real-life conversation. Are you claiming that it doesn’t?
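For concreteness, here is roughly how I picture that recursion (a hypothetical sketch, since the post states the algorithm in prose; every name below is invented for illustration):

def find_cruxes(belief, ask):
    # ask(question) returns the other person's answer as a string.
    reason = ask(f"Why do you believe {belief}? ")
    if not reason:
        return []  # no articulable reason left, stop here
    still = ask(f"If {reason} turned out to be false, would you still be just as confident that {belief}? ")
    if still.lower().startswith("yes"):
        return []  # not a crux: the belief doesn't hinge on this reason
    # It is a crux, so recurse into the reason itself.
    return [reason] + find_cruxes(reason, ask)

# e.g. find_cruxes("ghosts exist", ask=input) walks the chain interactively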
Trying to follow the algorithm would lead to dismissal if the answer at step 2 was negative, which would often be destructive (or more constructive paths would be followed without trying to adhere to the algorithm).
Oftentimes just being curious about how the other thinks gets the ball rolling. The strategy outlined tries to avoid touching the foreign mental machinery as much as possible while still changing the stance. It can be a problem if you get bogged down in irrelevant curiosities. But often the sidetracks can be more valuable than the original main objective.
The strategy wants the other person to tell a story about how they would arrive at the new stance. But the inferential steps needed to get to that kind of story could be many. It works well when a point change in one belief has clearly visible consequences for other beliefs. But it becomes increasingly inapplicable when it is hard to imagine the consequences, or when the consequences are hard to predict. The onus of doing the cognitive work, of adding the details of adopting a new stance, would fall on those who suggest it. Doing work only on the condition that it can be guaranteed beforehand to lead to progress makes people keep their minds far away from fields where guarantees can’t be given.
Thanks for trying to repair communications and for confirming how much sense I am making.

Of course, I’ve really appreciated your input.
I like using this formula as a guideline for introspection, and the overall purpose is understanding the other person (which is related to curiosity, though curiosity isn’t my purpose).
A negative answer at step 2, “no, I would still be just as confident”, helps focus the conversation on actual cruxes. However, I did have a guy, having already understood I was asking for cruxes, say that a reason was a crux when it wasn’t the complete reason (“yes, if God didn’t give grace I wouldn’t believe in him at all, but the Bible and 2000 years of history are also important”).
Maybe if I were talking to someone else, they wouldn’t be able to name the extra reasons, being more timid or less introspective. But I’m pretty good at noticing when someone doesn’t react like “Oh, I’ve been 100% convinced and nothing is wrong with this logic”. This skill is very useful, and isn’t mentioned in the post.
The best use of this method is definitely drawing the picture: using it to keep track of all the reasons, and the reasons for reasons. It makes it so much easier for both of us to stay on track and remember what was said.
Okay, so those are the benefits and caveats of the method as I see them, though I’m confused by your “the strategy wants the other to tell a story about how they’d arrive at the new stance”.
I don’t understand this. If I believe in ghosts, and you use this method, the story would be how I would arrive at not believing in ghosts? Like just the negation of the original belief, nothing else new, right?
If so, then I don’t think that story is very hard to tell if, after introspecting using this method, I figure out my reasons for believing in ghosts are flawed.
But maybe if the belief is very important, like a religious one, properly setting someone’s expectations would be good. Like I might need to tell them “yes, you can still be a good person, be happy, have great friends, find love, etc.” if they change their belief.

Was that your point?
You need to answer the question before introspection, so you don’t have time to doubt your stance. You would need to assume or guess that the other implications would not be so out of whack as to make it implausible or impossible to adopt the new stance. If I declare no prospect of my position moving, the thing is declared moot and we don’t discuss.
I think there can be a big gap between how embedded a proposition is for each participant. Somebody who doesn’t believe in ghosts can treat it like a stand-alone fact. But somebody who does believe will (or might) have it entangled with other beliefs. This effect is more pronounced the less anticipated the question is and the deeper it cuts. A ghost belief can be entangled with memories of fear of death. Those associations can be hard to articulate, yet they can have real effects on positions held.
It is ambiguous what you mean by “telling them”. Giving reassurances without reasons would amount to a kind of “we are just separately doing intellectual stuff, there won’t be any discussion-breaking forces invoked”. The other option would be to argue that belief in the important things can be justified even after changing the stance. That kind of guarantee can probably fail. One could argue that the other person could just adopt your belief system verbatim to be at least as prosperous as you are. But that would come with having to adopt your positions on everything. If holding a different opinion in some other field creates cognitive dissonance with the new stance, that could be a psychological problem they would have to deal with that you do not have to. That is, there is a chance of a legitimate crisis of worldview after the discussion.
I guess the contrast in my mind is argumentation that takes the form of very small, very well-founded steps, where all doubt is resolved as soon as there is the smallest hint of it. In a mathematical proof, as you follow along, you should be convinced that each line is warranted by the previous line. Sometimes, when somebody assumes a lot of mathematical competency, they use fewer midsteps. Then you can say “I don’t see how that follows from that” and the other person can expand the one step into multiple smaller steps. The method here seems such that the question “Would you adopt X if Y was not the case?” is not particularly amenable to going into closer detail about how it is answered in the positive or negative. But I think there are a lot of hard/laborious cases where a lot of judgement needs to happen, and it happens not in the interactive space but hidden in the private space of one person’s head.
I don’t feel that was my point, but I think it cuts close to the space. I think the method currently gets lucky in that treating the question as a short story or an isolated fact is commonly easy, but it has no guarantee that it will be easy, and no tools to tackle things when they are hard. Does the method offer any advice when there is no quick or clear “yes” or “no” answer to “would you believe X if Y were the case”?