Agreed that a commenter who chooses to hold onto a mistaken belief rather than admit error is being imperfectly rational, and agreed that we are under no obligation to be “accommodating to their aversion.”
I’m more confused about the rest of this. Perhaps a concrete example will clarify my confusion.
Suppose Sam says something that’s clearly wrong, and suppose I have a choice between two ways of framing the counterarguments. One frame (F1) gives Sam a way of changing their mind without having to admit they’re wrong. The other (F2) does not.
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You seem to agree that Sam is possible, and that changing Sam’s mind is worthwhile. And it seems clear that F1 has a better chance of changing Sam’s mind than F2 does. (Confirm?) So I think we would agree that, in general, using F1 rather than F2 is the better choice.
But, you say, on Less Wrong things are different. Here, using F2 rather than F1 is more likely to help Sam solve their “bigger rationality problems,” and therefore the preferred choice. (Confirm?)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
You mention that to help Sam solve those problems, we have to “allow ourselves to notice” those problems. You also suggest that Sam just isn’t a good fit for the site at all, or that they need to lurk more, or that they need to read the appropriate posts. I can sort of see how some of those things might be part of an answer to my question, but it’s not really clear to me what that answer is.
Can you clarify that?
(Incidentally, it seems to me that that post is all about the fact that people are more likely to change their minds when it’s emotionally acceptable to do so, which is precisely what I’m talking about. But sure, I’m happy to stop using the phrase if you think it’s misleading.)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
F2 forces Sam to admit they were wrong. Not being able to admit you are wrong is a rationality problem, because not all truths are presented as F1 counterarguments—some, including experimental results, are F2 counterarguments to your state of mind. So F1 doesn’t attempt to solve the aversion to being wrong; F2 does.
The question of whether F2's attempt succeeds often enough to be worth it is another question, one I don’t have any numbers or impressions on.
I said:
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You said:
F2 forces Sam to admit they were wrong.
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
If you’re rejecting my supposition… that is, if you’re asserting that Sam as described just doesn’t exist, or isn’t worth discussing… then I agree with you. But I explicitly asked you if that was what you meant, and you said it wasn’t.
If you’re accepting my supposition… well, that suggests that even though Sam won’t change his mind under F2, F2 is good because it makes Sam change his mind. That’s just nonsense.
If there’s a third possibility, I don’t see what it is.
Yeah, no, my idea was that F2 forces Sam to admit they were wrong, given that they change their mind. When considering the case of ‘on LessWrong’, I skipped the bit that says that Sam does not change their mind. Oops. Yeah, I don’t think there are many Sams on LessWrong.
OK, glad we cleared that up.
Moving on… let me requote the line that started this thread:
frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
That seems to me to be a pretty good summary of the strategy I used here… I summarized the position I saw you as arguing, then went on to explain what was wrong with that position.
Looking at the conversation, that strategy seems to have worked well… at least, it got us to a place where we could resolve the disagreement in a couple of short moves.
But you seem to be saying that, when dealing with people as rational as the typical LWer, it’s not a good strategy.
So, OK: what ought I to have said instead, and how would saying that have worked better?
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
This by itself would have worked, and to the extent it could be described as working better, it would have punished me for not properly constructing your model in my head, something which I consider required for a response.
edit: the differences are very slight.