You seem to be asserting that if I give you an out you will take it and change your mind without admitting you were wrong, but if I don’t give you an out you will change your mind and admit you were wrong.
Which, OK, good for you. Of course, even better would be to not take the out if offered, and admit you were wrong even when you aren’t forced to… but still, the willingness to admit error when you don’t have a line of retreat is admirable.
The problem arises if we’re dealing with people who lack that willingness… who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds.
Are you suggesting that such people don’t exist? That trying to change their minds isn’t worthwhile? Something else?
You said:

You seem to be asserting that if I give you an out you will take it and change your mind without admitting you were wrong, but if I don’t give you an out you will change your mind and admit you were wrong.
I do not assert that, and I’m not just saying that because you proved me wrong but gave me an out by saying “You seem to be asserting”. :)
If I, personally, have been convinced that I was wrong about something, then I’ll say so, whether or not I have the option of pretending I actually meant something else. And that’s certainly encouraged by LW’s atmosphere (and it’s been explicitly discussed and advocated here at times). What I was disagreeing with was erratio’s implication that giving people that option is a polite and desirable thing to do.
You say that there are people “who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds,” and you are correct, and I also don’t claim that trying to change their minds isn’t worthwhile. But on Less Wrong, if a commenter would prefer to (and would be able to) actually hold onto a mistaken belief rather than acknowledge having been mistaken, then they have bigger rationality problems than just being wrong about some particular question; helping them solve that (which requires allowing ourselves to notice it) seems more important.

I can’t say I’ve actually seen much of this here, but if we observed that some user frequently abandoned debates that they seemed to be losing, and later expressed the same opinions without acknowledging the strong counterarguments that they had previously ignored… then I’d just say that Less Wrong may not be a good fit for them (or that they need to lurk more and/or read more of things like “How To Actually Change Your Mind”, etc.). I would not say that we should have been more accommodating to their aversion to admitting error.
(Also, we should stop using the phrase “line of retreat” as we’re using it here, because it will make people think of the post “Leave a Line of Retreat” even though we’re talking about something pretty different.)
Agreed that a commenter who chooses to hold onto a mistaken belief rather than admit error is being imperfectly rational, and agreed that we are under no obligation to be “accommodating to their aversion.”
I’m more confused about the rest of this. Perhaps a concrete example will clarify my confusion.
Suppose Sam says something that’s clearly wrong, and suppose I have a choice between two ways of framing the counterarguments. One frame (F1) gives Sam a way of changing their mind without having to admit they’re wrong. The other (F2) does not.
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You seem to agree that Sam is possible, and that changing Sam’s mind is worthwhile. And it seems clear that F1 has a better chance of changing Sam’s mind than F2 does. (Confirm?) So I think we would agree that in general, using F1 rather than F2 is worthwhile.
But, you say, on Less Wrong things are different. Here, using F2 rather than F1 is more likely to help Sam solve their “bigger rationality problems,” and therefore the preferred choice. (Confirm?)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
You mention that to help Sam solve those problems, we have to “allow ourselves to notice” those problems. You also suggest that Sam just isn’t a good fit for the site at all, or that they need to lurk more, or that they need to read the appropriate posts. I can sort of see how some of those things might be part of an answer to my question, but it’s not really clear to me what that answer is.
Can you clarify that?
(Incidentally, it seems to me that that post is all about the fact that people are more likely to change their minds when it’s emotionally acceptable to do so, which is precisely what I’m talking about. But sure, I’m happy to stop using the phrase if you think it’s misleading.)
You said:

So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
F2 forces Sam to admit they were wrong. Not being able to admit you are wrong is a rationality problem, because not all truths come packaged as F1 counterarguments: some, including experimental results, confront your beliefs in F2 form, offering no such out. So F1 doesn’t attempt to solve the aversion to being wrong; F2 does.
Whether F2’s attempt succeeds often enough to be worth it is a separate question, one I don’t have any numbers or impressions on.
I said:

Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You said:
F2 forces Sam to admit they were wrong.
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
If you’re rejecting my supposition… that is, if you’re asserting that Sam as described just doesn’t exist, or isn’t worth discussing… then I agree with you. But I explicitly asked you if that was what you meant, and you said it wasn’t.
If you’re accepting my supposition… well, that suggests that even though Sam won’t change their mind under F2, F2 is good because it makes Sam change their mind. That’s just nonsense.
If there’s a third possibility, I don’t see what it is.
Yeah, no, my idea was that F2 forces Sam to admit they were wrong, given that they change their mind. When considering the “on Less Wrong” case, I skipped the bit that says Sam does not change their mind. Oops. Yeah, I don’t think there are many Sams on Less Wrong.
OK, glad we cleared that up.

Moving on… let me requote the line that started this thread:
frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
That seems to me to be a pretty good summary of the strategy I used here… I summarized the position I saw you as arguing, then went on to explain what was wrong with that position.
Looking at the conversation, that strategy seems to have worked well… at least, it got us to a place where we could resolve the disagreement in a couple of short moves.
But you seem to be saying that, when dealing with people as rational as the typical LWer, it’s not a good strategy.
So, OK: what ought I have said instead, and how would saying that have worked better?
You said:

If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
This by itself would have worked, and to the extent it could be described as working better, it would have punished me for not properly constructing your model in my head, something which I consider required for a response.
edit: the differences are very slight.