Treat it as a thing that might or might not be true, like other things? Sometimes it’s hard to tell whether it’s true, and in those cases it’s useful to be able to say something like “well, maybe, can’t know for sure”.
I’m trying to understand why this norm seems so crazy to me...
I definitely do something very much like this with people I’m close with, in private. I was once in a heated multi-person conversation, and I politely excused myself and a friend to step into another room. There, I looked the friend in the eye and said, “It seems to me that you’re rationalizing [based on x evidence]. Are you sure you really believe what you’re saying here?”
And friends have sometimes helped me in similar ways: “the things that you’re saying don’t quite add up...”
(Things like this happen more often these days, now that rationalists have imported more Circling norms of sharing feelings and stories. Notably these norms include a big helping of NVC norms: owning your experience as your own, and keeping interpretation separate from observation.)
All things considered, I think this is a pretty radical move. But it seems like it depends a lot on the personal trust between me and the other person. I would feel much less comfortable with that kind of interaction with a random stranger, or in a public space.
Why?
Well, for one thing, if I’m having a fight with someone, having someone else question my motivations can cause me to lose ground in that fight. It can be an aggressive move, used to undercut the arguments one is trying to make.
For another, engaging with a person’s psychological guts like that is intimate, and vulnerable. I am much less likely to be defensive if I trust that the other person is sincerely looking out for my best interests.
I guess I feel like it’s basically not any of your business what’s happening in my mind. If you have an issue with my arguments, you can attack those; those are public. And you are, of course, free to have your own private opinion about my biases, but only the actual mistakes in reasoning that I make are in the common domain for you to correct.
In general, it seems like a bad norm to have “psychological” evidence be admissible in discourse, because it biases disagreements toward whoever is more charismatic or has more rhetorical skill in pointing out biases, as opposed to whoever is more correct.
The Arbital page on Psychoanalyzing is very relevant.
Also, it just doesn’t seem like it helps very much. “I have a hypothesis that you’re rationalizing.” The other party responds, “Ok. Well, I think my position is correct,” and then they go back to the object level (maybe with one of them more defensive). I can’t know what’s happening in your head, so I can’t really call you out on what’s happening there, or enforce norms there. [I would want to think about it more, but I think that might be a crux for me.]
. . .
Now I’m putting those feelings next to my sense of what we should do when someone like Gleb Tsipursky is in the mix.
I think all of the above still stands. It is inappropriate for me to attack him at the level of his psychology, as opposed to pointing to specific bad actions (including borderline actions) and telling him to stop, and, if that fails, telling him that he is no longer welcome here.
This was mostly for my own thinking, but I’d be glad to hear what you think, Jessica.
The concept of “not an argument” seems useful; “you’re rationalizing” isn’t an argument (unless it has evidence accompanying it). (This handles point 1.)
I don’t really believe in tabooing discussion of mental states on the basis that they’re private; that seems like being intentionally stupid and blind, and it puts a (low) ceiling on how much sense can be made of the world. (Truth is entangled!) Of course it can derail discussions, but again, “not an argument”. (Eliezer’s post says it’s “dangerous” without elaborating; that’s basically giving a command rather than a model, which I’m suspicious of.)
There’s a legitimate concern about blame/scapegoating, but things can be worded to avoid that. (I think Wei did a good job here, noting that the intention is probably subconscious.)
With someone like Gleb, it’s useful to be able to point out to at least some people (possibly including him) that he’s taking stupid/harmful actions repeatedly, in a pattern that suggests optimization. That way people can build a model of what’s going on (which HAS to include mental states, since they’re a causally very important part of the universe!) and take appropriate action. If you can’t talk about adversarial optimization pressures, you’re probably owned by them (and being owned by them would lead to not feeling safe talking about them).