> When conscious intent is selectively discouraged more than unconscious intent, the result is rule by unconscious intent. Those who can conveniently forget, who can maintain narcissistic fantasies, who can avoid introspection, who can be ruled by emotions with hidden causes, will be the only ones able to deceive (or otherwise to violate norms) blamelessly.
Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or “free from or not deserving blame”, only that it will be blamed less.
(I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn’t say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right? And do you think this instance also falls under “lying”?)
I think conscious intent being selectively discouraged more than unconscious intent can make sense for several reasons:
1. Someone deceiving with conscious intent can apply more compute/intelligence and other resources to optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
2. Deceiving with conscious intent implies that the person endorses lying in that situation, which means you probably need to do something substantially different to dissuade that person from lying in a similar situation in the future, compared to someone deceiving with unconscious intent. In the latter case, it might suffice to diplomatically (e.g., privately) bring up the issue to that person's conscious awareness, so they can consciously override their unconscious motivations.
3. Conscious lies tend to be harder to detect (due to more optimizing power applied towards creating the appearance of truth). Economics research into optimal punishment suggests that (all else equal) crimes that are harder to detect should be punished more (a short deterrence sketch follows this list).
4. Unconscious deception is hard to distinguish from innocent mistakes. If you try to punish what you think are cases of unconscious deception, you'll end up making a lot of people feel like they were punished unfairly, either because they're truly innocent, or because they're not consciously aware of any deceptive intent and therefore think they're innocent. You inevitably make a lot of enemies, either of you personally or of the norm you're proposing.
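(A minimal way to make the deterrence logic in point 3 concrete, assuming a standard Becker-style expected-penalty model; the notation here is illustrative and not from the original comment. Suppose a would-be deceiver gains $g$ from a lie that is detected with probability $p$ and, if detected, punished with severity $F$. Deterrence requires the expected penalty to be at least the gain:

$$p \cdot F \ge g \quad\Longrightarrow\quad F \ge \frac{g}{p},$$

so, holding the gain fixed, a lower detection probability $p$, as with well-optimized conscious lies, calls for a larger punishment $F$ to maintain the same deterrent effect.)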
(There are some issues in the way I stated points 1-4 above that I can see but don't feel like spending more time fixing. I would rather spend my time on other topics, but nobody else is bringing up these points, so I feel like I have to, given how much the parent comment has been upvoted.)
> Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or “free from or not deserving blame”, only that it will be blamed less.
Yes, I was speaking imprecisely. A better phrasing is “when only conscious intent is blamed, …”
> you seem to be proposing a norm where people do state such beliefs freely. Is that right?
Yes. (I think your opinion is correct in this case)
> And do you think this instance also falls under “lying”?
It would fall under hyperbole. I think some but not all hyperboles are lies, and I weakly think this one was.
Regarding the 4 points:
I think 1 is true
2 is generally false (people dissuaded from unconsciously lying once will almost always keep unconsciously lying; not lying to yourself is hard and takes work; and someone who’s consciously lying can also stop lying when called out privately if that’s more convenient)
3 is generally false: people who are consciously lying will often subconsciously give signals that they are lying that others can pick up on (e.g. seeming nervous, taking longer to answer questions), compared to people who subconsciously lie, who usually feel safer, as there is an internal blameless narrative being written constantly.
4 is irrelevant due to the point about conscious/unconscious not being a boundary that can be pinned down by a justice process; if you’re considering this you should mainly think about what the justice process is able to pin down rather than the conscious/unconscious split.
In general I worry more about irrational adversariality than rational adversariality, and I especially worry about pressures towards making people have lower integrity of mind (e.g. pressures to destroy one’s own world-representation). I think someone who worries more about rational adversariality could more reasonably worry more about conscious lying than unconscious lying. (Still, that doesn’t tell them what to do about it; telling people “don’t consciously lie” doesn’t work, since some people will choose not to follow that advice; so a justice procedure is still necessary, and will have issues with pinning down the conscious/unconscious split)
> I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn’t say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right?
> Yes. (I think your opinion is correct in this case)
Wow. Thanks for saying so explicitly; I wouldn’t have guessed that, and am surprised. How do you imagine that it plays out, or how it properly ought to play out, when someone makes an accusation / insinuation about another person like this?
Treat it as a thing that might or might not be true, like other things? Sometimes it’s hard to tell whether it’s true, and in those cases it’s useful to be able to say something like “well, maybe, can’t know for sure”.
I’m trying to understand why this norm seems so crazy to me...
I definitely do something very much like this with people I’m close with, in private. I was once in a heated multi-person conversation, and politely excused myself and a friend to step into another room. There, I looked the friend in the eye and said, “It seems to me that you’re rationalizing [based on x evidence]. Are you sure you really believe what you’re saying here?”
And friends have sometimes helped me in similar ways: “the things that you’re saying don’t quite add up...”
(Things like this happen more often these days, now that rationalists have imported more Circling norms of sharing feelings and stories. Notably these norms include a big helping of NVC norms: owning your experience as your own, and keeping interpretation separate from observation.)
All things considered, I think this is a pretty radical move. But it seems like it depends a lot on the personal trust between me and the other person. I would feel much less comfortable with that kind of interaction with a random stranger, or in a public space.
Why?
Well for one thing, if I’m having a fight with someone, having someone else question my motivations can cause me to lose ground in the fight. It can be an aggressive move, used to undercut the arguments that one is trying to make.
For another, engaging with a person’s psychological guts like that is intimate, and vulnerable. I am much less likely to be defensive if I trust that the other person is sincerely looking out for my best interests.
I guess I feel like it’s basically not any of your business what’s happening in my mind. If you have an issue with my arguments, you can attack those; those are public. And you are, of course, free to have your own private opinion about my biases, but only the actual mistakes in reasoning that I make are in the common domain for you to correct.
In general, it seems like a bad norm to have “psychological” evidence be admissible in discourse, because it biases disagreements towards whoever is more charismatic / has more rhetorical skill in pointing out biases, as opposed to the person who is more correct.
The Arbital page on Psychoanalyzing is very relevant.
Also, it just doesn’t seem like it helps very much. “I have a hypothesis that you’re rationalizing.” The other party is like, “Ok. Well, I think my position is correct.” and then they go back to the object level (maybe with one of them more defensive). I can’t know what’s happening in your head, so I can’t really call you out on what’s happening there, or enforce norms there. [I would want to think about it more, but I think that might be a crux for me.]
. . .
Now I’m putting those feelings next to my sense of what we should do when one has someone like Gleb Tsipursky in the mix.
I think all of the above still stands. It is inappropriate for me to attack him at the level of his psychology, as opposed to pointing to specific bad actions (including borderline actions) and telling him to stop, and, if that fails, telling him that he is no longer welcome here.
This was mostly for my own thinking, but I’d be glad to hear what you think, Jessica.
The concept of “not an argument” seems useful; “you’re rationalizing” isn’t an argument (unless it has evidence accompanying it). (This handles point 1)
I don’t really believe in tabooing discussion of mental states on the basis that they’re private; that seems like being intentionally stupid and blind, and puts a (low) ceiling on how much sense can be made of the world. (Truth is entangled!) Of course it can derail discussions, but again, “not an argument”. (Eliezer’s post says it’s “dangerous” without elaborating; that’s basically giving a command rather than a model, which I’m suspicious of.)
There’s a legitimate concern about blame/scapegoating but things can be worded to avoid that. (I think Wei did a good job here, noting that the intention is probably subconscious)
With someone like Gleb it’s useful to be able to point out to at least some people (possibly including him) that he’s doing stupid/harmful actions repeatedly in a pattern that suggests optimization. So people can build a model of what’s going on (which HAS to include mental states, since they’re a causally very important part of the universe!) and take appropriate action. If you can’t talk about adversarial optimization pressures you’re probably owned by them (and being owned by them would lead to not feeling safe talking about them).
> Someone deceiving with conscious intent can apply more compute/intelligence and other resources to optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
> Unconscious deception is hard to distinguish from innocent mistakes.
Surely someone consciously intending to deceive can apply some of that extra compute to making it harder to distinguish their behavior from an innocent mistake.