(The following was originally the second half of this post. I was worried that I didn’t have time to really develop it fully, and meanwhile the half-baked version of it sort of undercut the post in ways I didn’t like. Putting it here for now. Eventually, hopefully, I’ll have all of this crystallized into a post that articulates what I think people actually should do about all this)
A problem with “sitting bolt upright in alarm” is that it’s not something you can sustainably do all the time. The point of elevated attention is to pay, well, more attention to things that are locally important. If you’re always paying maximal attention to one area, you’re
a) going to miss other important areas
b) probably going to stress yourself out in ways that are long-term unhelpful. (in particular if “elevated attention” not only uses all of your existing attention, but redirects resources you had previously been using for things other than attention)
Deliberate lying (in my circles) seems quite rare to me.
I’m less confident about non-lying patterns of deception. (Basically, everyone around me seems smart enough for “lying” to be a bad strategy. But other forms of active malicious deception might be going on and I’m not confident I’d be able to tell).
But, given my current rate of “detect deliberate lying” and “detect explicit malicious deception”, it does make sense to sit bolt upright in alarm whenever someone looks like they’re being deceptive. My detection of it, at least, is a rare event.
The next steps I follow after “stop and pay attention” are:
Check that you understood what the person said, and verify that what happened was lying rather than “having a very different model than you.”
(I think the latter is common enough and the former rare enough that this should usually be your prior)
If they definitely, deliberately lied, and they don’t have some kind of extenuating circumstance, it’s quite reasonable for there to be major social repercussions.
If someone is willing to lie to my face (and is reasonably competent at it), this not only means I can’t trust them with anything important, but it means I can’t trust them when they say they’ve changed. I’d need to have a compelling reason for why they lied in the first place, and why they thought it was okay then but wrong now, or wouldn’t be okay in the future, before I could make serious plans with them.
By contrast, a few other activities seem much more common. You can’t sit bolt upright in alarm whenever these happen, because then you’d be sitting bolt upright in alarm all the time and stressing your body out.
The world is full of terrible things where the “appropriate” level of freak-out just isn’t very helpful. Your grim-o-meter is for helping you buckle down and run a sprint, not for running a marathon. But I think there’s something useful, occasionally, about letting yourself go into a heightened stress state, where you fully feel the importance. Especially if you think a problem might be so important that this particular marathon is the one that you’re going to run.
Activities that seem common, and concerning:
Motivated reasoning
Humans are bad at lying [citation needed?] but really good at making up convenient stories that they (at least mostly) believe. And they are pretty good at selectively ignoring things when convenient.
I’d prefer to work exclusively with people who are good at avoiding rationalization. At some point, I’d like the internal sanity waterline of the rationalsphere to rise to the level where I treat motivated reasoning as a rare event that warrants Sitting Bolt Upright in Alarm. But we’re not at that level yet. I’m not at that level yet.
Low-key deception (conscious or otherwise)
I think it’s common for people to end up believing their own marketing – looking for the largest plausible outcome that could justify their plan, and avoiding looking at the facts that’d make their plans seem least promising.
I think there’s sometimes a slipperiness where people avoid letting their opinions get pinned down, and end up presenting different facets of themselves to different people at different times. And moreover, they present different facets of themselves to themselves at different times, without noticing the disconnect.
I think it’s quite likely that we should be coordinating on a staghunt to systematically fix the above two issues. I think such a staghunt looks very different from the ones you do to address deliberate deception.
If someone deliberately deceives me, the issue is that I can’t even trust them on the meta-level.
If someone is rationalizing and believing their own marketing, I think the issue is “rationalizing and believing your own marketing is the default state, and it requires a lot of skills built on top of each other in order to stop, and there are a lot of complicated tradeoffs you’re making along the way, and this will take a long time even for well-intentioned people.”
And meanwhile a large chunk of the problem is “people have very different models and ontologies that output very different beliefs and plans”, so a lot of things that look like rationalization are just very different models.
But “just” very different models raises the question of why they (and you) prefer such different models.
It’s motivated cognition all the way down. Choice of model is subject to the same biases as the object-level untruths. In fact, motivated use of less-than-useful models is probably the MOST common case I encounter of the kinds of self- and other-deception we’re discussing.
I think some differences of models are due to motivated cognition, but I think many or most model differences come down more to the different problems that you’re solving.
For example, I had many arguments with habryka about whether there should be norms around keeping the office clean that involved continuous effort on the part of individuals. His opinion was that you should just solve the problem with specialization and systemization. I think motivated cognition has played a role in each of our models, but there were legitimate reasons to prefer one over the other, and those reasons were entangled with each other in messy ways that required several days of conversation to untangle. (See “Hufflepuff Leadership and Fighting Entropy” for some details about the models, and hopefully an upcoming blogpost about resolving disagreements when you don’t share ontologies)