One noteworthy update I made:
A central disagreement seems to be: If you see a person who looks obviously wrong about a thing, and you have a plausible story for them being politically motivated… is it more likely that:
a) their position is mostly explained via political motivation, or
b) their position is mostly explained via them having a very different model than you, built out of legitimate facts and theories?
It seemed like Jessica and Ben lean towards assuming A. I lean towards assuming B.
My reason is that many of the times I’ve seen someone be accused of A (or been accused of A myself), there’s been an explanation of a different belief/worldview that actually just seemed reasonable to me. People seem to have a tendency to jump to uncharitable interpretations of things, esp. from people who are in some sense competitors.
But, asking myself “what sort of evidence would lead me to an opposite prior?”, one thing that comes to mind is: if I saw people regularly shifting their positions in questionable ways that didn’t seem defensible. And it then occurred to me that if I’m looking at the median effective altruist, I think I totally see this behavior all the time. And I see this sort of behavior non-zero among the leaders of EA/x-risk/rationality orgs.
And this didn’t register as a big deal to me, cuz, I dunno, rank-and-file EA and rationalist newbies are going to have bad epistemics, shrug. And meanwhile EA leadership still seemed to have generally good epistemics on net (and/or be on positive trajectories for their epistemics).
But I can definitely imagine an order-of-experiences where I first observed various people having demonstrably bad epistemics, and then raising to attention the hypothesis that this was particularly troubling, and then forming a prior based on it, and then forming a framework built around that prior, and then interpreting evidence through that framework.
This isn’t quite the same as identifying a clear crux of mine – I still have the salient experiences of people clearly failing to understand each other’s deep models, and there still seem like important costs of jumping to the “motivated reasoning” hypothesis. So that’s still an important part of my framework. But imagining the alternate order-of-experiences felt like an important motion towards a real crux.
My model of politically motivated reasoning is that it usually feels reasonable to the person at the time. So does reasoning that is not so motivated. Noticing that you feel the view is reasonable isn’t even strong evidence that you weren’t doing this, let alone that others aren’t doing it.
This also matches my experience—the times when I have noticed I used politically motivated reasoning, it seemed reasonable to me until this was pointed out.
I agree with this, but it doesn’t feel like it quite addresses the thing that needs addressing.
[I started writing a reply here, and then felt like it was necessary to bring up the object-level disagreements to really disentangle anything.
I actually lean slightly towards “it would be good to discuss the object level of which people/orgs have confusing and possibly deceptive communication practices, but in a separate post, and taking a lot of care to distinguish what’s an accusation and what’s thinking out loud.”]
What makes you think A and B are mutually exclusive? Or even significantly anticorrelated? If there are enough very different models built out of legitimate facts and theories for everyone to have one of their own, how can you tell they aren’t picking them for political reasons?
Not saying they’re exclusive.
Note (not sure if you had this in mind when you made your comment): the OP comment here wasn’t meant to be an argument per se – it’s meant to articulate what’s going on in my mind and what sort of motions would seem necessary for it to change. It’s more descriptive than normative.
My goal here is to expose the workings of my belief structure, partly so others can help untangle things if applicable, and partly to try to demonstrate what doublecrux feels like when I do it (to help provide some examples for my current doublecrux sequence).
There are a few different (orthogonal?) ways I can imagine my mind shifting here:
A: increase my prior on how motivated people are, as a likely explanation of why they seem obviously wrong – even people-whose-epistemics-I-trust-pretty-well*.
B: increase my prior on the collective epistemic harm caused by people-whose-epistemics-I-trust, regardless of how motivated they are. (i.e. if people are concealing information for strategic reasons, I might respect their strategic reasons as valid, but still eventually think that this concealment is sufficiently damaging that it’s not worth the cost, even if they weren’t motivated at all)
C: refine the manner in which I classify people into “average epistemics” vs “medium epistemics” vs “epistemics I trust pretty well.” (For example, an easy mistake is to assume that because one person at an organization has good epistemics, the whole org must have good epistemics. I think I still fall prey to this more than I’d like.)
D: decrease my prior on how much I should assume people-whose-epistemics-I-trust-pretty-well are coming from importantly different background models, which might be built on important insights, or to which I should assign a non-trivial chance of being a good model of the world.
E: change my social/conversational policy – i.e. reduce the degree to which I advocate, in conversation, policies along the lines of “try to understand people’s background models before forming (or stating publicly) judgments about their degree of motivation.”
All of these are knobs that can be tweaked, rather than booleans to be flipped. And (hopefully obviously) this isn’t actually an exhaustive list of how my mind might change – I’m just trying to articulate some of the more salient options.
It seems plausible that I should do A, B, or C (but, I have not yet been persuaded that my current weights are wrong). It does not seem plausible currently that I should do D. E is sufficiently complicated that I’m not sure I have a sense of how plausible it is, but current arguments I’ve encountered haven’t seemed that overwhelming.
Clarification question: Is this default to B over A meant to apply to the population at large, or to people who are in our orbits?
It seems like your model here actually views A as more likely than B in general but thinks EA/rationality at higher levels constitutes an exception, despite your observation of many cases of A in that place.
I am specifically talking about EA/rationality at higher levels (i.e. people who have been around a long time, especially people who have read the Sequences or, ideally, who have worked through some kind of epistemological issue in public).
There’s never been much of a fence around EA/rationality space, so it shouldn’t be surprising that you can find evidence of people having bad epistemics if you go looking for it. (Or even if you’re just passively tracking the background rate of bad epistemics.)
From my perspective, it’s definitely a huge chunk of the problem here that people are coming from different ontologies and paradigms, weighing complicated tradeoffs against each other, and often making different judgment calls of “exactly which narrow target in between the rock and the hard place are you trying to hit?”
It might also be part of the problem that people are being motivated or deceptive.
But, my evidence for the former is “I’ve observed it directly” (at the very least, in the form of Ben/you/Jessica/Zack not understanding my paradigm despite 20 hours of discussion, and perhaps vice versa), and the evidence for the latter is AFAICT more like “base rates”.
(“But base rates tho” is actually a pretty good argument, which is why I think this whole discussion is real important)
When we talked on 28 June, it definitely seemed to me like you believed in the existence of self-censorship due to social pressure. Are you not counting that as motivated or deceptive, or have I misunderstood you very badly?
Note on the word “deceptive”: I need some word to talk about the concept of “saying something that has the causal effect of listeners making less accurate predictions about reality, when the speaker possessed the knowledge to not do so, and attempts to correct the error will be resisted.” (The part about resistance to correction is important for distinguishing “deception”-in-this-sense from simple mistakes: if I erroneously claim that 57 is prime and someone points out that it’s not, I’ll immediately say, “Oops, you’re right,” rather than digging my heels in.)
I’m sympathetic to the criticism that lying isn’t the right word for this; so far my best alternatives are “deceptive” and “misleading.” If someone thinks those are still too inappropriately judgey-blamey, I’m eager to hear alternatives, or to use a neologism for the purposes of a particular conversation, but ultimately, I need a word for the thing.
If an Outer Party member in the world of George Orwell’s 1984 says, “Oceania has always been at war with Eastasia,” even though they clearly remember events from last week, when Oceania was at war with Eurasia instead, I don’t want to call that deep model divergence, coming from a different ontology, or weighing complicated tradeoffs between paradigms. Or at least, there’s more to the story than that. The divergence between this person’s deep model and mine isn’t just a random accident such that I should humbly accept that the Outside View says they’re as likely to be right as me. Uncommon priors require origin disputes, but in this case, I have a pretty strong candidate for an origin dispute that has something to do with the Outer Party member being terrified of the Ministry of Love. And I think that what goes for subjects of a totalitarian state who fear being tortured and murdered also goes, in a much subtler form, for upper-middle-class people in the Bay Area who fear not getting invited to parties.
Obviously, this isn’t license to indiscriminately say, “You’re just saying that because you’re afraid of not getting invited to parties!” to any idea you dislike. (After all, I, too, prefer to get invited to parties.) But it is reason to be interested in modeling this class of distortion on people’s beliefs.
Judging a person as being misleading implies to me that I have a less accurate model of the world if I take what they say at face value.
Plenty of self-censorship isn’t of that quality. My model might be less accurate than the counterfactual model where the other person shared all the information to which they have access, but it doesn’t get worse through the communication.
There are words like ‘guarded’ that you can use for people who self-censor a lot.
Apologies. A few things to disambiguate and address separately:
1. In that comment I was referring primarily to discussions about the trustworthiness and/or systematic distortion-ness of various EA and rationalist orgs and/or leadership, which I had mentally bucketed as fairly separate from our conversation. BUT even in that context, “Only counterargument is base rates” is not a fair summary. I was feeling somewhat frustrated at the time I wrote that, but that’s not a good excuse. (The behavior I think I endorse most is trying to avoid continuing the conversation in a comment thread at all, but I’ve obviously been failing hard at that.)
2. My take on our prior conversation was more about “things that are socially costly to talk about, that are more like ‘mainstream politics’ than like ‘rationalist politics.’” Yes, there’s a large cluster of things related to mainstream politics and social justice where weighing in at all just feels like it’s going to make my life worse (this is less about not getting invited to parties and more about having more of my life filled with stressful conversations for battles that I don’t think are the best thing to prioritize fighting).
OK. Looking forward to future posts.
The word “self-deception” is often used for this.
The reason it’s still tempting to use “deception” is that I’m focusing on the effects on listeners rather than the self-deceived speaker. If Winston says, “Oceania has always been at war with Eastasia” and I believe him, there’s a sense in which we want to say that I “have been deceived” (even if it’s not really Winston’s fault, thus the passive voice).
Self-deception doesn’t imply other people aren’t harmed, merely that the speaker is deceiving themselves first before they deceive others. Saying “what you said to me was based on self-deception” doesn’t then imply that I wasn’t deceived, merely points at where the deception first occurred.
For instance, the Arbinger Institute uses the term “self-deception” to refer to when someone treats others as objects and forgets they’re people.
FWIW I think “deceptive” and “misleading” are pretty fine here (depends somewhat on context but I’ve thought the language everyone’s been using in this thread so far was fine)
I think the active ingredient in “there’s something resisting correction” has a flavor that isn’t quite captured by “deceptive” (“self-deceptive” is closer). I think the phrase that most captures this for me is “perniciously motivated,” or something like that.