Clarification question: Is this default to B over A meant to apply to the population at large, or for people who are in our orbits?
It seems like your model here actually views A as more likely than B in general but thinks EA/rationality at higher levels constitutes an exception, despite your observation of many cases of A in that place.
I am specifically talking about EA/rationality at higher levels (i.e. people who have been around a long time, especially people who read the sequences or ideally who have worked through some kind of epistemological issue in public)
There’s never been much of a fence around EA/rationality space, so it shouldn’t be surprising that you can find evidence of people having bad epistemics if you go looking for it. (Or even if you’re just passively tracking the background rate of bad epistemics.)
From my perspective, it’s definitely a huge chunk of the problem here that people are coming from different ontologies and paradigms, weighing complicated tradeoffs against each other, and often making different judgment calls of “exactly which narrow target between the rock and the hard place are you trying to hit?”
It might also be part of the problem that people are being motivated or deceptive.
But, my evidence for the former is “I’ve observed it directly” (at the very least, in the form of Ben/you/Jessica/Zack not understanding my paradigm despite 20 hours of discussion, and perhaps vice versa), and the evidence for the latter is AFAICT more like “base rates”.
(“But base rates tho” is actually a pretty good argument, which is why I think this whole discussion is real important)
When we talked on 28 June, it definitely seemed to me like you believed in the existence of self-censorship due to social pressure. Are you not counting that as motivated or deceptive, or have I misunderstood you very badly?
Note on the word “deceptive”: I need some word to talk about the concept of “saying something that has the causal effect of listeners making less accurate predictions about reality, when the speaker possessed the knowledge to not do so, and attempts to correct the error will be resisted.” (The part about resistance to correction is important for distinguishing “deception”-in-this-sense from simple mistakes: if I erroneously claim that 57 is prime and someone points out that it’s not, I’ll immediately say, “Oops, you’re right,” rather than digging my heels in.)
I’m sympathetic to the criticism that lying isn’t the right word for this; so far my best alternatives are “deceptive” and “misleading.” If someone thinks those are still too inappropriately judgey-blamey, I’m eager to hear alternatives, or to use a neologism for the purposes of a particular conversation, but ultimately, I need a word for the thing.
If an Outer Party member in the world of George Orwell’s 1984 says, “Oceania has always been at war with Eastasia,” even though they clearly remember events from last week, when Oceania was at war with Eurasia instead, I don’t want to call that deep model divergence, coming from a different ontology, or weighing complicated tradeoffs between paradigms. Or at least, there’s more to the story than that. The divergence between this person’s deep model and mine isn’t just a random accident such that I should humbly accept that the Outside View says they’re as likely to be right as me. Uncommon priors require origin disputes, but in this case, I have a pretty strong candidate for an origin dispute that has something to do with the Outer Party member being terrified of the Ministry of Love. And I think that what goes for subjects of a totalitarian state who fear being tortured and murdered, also goes in a much subtler form for upper-middle class people in the Bay Area who fear not getting invited to parties.
Obviously, this isn’t license to indiscriminately say, “You’re just saying that because you’re afraid of not getting invited to parties!” to any idea you dislike. (After all, I, too, prefer to get invited to parties.) But it is reason to be interested in modeling this class of distortion on people’s beliefs.
Judging a person as being misleading implies to me that I have a less accurate model of the world if I take what they say at face value.
Plenty of self-censorship isn’t of that quality. My model might be less accurate than the counterfactual model where the other person shared all the information to which they have access, but it doesn’t get worse through the communication.
There are words like ‘guarded’ that you can use for people who self-censor a lot.
Apologies. A few things to disambiguate and address separately:
1. In that comment I was referring primarily to discussions about the trustworthiness and/or systematic distortion-ness of various EA and rationalist orgs and/or leadership, which I had mentally bucketed as fairly separate from our conversation. BUT even in that context “Only counterargument is base rates” is not a fair summary. I was feeling somewhat frustrated at the time I wrote that but that’s not a good excuse. (The behavior I think I endorse most is trying to avoid continuing the conversation in a comment thread at all, but I’ve obviously been failing hard at that)
2. My take on our prior conversation was more about “things that are socially costly to talk about, that are more like ‘mainstream politics’ than like ‘rationalist politics.’” Yes, there’s a large cluster of things related to mainstream politics and social justice where weighing in at all just feels like it’s going to make my life worse (this is less about not getting invited to parties and more about having more of my life filled with stressful conversations for battles that I don’t think are the best thing to prioritize fighting)
OK. Looking forward to future posts.
The word “self-deception” is often used for this.
The reason it’s still tempting to use “deception” is because I’m focusing on the effects on listeners rather than the self-deceived speaker. If Winston says, “Oceania has always been at war with Eastasia” and I believe him, there’s a sense in which we want to say that I “have been deceived” (even if it’s not really Winston’s fault, thus the passive voice).
Self-deception doesn’t imply other people aren’t harmed, merely that the speaker is deceiving themselves first before they deceive others. Saying “what you said to me was based on self-deception” doesn’t then imply that I wasn’t deceived, merely points at where the deception first occurred.
For instance, the Arbinger Institute uses the term “self-deception” to refer to when someone treats others as objects and forgets they’re people.
FWIW I think “deceptive” and “misleading” are pretty fine here (depends somewhat on context but I’ve thought the language everyone’s been using in this thread so far was fine)
I think the active ingredient in “there’s something resisting correction” has a flavor that isn’t quite captured by “deceptive” (“self-deceptive” is closer). I think the phrase that most captures this for me is “perniciously motivated,” or something like that.