So, rationality largely isn’t actually about doing thinking clearly [...] it’s an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental distinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.
I thought the “rationalist” æsthetic-identity-movement’s marketing literature expressed this very poetically—
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
Regarding Ben’s criticisms of EA: while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or with how he argues for them, simply because I don’t think the arguments are good. This is common for interactions between Ben and EA these days, though Ben doesn’t respond to counter-arguments; when a counter-argument disagrees with him in a way he doesn’t himself accept, he often seems to conclude that his interlocutors are persistently acting in bad faith. I hadn’t interacted directly with Ben much for a while until he wrote the OP this week. So I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find some of his impressions that the EAs he discusses things with are acting in bad faith confusing. At least, I don’t find them a compelling account of people’s real motivations in discourse.
So I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is.
I think the most relevant post by Ben here is “Bad Intent Is a Disposition, Not a Feeling”. (Highly recommended!)
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), and we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on his claim that mens rea is not a real thing. Commenters pointed out problems with that claim, and he himself acknowledged those comments made some good points that would cause him to rethink the theme he was trying to impart with the original post. I searched his blog for both the title of that post and for ‘mens rea’ to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date that post was published onward, so it doesn’t appear he has publicly updated his thoughts. That was over two years ago.
The second post on the topic was more abstract and figurative, using analogy and metaphor to get its conclusion across, so I didn’t totally understand how everything in it related to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was this:
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that that just isn’t the case. Being honest about that sense can lend itself to better modes of public discourse, and it can move communities to states of discourse much different from where the EA and rationality communities currently are. One problem is that I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would amount to hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms wholesale, replacing their own, and that seems extremely unlikely to happen.
Part of the problem is that Benquo seems to construe ‘bad faith’ according to an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog by the commenters AGB and Res. So it’s hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since that isn’t a super high priority for me, I’m not sure I will get around to it. However, there is enough material in Benquo’s posts, and in the discussion in the comments, for me to work with in explaining some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know whether the EA community largely disagrees with the OP for the same reasons I do. Based on some of the material I’ve been provided in the comments here, I have more to work with in finding the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
I’ll take a look at these links. Thanks.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were critical, though somewhat less so, of the rationality community.
were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.
Maybe you meet those qualifications, but as I understand it the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016, as part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within the group, and, given the group’s name, I assume the different people involved were primarily nudged by Vassar. This also precipitated out of Alyssa Vance’s Long-Term World Improvement mailing list.
The crowd doesn’t seem to have continued to the present: the lives of the people involved have obviously changed a lot, and from the outside it doesn’t appear to be as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people still sustaining the effort to criticize EA the way the others were before.
So while I appreciate the disclosure, I don’t know if my previous comment was precise enough: as far as I understand it, the “Vassar Crowd” was a more limited clique that manifested much more in the past than it does in the present.