Yeah, I think this points at a thing that bothers me about Connor’s list, even though it seems clear to me that Connor’s advice should be “in the mix”.
Some imperfect ways of trying to point at the thing:
1. ‘Playing video games all the time even though this doesn’t feel deeply fulfilling or productive’ is bad. ‘Forcing yourself to never have fun and thereby burning out’ is also bad. Outside of the most extreme examples, it can be hard to figure out exactly where to draw the line and what’s healthy, what conduces to flourishing, etc. But just tracking these as two important failure modes, without assuming one of these error categories is universally better than the other, can help.
(I feel like “flourishing” is a better word than “healthy” here, because it’s more… I want to say, “transhumanist”? Acknowledges that life is about achieving good things, not just cautiously avoiding bad things?)
2. I feel like a lot of Connor’s phrasings, taken fully seriously, almost risk… totalizing in the opposite direction? Insofar as that’s a thing. And totalizing toward complacency, mainstream-conformity, and non-ambition leads to sad, soft, quiet failure modes, the absence of many good things; whereas totalizing in the opposite direction leads to louder, more Reddit-viral failure modes. Quiet failures are much harder to notice, so there is a large risk that we’ll be less able to course-correct if we go too far in the ‘stability over innovation’ direction.
3. I feel like the Connor list would be a large overcorrection for most people, since this advice doesn’t build in a way to tell whether you’re going too far in this direction, and most people aren’t at high risk for psychosis/mania/etc.
I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.
It seems clear to me that there are ways of doing the Rationality Community better, but I guess I don’t currently buy that this particular problem is so… core? Or so universal?
What specifically is our evidence that in absolute terms, psychosis-adjacent patterns are a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns, etc., etc.?
4. Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.
I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.
This paragraph especially raised this worry for me:
> Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?
But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.
I really worry about a possible future version of the community that treats ‘getting very excited about rationality and wanting to push it to new heights’ as childishly naive, old hat / obviously could never work, or (worse!) as a clear sign of an “unhealthy” mind.
(Unless, like, we actually reach the point of confidence that we’ve run out of big ways to improve our rationality. If we run out of improvements, then I want to believe we’ve run out of improvements. But I don’t think that’s our situation today.)
5. There’s such a thing as being too incautious, adventurous, and experimental; there’s also such a thing as being too cautious, unadventurous, and insufficiently experimental. I actually think that the rationalists have a lot of both problems, rather than things being heavily stacked in the ‘too incautious’ category. (Though maybe this is because I interact with a different subset of rationalists.)

An idea in this space that makes me feel excited rather than worried is Anna’s description of a “Center for Bridging between Common Sense and Singularity Scenarios” and her examples and proposals in Reality-Revealing and Reality-Masking Puzzles.
I’m excited about the idea of figuring out how to make a more “grounded” rationalist community, one that treats all the crazy x-risk, transhumanism, Bayes, etc. stuff as “just more normality” (or something like that). But I’m more wary of the thing you’re pointing at, which feels more to me like “giving up on the weird stuff” or “trying to build a weirdness-free compartment in your mind” than like trying to integrate the weird rationalist stuff into being a human being.
I think this is also a case of ‘reverse all advice you hear’. No one is at the optimum on most dimensions, so a lot of people will benefit from the advice ‘be more X’ and a lot of people will benefit from the advice ‘be less X’. I’m guessing your (Connor’s) advice applies perfectly to lots of people, but for me…
Even after working at MIRI and living in the Bay for eight years, I don’t have any close rationalist friends who I talk to (e.g.) once a week, and that makes me sad.
I have non-rationalist friends who I do lots of stuff with, but in those interactions I mostly don’t feel like I can fully be ‘me’, because most of the things I’m thinking about moment-to-moment and most of the things that feel deeply important to me don’t fit the mental schemas non-rationalists round things off to. I end up feeling like I have to either play-act at fitting a more normal role, or spend almost all my leisure time bridging inferential gap after inferential gap. (And no, self-modifying to better fit mainstream schemas does not appeal to me!)
I’d love to go to these parties you’re complaining about that are focused on “model-building and… learning”!
Actually, the thing I want is more extreme than that: I’d love to go to more ‘let’s do CFAR-workshop-style stuff together’ or ‘let’s talk about existential risk’ parties.
I think the personal problem I’ve had is the opposite of the one you’re pointing at: I feel like (for my idiosyncratic preferences) there’s usually not enough social affordance to talk about “real stuff” at rationalist-hosted parties, versus talking about pleasantries. This makes me feel like I’m playing a role / reading a script, which I find draining and a little soul-crushing.
In contrast, events where I don’t feel like there’s a ‘pretend to be normal’ expectation (and where I can talk about my bizarre actual goals and problems) feel very freeing and fulfilling to me, and like they’re feeding me nutrients I’ve been low on rather than empty calories.
> Making donations to other [lower-EV] causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode
OK, but what about the skills of ‘doing the thing you think is highest-EV’, ‘trying to figure out what the highest-EV thing is’, or ‘developing deeper and more specialized knowledge on the highest-EV things (vs. flitting between topics)’? I feel like those are pretty important skills too, and more neglected by the world at large; and they have the advantage of being good actions on their own terms, rather than relying on a speculative theory that says this might help me do higher-EV things later.
I feel especially excited about trying to come up with new projects that might be extremely-high-EV, rather than just evaluating existing stuff.
I again feel like in my own life, I don’t have enough naive EA conversations about humanity’s big Hamming problems / bottlenecks. (Which is presumably mostly my fault! Certainly it’s up to me to fix this stuff. But if the community were uniformly bad in the opposite direction, then I wouldn’t expect to be able to have this problem.)
> I’d like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own.
Rationalist solstice is a real holiday! 😠
I went to a mostly-unironic rationalist July 4 party that I liked a lot, which updates me toward your view. But I think I’d still mostly come down on the opposite side of this tradeoff, if I were only optimizing for my own happiness.
‘No Christmas’ feels sad and cut-off-from-mainstream-culture to me, but ‘pantomiming Christmas without endorsing its values or virtues’ feels empty to me. “Rationalizing” Christmas feels like the perfect approach here (for me personally): make a new holiday that’s about things I actually care about and value, that draws out neglected aspects of Christmas (or precursor holidays like Saturnalia). I’d love to attend a rationalist seder, a rationalist Easter, a rationalist Chanukkah, etc. (Where ‘rationalist’ refers to changing the traditions themselves, not just ‘a bunch of rationalists celebrating together in a way that studiously tries to avoid any acknowledgment of anything weird about us’.)
I think that many people (and I have not decided yet if I am one such) may respond to this with “one man’s modus tollens is another’s modus ponens”.

That is, one might read things like this:

> I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.
… and say: “yes, exactly, that’s the point”.
Or, one might read this:
> Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.
… and say: “yes, exactly, and that’s bad”.
(Does that seem absurd to you? But consider that one might not take at face value the notion that the change in response to new information is warranted, or that the “long-term values” have been properly apprehended, or that they are even real rather than confabulated; etc.)
One might read this:
> I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.
… and say: “yes, just so, and this is good, because many of the ways in which this community is different from a randomly selected community are bad”.
> This paragraph especially raised this worry for me:
>
> > Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
>
> I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?
But is this unhealthy and unreasonable, or is it actually prudent? In other words—to continue the previous pattern—one might read this:
> But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.
… and say: “yes, we have erred much too far in the opposite direction, this is precisely a good change to make”.
We can put things in this way: you are saying, essentially, that Connor’s criticisms and recommendations indicate changes that would undermine the essence of the rationalist community. But might one not say, in response: “yes, and that’s the point, because the rationalist community is fundamentally a bad idea and does more harm than good by existing”? (Note that this is different from saying that rationality, either as a meme or as a personal principle, is bad or harmful somehow.)

Yeah, I disagree with that view.
To keep track of the discussion so far, it seems like there are at least three dimensions of disagreement:
1. Mainstream vs. Rationalists Cage Match
1A. Overall, the rationality community is way better than mainstream society.
1B. The rationality community is about as good as mainstream society.
1C. The rationality community is way worse than mainstream society.
My model is that I, Connor, Anna, and Vassar agree with 1A, and hypothetical-Said-commenter agrees with 1C. (The rationalists are pretty weird, so it makes sense that 1B would be a less common view.)
2. Psychoticism vs. Anti-Psychoticism
2A. The rationality community has a big, highly tractable problem: it’s way too high on ‘broadly psychoticism-adjacent characteristics’.
2B. The rationality community has a big, highly tractable problem: it’s way too low on those characteristics.
2C. The rationality community is basically fine on this metric. Like, we should be more cautious around drugs, but aside from drug use there isn’t a big clear thing it makes sense for most community members to change here.
My model is that Connor, Anna, and hypothetical-Said-commenter endorse 2A, Vassar endorses 2B, and I currently endorse 2C. (I think there are problems here, but more like ‘some community members are immunocompromised and need special protections’, less like ‘there’s an obesity epidemic ravaging the community’.)
Actually, I feel a bit confused about Anna’s view here, since she seems very critical of mainstream society’s (low-psychoticism?) culture, but she also seems to think the rationalist community is causing lots of unnecessary harm by destabilizing community members, encouraging overly-rapid changes of belief and behavior, etc.
If I had to speculate (probably very wrongly) about Anna’s view here, maybe it’s that there’s a third path where you take ideas incredibly seriously, but otherwise are very low-psychoticism and very ‘grounded’?
The mental image that comes to mind for me is a 60-year-old rural east coast libertarian with a very ‘get off my lawn, you stupid kids’ perspective on mainstream culture. Relatively independent, without being devoid of culture/tradition/community; takes her own ideology very seriously, and doesn’t compromise with the mainstream Modesty-style; but also is very solid, stable, and habit-based, and doesn’t constantly go off and do wild things just because someone tossed the idea out there.
(My question would then be whether you can have all those things plus rationality, or whether the rationality would inherently ruin it because you keep having to update all your beliefs, including your beliefs about your core identity and values. Also, whether this is anything remotely like what Anna or anyone else would advocate?)
3. Rationality Community: Good or Bad?
There are various ways to operationalize this, but I’ll go with:
3A. The rationality community is doing amazing. There isn’t much to improve on. We’re at least as cool as Dath Ilan teenagers, and plausibly cooler.
3B. The rationality community is doing OK. There’s some medium-sized low-hanging fruit we could grab to realize modest improvements, and some large high-hanging fruit we can build toward over time, but mostly people are being pretty sensible and the norms are fine (somewhere between “meh” and “good”).
3C. The rationality community is doing quite poorly. There’s large, known low-hanging fruit we could use to easily transform the community into a way way better (happier, more effective, etc.) entity.
3D. The rationality community is irredeemably bad, isn’t doing useful stuff, should dissolve, etc.
My model is that I endorse 3B (‘we’re doing OK’); Connor, Anna, and Vassar endorse 3C (‘we’re doing quite poorly’); and hypothetical-Said-commenter endorses 3D.
This maps pretty well onto people’s views-as-modeled-by-me in question 2, though you could obviously think psychoticism isn’t a big rationalist problem while also thinking there are other huge specific problems / low-hanging fruit for the rationalists.
I guess I’m pretty sympathetic to 3C. Maybe I’d endorse 3C instead in a different mood. If I had to guess at the big thing rationalists are failing at, it would probably be ‘not enough vulnerability / honesty / Hamming-ness’ and/or ‘not enough dakka / follow-through / commitment’?
I probably completely mangled some of y’all’s views, so please correct me here.
A lot of the comments in response to Connor’s point are turning this into a single axis with ‘mainstream norms’ at one pole and ‘weird/DIY norms’ at the other, and trying to play tug-of-war; but I actually think the thing is way more nuanced than that framing suggests.
Proposal:
Investigate the phenomenon of totalization. Where does it come from, what motivates it, what kinds of people fall into it… To what extent is it coming from external vs. internal pressure? Are there ‘good’ kinds of totalizing and ‘bad’ kinds?
Among people who totalize, what kinds of vulnerabilities do they experience as a result? Do they get exploited more by bad actors? Do they make common sense mistakes? Etc.
I am willing to bet there is a ‘good’ kind of totalizing and a ‘bad’ kind. And I think my comment about elitism was one of the bad kinds. And I think it’s not that hard to tell which is which? I think it’s hard to tell ‘from the inside’ but I… think I could tell from the outside with enough observation and asking them questions?
A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (an underspecified term here; I don’t want to unpack it right now), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy.
I would test that hypothesis, if it were my project. Others may have different hypotheses.
I want to note that the view / reasoning given in my comment applies (or could apply) quite a bit more broadly than the specific “psychoticism” issue (and indeed I took Connor’s top-level comment to be aimed more broadly than that). (I don’t know, actually, that I have much to say about that specific issue, beyond what I’ve already said elsethread here.)
I do like the “rural east coast libertarian” image. (As for the “can you have that and also rationality” question: well, why not? But perhaps the better question is “can you have that and Bay Area rationalist culture”, to which the answer might be, “why would you want to?”)
(I would not take this modus tollens. I don’t think the “community” is even close to fundamentally bad; I just think some serious reforms are in order for some of the culture that we let younger people build here.)
Indeed, I did not suspect that you would. But (I conjecture?) you also do not agree with Rob’s characterizations of the consequences of your points. It’s someone who agrees with Rob’s positive (descriptive) take, but opposes his normative views on the community, who would take the other logical branch here.
> a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns
Well, Connor’s list would probably help with most of these as well. (Not that I disagree with your point.)