I want to bring up a concept I found very useful for thinking about how to become less susceptible to these sorts of things.
(NB that while I don’t agree with much of the criticism here, I do think “the community” does modestly increase psychosis risk, and the Ziz and Vassar bubbles do so to extraordinary degrees. I also think there’s a bunch of low-hanging fruit here, so I’d like us to take this seriously and get psychosis risk lower than baseline.)
(ETA because people bring this up in the comments: law of equal and opposite advice applies. Many people seem to not have the problems that I’ve seen many other people really struggle with. That’s fine. Also I state these strongly—if you took all this advice strongly, you would swing way too far in the opposite direction. I do not anticipate anyone will do that but other people seem to be concerned about it so I will note that here. Please adjust the tone and strength-of-claim until it feels right to you, unless you are young and new to the “community” and then take it more strongly than feels right to you.)
Anyways, the concept: I heard the word “totalizing” on Twitter at some point (h/t to somebody). It now seems fundamental to my understanding of these dynamics. “Totalizing” was used in the sense of a “totalizing ideology”. This may just be a subculture term without a realer definition, but it means something like “an ideology that claims to affect/define meaning for all parts of your life, rather than just some”—and implicitly also that this ideology has a major effect and causes some behaviors at odds with default behavior.
This definition heavily overlaps with the stuff people typically associate with cults. For example, discouraging contact with family/outside, or having a whole lot hanging on whether the leaders approve of you. Both of these clearly affect how much you can have going on in your “outside” life.
Note that obviously totalization is on an axis. It’s not just about time spent on an ideology, but how much mental space that ideology takes up.
I think some of the biggest negative influences on me in the rationality community also had the trait of pushing towards totalization, though they were unalike in many other ways. One was ideological and peer pressure to turn socializing/parties/entertainment into networking/learning, which meant that part of my life could also become about the ideology. Another was the idea of turning my thoughts/thinking/being into more fodder to think about thinking processes and self-improve, which cannibalized more of my default state.
I think engaging with new, more totalizing versions of the ideology or culture is a major way that people get more psychotic. Consider the maximum-entropy model of psychosis, so named because you aren’t specifying any of the neural or psychological mechanisms; you’re taking strictly what you can verify and being maximally agnostic about it. In this model, you might define psychosis as when “thought gets too far away from normal, and your new mental state is devoid of many of the guardrails/protections/negative-feedback-loops/sanity-checks that your normal mental states have.” (This model gels nicely with the fact that psychosis can be treated so well via drinking water, doing dishes, not thinking for awhile, tranquilizers, socializing, etc. (h/t anon).) In this max-ent model of psychosis, it is pretty obvious how totalization leads to psychosis: you change more state, reduce more guardrails, roll your own psychological protections that are guaranteed to have flaws, and cut out all the normal stuff in your life that resets state. (Changing a bunch of psychological stuff at once is generally a terrible idea for the same reason, though that’s a general psychosis tip rather than a totalization-related one.)
I still don’t have a concise or great theoretical explanation for why totalization seems so predictive of ideological damage. I have a lot of reasons for why it seems clearly bad regarding your belief-structure, and some other reasons why it may just be strongly correlated with overreach in ways that aren’t perfectly causal. But without getting into precisely why, I think it’s an important lens to view the rationalist “community” in.
So I think one of the main things I want to see less of in the rationalist/EA “communities” is totalization.
This has a billion object-level points, most of which will be left as an exercise to the reader:
Don’t proselytize EA to high schoolers. Don’t proselytize other crazy ideologies without guardrails to young people. Only do that after your ideology has proven to make a healthy community with normal levels of burnout/psychosis. I think we can get there in a few years, but I don’t think we’re there yet. It just actually takes time to evolve the right memes, unfortunately.
To repeat the perennial criticism… it makes sense that the rationality community ends up pretty insular, but it seems good for loads of reasons to have more outside contact and ties. I think at the very least, encouraging people to hire outside the community and do hobbies outside the community are good starting points.
I’ve long felt that at parties and social events (in the Bay Area anyways) less time should be spent on model-building and networking and learning, and more time should be spent on anything else. Spending your time networking or learning at parties is fine if those are pretty different than your normal life, but we don’t really have that luxury.
Someone recently tried to tell me they wanted to put all their charitable money into AI safety specifically, because it was their comparative advantage. I disagree with this even on a personal basis with small amounts. Making donations to other causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode. I think paying 10% overhead of charitable money to lower-EV causes is going to be much better for AI safety in the long-run due to seriousness-in-exploration, AND I shouldn’t even have to justify it as such—I should be able to say something like “it’s just unvirtuous to put all eggs in one basket, don’t do it”. I think the old arguments about obviously putting all your money into the highest-EV charity at a given time are similarly wrong.
I love that Lightcone has a bunch of books outside the standard rationalist literature, about Jobs, Bezos, LKY, etc etc.
In general, I don’t like when people try to re-write social mechanisms (I’m fine with tinkering, small experiments, etc). This feels to me like one of the fastest ways to de-stabilize people, as well as the dumbest Chesterton’s fence to take down because of how socializing is in the wheelhouse of cultural gradient descent and not at all remotely in the wheelhouse of theorizing.
I’m much more wary of psychological theorizing, x-rationality, etc due to basically the exact points in the bullet above—your mind is in the wheelhouse of gradient descent, not guided theorizing. I walk this one—I quit my last project in part because of this. Other forms of tinkering-style psychological experimentation or growth are likely more ok. But even “lots of debugging” seems bad here, basically because it gives you too much episteme of how your brain works and not enough techne or metis to balance it out. You end up subtly or not-subtly pushing in all sorts of directions that don’t work, and it causes problems. I think the single biggest improvement to debugging (both for ROI and for health) is if there was a culture of often saying “this one’s hopeless, leave it be” much earlier and explicitly, or saying “yeah almost all of this is effectively-unchangeable”. Going multiple levels down the tree to solve a bug is going too far. It’s too easy to get totalized by the bug-fixing spirit if you regard everything as mutable.
As dumb as jobs are, I’m much more pro-job than I used to be for a bunch of reasons. The core reasons are obv not about psychosis, but other facets of totalization-escape seem like a major deal.
As dumb as regular schedules are, ditto. Having things that you repeatedly have to succeed in doing leaves you genuinely much less room for going psychotic. Being nocturnal and such are also offenders in this category.
I’d like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own. E.g. Solstice instead of Christmas seems fine, but also we should have a lot of emphasis on Christmas too? I had a housemate who ran amazing Easter celebrations in Event Horizon that were extremely positive, and I loved that they captured the real spirit of Easter rather than trying to inject the spirit of Rationality into the corpse of Easter to create some animated zombie holiday. In this vein I also love Petrov Day but slightly worry that we focus much less on July 4th or Thanksgiving or other holidays that are more shared with others. I guess maybe I should just be glad we haven’t rationalized those...
Co-dependency and totalizing relationships seem relevant here although not much new to say.
Anna’s crusade for hobbies over the last several years has seemed extremely useful on this topic directly and indirectly.
I got one comment on a draft of this about how someone basically still endorsed years later their totalization after their CFAR workshop. I think this is sort of fine—very excitable and [other characterizations] people can easily become fairly totalized when entering a new world. However, I still think that a culture which totalized them somewhat less would have been better.
Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
Yeah, I think this points at a thing that bothers me about Connor’s list, even though it seems clear to me that Connor’s advice should be “in the mix”.
Some imperfect ways of trying to point at the thing:
1. ‘Playing video games all the time even though this doesn’t feel deeply fulfilling or productive’ is bad. ‘Forcing yourself to never have fun and thereby burning out’ is also bad. Outside of the most extreme examples, it can be hard to figure out exactly where to draw the line and what’s healthy, what conduces to flourishing, etc. But just tracking these as two important failure modes, without assuming one of these error categories is universally better than the other, can help.
(I feel like “flourishing” is a better word than “healthy” here, because it’s more… I want to say, “transhumanist”? Acknowledges that life is about achieving good things, not just cautiously avoiding bad things?)
2. I feel like a lot of Connor’s phrasings, taken fully seriously, almost risk… totalizing in the opposite direction? Insofar as that’s a thing. And totalizing toward complacency, mainstream-conformity, and non-ambition leads to sad, soft, quiet failure modes, the absence of many good things; whereas totalizing in the opposite direction leads to louder, more Reddit-viral failure modes; so there is a large risk that we’ll be less able to course-correct if we go too far in the ‘stability over innovation’ direction.
3. I feel like the Connor list would be a large overcorrection for most people, since this advice doesn’t build in a way to tell whether you’re going too far in this direction, and most people aren’t at high risk for psychosis/mania/etc.
I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.
It seems clear to me that there are ways of doing the Rationality Community better, but I guess I don’t currently buy that this particular problem is so… core? Or so universal?
What specifically is our evidence that in absolute terms, psychosis-adjacent patterns are a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns, etc., etc.?
4. Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.
I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.
This paragraph especially raised this worry for me:
Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?
But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.
I really worry about a possible future version of the community that treats ‘getting very excited about rationality and wanting to push it to new heights’ as childishly naive, old hat / obviously could never work, or (worse!) as a clear sign of an “unhealthy” mind.
(Unless, like, we actually reach the point of confidence that we’ve run out of big ways to improve our rationality. If we run out of improvements, then I want to believe we’ve run out of improvements. But I don’t think that’s our situation today.)
5. There’s such a thing as being too incautious, adventurous, and experimental; there’s also such a thing as being too cautious and unadventurous, and insufficiently experimental. I actually think that the rationalists have a lot of both problems, rather than things being heavily stacked in the ‘too incautious’ category. (Though maybe this is because I interact with a different subset of rationalists.)
An idea in this space that makes me feel excited rather than worried is Anna’s description of a “Center for Bridging between Common Sense and Singularity Scenarios” and her examples and proposals in Reality-Revealing and Reality-Masking Puzzles.
I’m excited about the idea of figuring out how to make a more “grounded” rationalist community, one that treats all the crazy x-risk, transhumanism, Bayes, etc. stuff as “just more normality” (or something like that). But I’m more wary of the thing you’re pointing at, which feels more to me like “giving up on the weird stuff” or “trying to build a weirdness-free compartment in your mind” than like trying to integrate the weird rationalist stuff into being a human being.
I think this is also a case of ‘reverse all advice you hear’. No one is at the optimum on most dimensions, so a lot of people will benefit from the advice ‘be more X’ and a lot of people will benefit from the advice ‘be less X’. I’m guessing your (Connor’s) advice applies perfectly to lots of people, but for me...
Even after working at MIRI and living in the Bay for eight years, I don’t have any close rationalist friends who I talk to (e.g.) once a week, and that makes me sad.
I have non-rationalist friends who I do lots of stuff with, but in those interactions I mostly don’t feel like I can fully be ‘me’, because most of the things I’m thinking about moment-to-moment and most of the things that feel deeply important to me don’t fit the mental schemas non-rationalists round things off to. I end up feeling like I have to either play-act at fitting a more normal role, or spend almost all my leisure time bridging inferential gap after inferential gap. (And no, self-modifying to better fit mainstream schemas does not appeal to me!)
I’d love to go to these parties you’re complaining about that are focused on “model-building and… learning”!
Actually, the thing I want is more extreme than that: I’d love to go to more ‘let’s do CFAR-workshop-style stuff together’ or ‘let’s talk about existential risk’ parties.
I think the personal problem I’ve had is the opposite of the one you’re pointing at: I feel like (for my idiosyncratic preferences) there’s usually not enough social affordance to talk about “real stuff” at rationalist-hosted parties, versus talking about pleasantries. This makes me feel like I’m playing a role / reading a script, which I find draining and a little soul-crushing.
In contrast, events where I don’t feel like there’s a ‘pretend to be normal’ expectation (and where I can talk about my bizarre actual goals and problems) feel very freeing and fulfilling to me, and like they’re feeding me nutrients I’ve been low on rather than empty calories.
“Making donations to other [lower-EV] causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode”
OK, but what about the skills of ‘doing the thing you think is highest-EV’, ‘trying to figure out what the highest-EV thing is’, or ‘developing deeper and more specialized knowledge on the highest-EV things (vs. flitting between topics)’? I feel like those are pretty important skills too, and more neglected by the world at large; and they have the advantage of being good actions on their own terms, rather than relying on a speculative theory that says this might help me do higher-EV things later.
I feel especially excited about trying to come up with new projects that might be extremely-high-EV, rather than just evaluating existing stuff.
I again feel like in my own life, I don’t have enough naive EA conversations about humanity’s big Hamming problems / bottlenecks. (Which is presumably mostly my fault! Certainly it’s up to me to fix this stuff. But if the community were uniformly bad in the opposite direction, then I wouldn’t expect to be able to have this problem.)
“I’d like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own.”
Rationalist solstice is a real holiday! 😠
I went to a mostly-unironic rationalist July 4 party that I liked a lot, which updates me toward your view. But I think I’d still mostly come down on the opposite side of this tradeoff if I were only optimizing for my own happiness.
‘No Christmas’ feels sad and cut-off-from-mainstream-culture to me, but ‘pantomiming Christmas without endorsing its values or virtues’ feels empty to me. “Rationalizing” Christmas feels like the perfect approach here (for me personally): make a new holiday that’s about things I actually care about and value, that draws out neglected aspects of Christmas (or precursor holidays like Saturnalia). I’d love to attend a rationalist seder, a rationalist Easter, a rationalist Chanukkah, etc. (Where ‘rationalist’ refers to changing the traditions themselves, not just ‘a bunch of rationalists celebrating together in a way that studiously tries to avoid any acknowledgment of anything weird about us’.)
I think that many people (and I have not decided yet if I am one such) may respond to this with “one man’s modus tollens is another’s modus ponens”.
That is, one might read things like this:
I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.
… and say: “yes, exactly, that’s the point”.
Or, one might read this:
Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.
… and say: “yes, exactly, and that’s bad”.
(Does that seem absurd to you? But consider that one might not take at face value the notion that the change in response to new information is warranted, that the “long-term values” have been properly apprehended—or even real, instead of confabulated; etc.)
One might read this:
I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.
… and say: “yes, just so, and this is good, because many of the ways in which this community is different from a randomly selected community are bad”.
This paragraph especially raised this worry for me:
Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?
But is this unhealthy and unreasonable, or is it actually prudent? In other words—to continue the previous pattern—one might read this:
But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.
… and say: “yes, we have erred much too far in the opposite direction, this is precisely a good change to make”.
We can put things in this way: you are saying, essentially, that Connor’s criticisms and recommendations indicate changes that would undermine the essence of the rationalist community. But might one not say, in response: “yes, and that’s the point, because the rationalist community is fundamentally a bad idea and does more harm than good by existing”? (Note that this is different from saying that rationality, either as a meme or as a personal principle, is bad or harmful somehow.)
Yeah, I disagree with that view.
To keep track of the discussion so far, it seems like there are at least three dimensions of disagreement:
1. Mainstream vs. Rationalists Cage Match
1A. Overall, the rationality community is way better than mainstream society.
1B. The rationality community is about as good as mainstream society.
1C. The rationality community is way worse than mainstream society.
My model is that I, Connor, Anna, and Vassar agree with 1A, and hypothetical-Said-commenter agrees with 1C. (The rationalists are pretty weird, so it makes sense that 1B would be a less common view.)
2. Psychoticism vs. Anti-Psychoticism
2A. The rationality community has a big, highly tractable problem: it’s way too high on ‘broadly psychoticism-adjacent characteristics’.
2B. The rationality community has a big, highly tractable problem: it’s way too low on those characteristics.
2C. The rationality community is basically fine on this metric. Like, we should be more cautious around drugs, but aside from drug use there isn’t a big clear thing it makes sense for most community members to change here.
My model is that Connor, Anna, and hypothetical-Said-commenter endorse 2A, Vassar endorses 2B, and I currently endorse 2C. (I think there are problems here, but more like ‘some community members are immunocompromised and need special protections’, less like ‘there’s an obesity epidemic ravaging the community’.)
Actually, I feel a bit confused about Anna’s view here, since she seems very critical of mainstream society’s (low-psychoticism?) culture, but she also seems to think the rationalist community is causing lots of unnecessary harm by destabilizing community members, encouraging overly-rapid changes of belief and behavior, etc.
If I had to speculate (probably very wrongly) about Anna’s view here, maybe it’s that there’s a third path where you take ideas incredibly seriously, but otherwise are very low-psychoticism and very ‘grounded’?
The mental image that comes to mind for me is a 60-year-old rural east coast libertarian with a very ‘get off my lawn, you stupid kids’ perspective on mainstream culture. Relatively independent, without being devoid of culture/tradition/community; takes her own ideology very seriously, and doesn’t compromise with the mainstream Modesty-style; but also is very solid, stable, and habit-based, and doesn’t constantly go off and do wild things just because someone tossed the idea out there.
(My question would then be whether you can have all those things plus rationality, or whether the rationality would inherently ruin it because you keep having to update all your beliefs, including your beliefs about your core identity and values. Also, whether this is anything remotely like what Anna or anyone else would advocate?)
3. Rationality Community: Good or Bad?
There are various ways to operationalize this, but I’ll go with:
3A. The rationality community is doing amazing. There isn’t much to improve on. We’re at least as cool as Dath Ilan teenagers, and plausibly cooler.
3B. The rationality community is doing OK. There’s some medium-sized low-hanging fruit we could grab to realize modest improvements, and some large high-hanging fruit we can build toward over time, but mostly people are being pretty sensible and the norms are fine (somewhere between “meh” and “good”).
3C. The rationality community is doing quite poorly. There’s large, known low-hanging fruit we could use to easily transform the community into a way way better (happier, more effective, etc.) entity.
3D. The rationality community is irredeemably bad, isn’t doing useful stuff, should dissolve, etc.
My model is that I endorse 3B (‘we’re doing OK’); Connor, Anna, and Vassar endorse 3C (‘we’re doing quite poorly’); and hypothetical-Said-commenter endorses 3D.
This maps pretty well onto people’s views-as-modeled-by-me in question 2, though you could obviously think psychoticism isn’t a big rationalist problem while also thinking there are other huge specific problems / low-hanging fruit for the rationalists.
I guess I’m pretty sympathetic to 3C. Maybe I’d endorse 3C instead in a different mood. If I had to guess at the big thing rationalists are failing at, it would probably be ‘not enough vulnerability / honesty / Hamming-ness’ and/or ‘not enough dakka / follow-through / commitment’?
I probably completely mangled some of y’all’s views, so please correct me here.
A lot of the comments in response to Connor’s point are turning this into a 2D axis with ‘mainstream norms’ on one side and ‘weird/DIY norms’ on the other and trying to play tug-of-war, but I actually think the thing is way more nuanced than this suggests.
Proposal:
Investigate the phenomenon of totalization. Where does it come from, what motivates it, what kinds of people fall into it… To what extent is it coming from external vs internal pressure? Are there ‘good’ kinds of totalizing and ‘bad’ kinds?
Among people who totalize, what kinds of vulnerabilities do they experience as a result? Do they get exploited more by bad actors? Do they make common sense mistakes? Etc.
I am willing to bet there is a ‘good’ kind of totalizing and a ‘bad’ kind. And I think my comment about elitism was one of the bad kinds. And I think it’s not that hard to tell which is which? I think it’s hard to tell ‘from the inside’ but I… think I could tell from the outside with enough observation and asking them questions?
A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (underspecified term here, I don’t want to unpack rn), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy.
I would test that hypothesis, if it were my project. Others may have different hypotheses.
I want to note that the view / reasoning given in my comment applies (or could apply) quite a bit more broadly than the specific “psychoticism” issue (and indeed I took Connor’s top-level comment to be aimed more broadly than that). (I don’t know, actually, that I have much to say about that specific issue, beyond what I’ve already said elsethread here.)
I do like the “rural east coast libertarian” image. (As far as “can you have that and also rationality” question, well, why not? But perhaps the better question is “can you have that and Bay Area rationalist culture”—to which the answer might be, “why would you want to?”)
(I would not take this modus tollens, I don’t think the “community” is even close to fundamentally bad, I just think some serious reforms are in order for some of the culture that we let younger people build here.)
Indeed, I did not suspect that you would—but (I conjecture?) you also do not agree with Rob’s characterizations of the consequences of your points. It’s one who agrees with Rob’s positive take, but opposes his normative views on the community, that would take the other logical branch here.
Rationality ought to be totalizing. https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory
(Also, I think rationality should still be less totalizing than many people take it to be, because a lot of people replace common sense with rationality. Instead one should totalize themselves very slowly, over years, watching for all sorts of mis-steps and mistakes, and merge their past life with their new life. Sure, rationality will eventually pervade your thinking, but that doesn’t mean at age 22 you throw out all of society’s wisdom and roll your own.)
Reservationism is the proper antidote to the (prematurely) totalizing nature of rationality.
That is: take whatever rationality tells you, and judge it with your own existing common sense, practical reason, and understanding of the world. Reject whatever seems to you to be unreasonable. Take on whatever seems to you to be right and proper. Excise or replace existing parts of your epistemology and worldview only when it genuinely seems to you that those parts are dysfunctional or incorrect, regardless of what the rationality you encounter is telling you about them.
(Don’t take this quick summary as a substitute for reading the linked essay; read it yourself, judge it for yourself.)
Note, by the way, that rationality—as taught in the Sequences—already recommends this! If anyone fails to approach the practice of rationality in this proper way, they are failing to do that which we have explicitly been told to do! If your rationality is “prematurely totalizing”, then you’re doing it wrong.
Consider also how many times we have heard a version of this: “When I read the Sequences, the ideas found therein seemed so obvious—like they’d put into words things I’ve always somehow known or thought, but had never been able to formulate so clearly and concisely!”. This is not a coincidence! If you learn of a “rationality”-related idea, and it seems to you to be obviously correct, such that you find that not only is it obvious that you should integrate it into your worldview, but indeed that you’ve already integrated it (so naturally and perfectly does it fit)—well, good! But if you encounter an idea that is strange, and counterintuitive, then examine it well, before you rush to integrate it; examine it with your existing reason—which will necessarily include all the “rationality” that you have already carefully and prudently integrated.
I don’t think there’s actually a contradiction between Eliezer’s post and Connor’s comment. But maybe you should bring up specifics if you think there is one.
I resonate as someone who wanted to ‘totalize’ themselves when I lived in the Bay Area rationalist scene. One hint as to why: I have felt, from a young age, compelled towards being one of the elite. I don’t think this is the case for most rationalists or anything, but noting my own personal motivation in case this helps anyone introspect on their own motivations more readily.
It was important for my identity / ego to be “one of the top / best people” and to associate with the best people. I had a natural way of dismissing anyone I thought was “below” my threshold of worthiness—I basically “didn’t think about them” and had no room in my brain for them. (I recognize the problematic-ness of that now? Like these kinds of thoughts lead to genocide, exploitation, runaway power, slavery, and a bunch of other horrible things. As such, I now find this ‘way of seeing’ morally repulsive.)
The whole rationality game was of egoic interest to me, because it seemed like a clear and even correct way of distinguishing the elite from the non-elite. Obviously Eliezer and Anna and others were just better than other people and better at thinking which is hugely important obviously and AI risk is something that most people don’t take seriously oh my god what is wrong with most people ahhh we’re all gonna die. (I didn’t really think thoughts like this or feel this way. But it would take more for me to give an accurate representation so I settled for a caricature. I hope you’re more charitable to your own insides.)
When a foolish ego wants something like this, it basically does everything it can to immerse themselves in it, and while it’s very motivating and good for learning, it is compelled towards totalization and will make foolish sacrifices. In the same way, perhaps, that young pretty Koreans sacrifice for the sake of becoming an idol in the kpop world.
MAPLE is like rehab for ego addicts. I find myself visiting my parents each year (after not having spoken to them for a decade) and valuing ‘just spending time together with people’. Like going bowling for the sake of togetherness, more than for the sake of bowling. And people here have a plethora of hobbies like woodworking, playing music, juggling, mushroom picking, baking, etc. Some people here want to totalize but are largely frustrated by their inability to do so to the extent they want to, and I think it’s an unhealthy addiction pattern that they haven’t figured out how to break. :l
I note that the things which you’re resonating with, which Connor proposes and which you expect would have helped you, or helped protect you...
...protect you from things which were not problems for me.
Which is not to say that those things are bad. Like, saving people from problems they have (that I do not have) sounds good to me.
But it does mean that there is [a good thing] for at least [some people] already, and while it may be right to trade off against that, I would want us to be eyes-open that it might be a tradeoff, rather than assuming that sliding in the Connor-Unreal direction is strictly and costlessly good.
Hmm, I want to point out I did not say anything about what I expected would have helped me or helped ‘protect’ me. I don’t see anything on that in my comment…
I also don’t think it’d be good for me to be saved from my problems...? but maybe I’m misunderstanding what you meant.
I definitely like Connor’s post. My “hear hear” was a kind of friendly encouragement for him speaking to something that felt real. I like the totalization concept. Was a good comment imo.
I do not particularly endorse his proposal… It seems like a non-starter. A better proposal might be to run some workshops or something that try to investigate this ‘totalization’ phenomenon in the community and what’s going on with it. That sounds fun! I’d totally be into doing this. Prob can’t though.
I agree with most of this point. I’ve added an ETA to the original to reflect this. My quibble (that I think is actually important) is that I think it should be less of a tradeoff and more of an {each person does the thing that is right for them}.
Endorsed, but that means when we’re talking about setting group norms and community standards, what we’re really shooting for is stuff that makes all the options available to everyone, and which helps people figure out what would be good for them as individuals.
Where one attractor near what you were proposing (i.e. not what you were proposing but what people might hear in your proposal, or what your proposal might amount to in practice) is “new way good, old way bad.”
Instead of “old way insufficient, new way more all-encompassing and cosmopolitan.”
Yeah, ideally would have lampshaded this more. My bad.
The part that gets extra complex is that I personally think ~2/3+ of people who say totalization is fine for them are in fact wrong and are missing out on tons of subtle things that you don’t notice until longer-term. But obviously the most likely thing is that I’m wrong about this. Hard to tell either way. I’d like to point this out more somehow so I can find out, but I’d sort of hoped my original comment would make things click for people without further time. I suppose I’ll have to think about how to broach this further.
I want to bring up a concept I found very useful for thinking about how to become less susceptible to these sorts of things.
(NB that while I don’t agree with much of the criticism here, I do think “the community” does modestly increase psychosis risk, and the Ziz and Vassar bubbles do so to extraordinary degrees. I also think there’s a bunch of low-hanging fruit here, so I’d like us to take this seriously and get psychosis risk lower than baseline.)
(ETA because people bring this up in the comments: law of equal and opposite advice applies. Many people seem to not have the problems that I’ve seen many other people really struggle with. That’s fine. Also I state these strongly—if you took all this advice strongly, you would swing way too far in the opposite direction. I do not anticipate anyone will do that but other people seem to be concerned about it so I will note that here. Please adjust the tone and strength-of-claim until it feels right to you, unless you are young and new to the “community” and then take it more strongly than feels right to you.)
Anyways, the concept: I heard the word “totalizing” on Twitter at some point (h/t to somebody). It now seems fundamental to my understanding of these dynamics. “Totalizing” was used in the sense of a “totalizing ideology”. This may just be a subculture term without a realer definition, but it means something like “an ideology that claims to affect/define meaning for all parts of your life, rather than just some”—and implicitly also that this ideology has a major effect and causes some behaviors at odds with default behavior.
This definition heavily overlaps with the stuff people typically associate with cults. For example, discouraging contact with family/outside, or having a whole lot hanging on whether the leaders approve of you. Both of these clearly affect how much you can have going on in your “outside” life.
Note that obviously totalization is on an axis. It’s not just about time spent on an ideology, but how much mental space that ideology takes up.
I think some of the biggest negative influences on me in the rationality community also had the trait of pushing towards totalization, though were unalike in many other ways. One was ideological and peer pressure to turn socializing/parties/entertainment into networking/learning, which meant that part of my life also could become about the ideology. Another was the idea of turning my thoughts/thinking/being into more fodder to think about thinking processes and self-improve, which cannibalized more of my default state.
I think engaging with new, more totalizing versions of the ideology or culture is a major way that people get more psychotic. Consider the maximum-entropy model of psychosis, so named because you aren’t specifying any of the neural or psychological mechanisms, you’re taking strictly what you can verify and being maximally agnostic about it. In this model, you might define psychosis as when “thought gets too far away from normal, and your new mental state is devoid of many of the guardrails/protections/negative-feedback-loops/sanity-checks that your normal mental states have.” (This model gels nicely with the fact that psychosis can be treated so well via drinking water, doing dishes, not thinking for awhile, tranquilizers, socializing, etc. (h/t anon).) In this max-ent model of psychosis, it is pretty obvious how totalization leads to psychosis. Changing more state, reducing more guardrails, rolling your own psychological protections that are guaranteed to have flaws, and cutting out all the normal stuff in your life that resets state. (Changing a bunch of psychological stuff at once is generally a terrible idea for the same reason, though that’s a general psychosis tip rather than a totalization-related one.)
I still don’t have a concise or great theoretical explanation for why totalization seems so predictive of ideological damage. I have a lot of reasons for why it seems clearly bad regarding your belief-structure, and some other reasons why it may just be strongly correlated with overreach in ways that aren’t perfectly causal. But without getting into precisely why, I think it’s an important lens to view the rationalist “community” in.
So I think one of the main things I want to see less of in the rationalist/EA “communities” is totalization.
This has a billion object-level points, most of which will be left as an exercise to the reader:
Don’t proselytize EA to high schoolers. Don’t proselytize other crazy ideologies without guardrails to young people. Only do that after your ideology has proven to make a healthy community with normal levels of burnout/psychosis. I think we can get there in a few years, but I don’t think we’re there yet. It just actually takes time to evolve the right memes, unfortunately.
To repeat the perennial criticism… it makes sense that the rationality community ends up pretty insular, but it seems good for loads of reasons to have more outside contact and ties. I think at the very least, encouraging people to hire outside the community and do hobbies outside the community are good starting points.
I’ve long felt that at parties and social events (in the Bay Area anyways) less time should be spent on model-building and networking and learning, and more time should be spent on anything else. Spending your time networking or learning at parties is fine if those are pretty different than your normal life, but we don’t really have that luxury.
Someone recently tried to tell me they wanted to put all their charitable money into AI safety specifically, because it was their comparative advantage. I disagree with this even on a personal basis with small amounts. Making donations to other causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode. I think paying 10% overhead of charitable money to lower-EV causes is going to be much better for AI safety in the long-run due to seriousness-in-exploration, AND I shouldn’t even have to justify it as such—I should be able to say something like “it’s just unvirtuous to put all eggs in one basket, don’t do it”. I think the old arguments about obviously putting all your money into the highest-EV charity at a given time are similarly wrong.
I love that Lightcone has a bunch of books outside the standard rationalist literature, about Jobs, Bezos, LKY, etc etc.
In general, I don’t like when people try to re-write social mechanisms (I’m fine with tinkering, small experiments, etc). This feels to me like one of the fastest ways to de-stabilize people, as well as the dumbest Chesterton’s fence to take down because of how socializing is in the wheelhouse of cultural gradient descent and not at all remotely in the wheelhouse of theorizing.
I’m much more wary of psychological theorizing, x-rationality, etc due to basically the exact points in the bullet above—your mind is in the wheelhouse of gradient descent, not guided theorizing. I walk this one—I quit my last project in part because of this. Other forms of tinkering-style psychological experimentation or growth are likely more ok. But even “lots of debugging” seems bad here, basically because it gives you too much episteme of how your brain works and not enough techne or metis to balance it out. You end up subtly or not-subtly pushing in all sorts of directions that don’t work, and it causes problems. I think the single biggest improvement to debugging (both for ROI and for health) is if there was a culture of often saying “this one’s hopeless, leave it be” much earlier and explicitly, or saying “yeah almost all of this is effectively-unchangeable”. Going multiple levels down the tree to solve a bug is going too far. It’s too easy to get totalized by the bug-fixing spirit if you regard everything as mutable.
As dumb as jobs are, I’m much more pro-job than I used to be for a bunch of reasons. The core reasons are obv not because of psychosis, but other facets of totalization-escape seems like a major deal.
As dumb as regular schedules, are, ditto. Having things that you repeatedly have to succeed in doing leaves you genuinely much less room for going psychotic. Being nocturnal and such are also offenders in this category.
I’d like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own. E.g. Solstice instead of Christmas seems fine, but also we should have a lot of emphasis on Christmas too? I had a housemate who ran amazing Easter celebrations in Event Horizon that were extremely positive, and I loved that they captured the real spirit of Easter rather than trying to inject the spirit of Rationality into the corpse of Easter to create some animated zombie holiday. In this vein I also love Petrov Day but slightly worry that we focus much less on July 4th or Thanksgiving or other holidays that are more shared with others. I guess maybe I should just be glad we haven’t rationalized those...
Co-dependency and totalizing relationships seem relevant here although not much new to say.
Anna’s crusade for hobbies over the last several years has seemed extremely useful on this topic directly and indirectly.
I got one comment on a draft of this about how someone basically still endorsed years later their totalization after their CFAR workshop. I think this is sort of fine—very excitable and [other characterizations] people can easily become fairly totalized when entering a new world. However, I still think that a culture which totalized them somewhat less would have been better.
Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.
Rationality ought to be totalizing. https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory
Yeah, I think this points at a thing that bothers me about Connor’s list, even though it seems clear to me that Connor’s advice should be “in the mix”.
Some imperfect ways of trying to point at the thing:
1. ‘Playing video games all the time even though this doesn’t feel deeply fulfilling or productive’ is bad. ‘Forcing yourself to never have fun and thereby burning out’ is also bad. Outside of the most extreme examples, it can be hard to figure out exactly where to draw the line and what’s healthy, what conduces to flourishing, etc. But just tracking these as two important failure modes, without assuming one of these error categories is universally better than the other, can help.
(I feel like “flourishing” is a better word than “healthy” here, because it’s more… I want to say, “transhumanist”? Acknowledges that life is about achieving good things, not just cautiously avoiding bad things?)
2. I feel like a lot of Connor’s phrasings, taken fully seriously, almost risk… totalizing in the opposite direction? Insofar as that’s a thing. And totalizing toward complacency, mainstream-conformity, and non-ambition leads to sad, soft, quiet failure modes, the absence of many good things; whereas totalizing in the opposite direction leads to louder, more Reddit-viral failure modes; so there is a large risk that we’ll be less able to course-correct if we go too far in the ‘stability over innovation’ direction.
3. I feel like the Connor list would be a large overcorrection for most people, since this advice doesn’t build in a way to tell whether you’re going too far in this direction, and most people aren’t at high risk for psychosis/mania/etc.
I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.
It seems clear to me that there are ways of doing the Rationality Community better, but I guess I don’t currently buy that this particular problem is so… core? Or so universal?
What specifically is our evidence that in absolute terms, psychosis-adjacent patterns are a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns, etc., etc.?
4. Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.
I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.
This paragraph especially raised this worry for me:
I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?
But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.
I really worry about a possible future version of the community that treats ‘getting very excited about rationality and wanting to push it to new heights’ as childishly naive, old hat / obviously could never work, or (worse!) as a clear sign of an “unhealthy” mind.
(Unless, like, we actually reach the point of confidence that we’ve run out of big ways to improve our rationality. If we run out of improvements, then I want to believe we’ve run out of improvements. But I don’t think that’s our situation today.)
5. There’s such a thing as being too incautious, adventurous, and experimental; there’s also such a thing as being too cautious and unadventurous, and insufficiently experimental. I actually think that the rationalists have a lot of both problems, rather than things being heavily stacked in the ‘too incautious’ category. (Though maybe this is because I interact with a different subset of rationalists.)
An idea in this space that makes me feel excited rather than worried, is Anna’s description of a “Center for Bridging between Common Sense and Singularity Scenarios” and her examples and proposals in Reality-Revealing and Reality-Masking Puzzles.
I’m excited about the idea of figuring out how to make a more “grounded” rationalist community, one that treats all the crazy x-risk, transhumanism, Bayes, etc. stuff as “just more normality” (or something like that). But I’m more wary of the thing you’re pointing at, which feels more to me like “giving up on the weird stuff” or “trying to build a weirdness-free compartment in your mind” than like trying to integrate the weird rationalist stuff into being a human being.
I think this is also a case of ‘reverse all advice you hear’. No one is at the optimum on most dimensions, so a lot of people will benefit from the advice ‘be more X’ and a lot of people will benefit from the advice ‘be less X’. I’m guessing your (Connor’s) advice applies perfectly to lots of people, but for me...
Even after working at MIRI and living in the Bay for eight years, I don’t have any close rationalist friends who I talk to (e.g.) once a week, and that makes me sad.
I have non-rationalist friends who I do lots of stuff with, but in those interactions I mostly don’t feel like I can fully be ‘me’, because most of the things I’m thinking about moment-to-moment and most of the things that feel deeply important to me don’t fit the mental schemas non-rationalists round things off to. I end up feeling like I have to either play-act at fitting a more normal role, or spend almost all my leisure time bridging inferential gap after inferential gap. (And no, self-modifying to better fit mainstream schemas does not appeal to me!)
I’d love to go to these parties you’re complaining about that are focused on “model-building and… learning”!
Actually, the thing I want is more extreme than that: I’d love to go to more ‘let’s do CFAR-workshop-style stuff together’ or ‘let’s talk about existential risk’ parties.
I think the personal problem I’ve had is the opposite of the one you’re pointing at: I feel like (for my idiosyncratic preferences) there’s usually not enough social affordance to talk about “real stuff” at rationalist-hosted parties, versus talking about pleasantries. This makes me feel like I’m playing a role / reading a script, which I find draining and a little soul-crushing.
In contrast, events where I don’t feel like there’s a ‘pretend to be normal’ expectation (and where I can talk about my bizarre actual goals and problems) feel very freeing and fulfilling to me, and like they’re feeding me nutrients I’ve been low on rather than empty calories.
“Making donations to other [lower-EV] causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode”
OK, but what about the skills of ‘doing the thing you think is highest-EV’, ‘trying to figure out what the highest-EV thing is’, or ‘developing deeper and more specialized knowledge on the highest-EV things (vs. flitting between topics)’? I feel like those are pretty important skills too, and more neglected by the world at large; and they have the advantage of being good actions on their own terms, rather than relying on a speculative theory that says this might help me do higher-EV things later.
I feel especially excited about trying to come up with new projects that might be extremely-high-EV, rather than just evaluating existing stuff.
I again feel like in my own life, I don’t have enough naive EA conversations about humanity’s big Hamming problems / bottlenecks. (Which is presumably mostly my fault! Certainly it’s up to me to fix this stuff. But if the community were uniformly bad in the opposite direction, then I wouldn’t expect to be able to have this problem.)
“I’d like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own.”
Rationalist solstice is a real holiday! 😠
I went to a mostly-unironic rationalist July 4 party that I liked a lot, which updates me toward your view. But I think I still mostly come down on the opposite side of this tradeoff, if I were only optimizing for my own happiness.
‘No Christmas’ feels sad and cut-off-from-mainstream-culture to me, but ‘pantomiming Christmas without endorsing its values or virtues’ feels empty to me. “Rationalizing” Christmas feels like the perfect approach here (for me personally): make a new holiday that’s about things I actually care about and value, that draws out neglected aspects of Christmas (or precursor holidays like Saturnalia). I’d love to attend a rationalist seder, a rationalist Easter, a rationalist Chanukkah, etc. (Where ‘rationalist’ refers to changing the traditions themselves, not just ‘a bunch of rationalists celebrating together in a way that studiously tries to avoid any acknowledgment of anything weird about us’.)
I think that many people (and I have not decided yet if I am one such) may respond to this with “one man’s modus tollens is another’s modus ponens”.
That is, one might read things like this:
… and say: “yes, exactly, that’s the point”.
Or, one might read this:
… and say: “yes, exactly, and that’s bad”.
(Does that seem absurd to you? But consider that one might not take at face value the notion that the change in response to new information is warranted, that the “long-term values” have been properly apprehended—or even real, instead of confabulated; etc.)
One might read this:
… and say: “yes, just so, and this is good, because many of the ways in which this community is different from a randomly selected community are bad”.
But is this unhealthy and unreasonable, or is it actually prudent? In other words—to continue the previous pattern—one might read this:
… and say: “yes, we have erred much too far in the opposite direction, this is precisely a good change to make”.
We can put things in this way: you are saying, essentially, that Connor’s criticisms and recommendations indicate changes that would undermine the essence of the rationalist community. But might one not say, in response: “yes, and that’s the point, because the rationalist community is fundamentally a bad idea and does more harm than good by existing”? (Note that this is different from saying that rationality, either as a meme or as a personal principle, is bad or harmful somehow.)
Yeah, I disagree with that view.
To keep track of the discussion so far, it seems like there are at least three dimensions of disagreement:
1. Mainstream vs. Rationalists Cage Match
1A. Overall, the rationality community is way better than mainstream society.
1B. The rationality community is about as good as mainstream society.
1C. The rationality community is way worse than mainstream society.
My model is that I, Connor, Anna, and Vassar agree with 1A, and hypothetical-Said-commenter agrees with 1C. (The rationalists are pretty weird, so it makes sense that 1B would be a less common view.)
2. Psychoticism vs. Anti-Psychoticism
2A. The rationality community has a big, highly tractable problem: it’s way too high on ‘broadly psychoticism-adjacent characteristics’.
2B. The rationality community has a big, highly tractable problem: it’s way too low on those characteristics.
2C. The rationality community is basically fine on this metric. Like, we should be more cautious around drugs, but aside from drug use there isn’t a big clear thing it makes sense for most community members to change here.
My model is that Connor, Anna, and hypothetical-Said-commenter endorse 2A, Vassar endorses 2B, and I currently endorse 2C. (I think there are problems here, but more like ‘some community members are immunocompromised and need special protections’, less like ‘there’s an obesity epidemic ravaging the community’.)
Actually, I feel a bit confused about Anna’s view here, since she seems very critical of mainstream society’s (low-psychoticism?) culture, but she also seems to think the rationalist community is causing lots of unnecessary harm by destabilizing community members, encouraging overly-rapid changes of belief and behavior, etc.
If I had to speculate (probably very wrongly) about Anna’s view here, maybe it’s that there’s a third path where you take ideas incredibly seriously, but otherwise are very low-psychoticism and very ‘grounded’?
The mental image that comes to mind for me is a 60-year-old rural east coast libertarian with a very ‘get off my lawn, you stupid kids’ perspective on mainstream culture. Relatively independent, without being devoid of culture/tradition/community; takes her own ideology very seriously, and doesn’t compromise with the mainstream, Modesty-style; but also is very solid, stable, and habit-based, and doesn’t constantly go off and do wild things just because someone tossed the idea out there.
(My question would then be whether you can have all those things plus rationality, or whether the rationality would inherently ruin it because you keep having to update all your beliefs, including your beliefs about your core identity and values. Also, whether this is anything remotely like what Anna or anyone else would advocate?)
3. Rationality Community: Good or Bad?
There are various ways to operationalize this, but I’ll go with:
3A. The rationality community is doing amazing. There isn’t much to improve on. We’re at least as cool as Dath Ilan teenagers, and plausibly cooler.
3B. The rationality community is doing OK. There’s some medium-sized low-hanging fruit we could grab to realize modest improvements, and some large high-hanging fruit we can build toward over time, but mostly people are being pretty sensible and the norms are fine (somewhere between “meh” and “good”).
3C. The rationality community is doing quite poorly. There’s large, known low-hanging fruit we could use to easily transform the community into a way way better (happier, more effective, etc.) entity.
3D. The rationality community is irredeemably bad, isn’t doing useful stuff, should dissolve, etc.
My model is that I endorse 3B (‘we’re doing OK’); Connor, Anna, and Vassar endorse 3C (‘we’re doing quite poorly’); and hypothetical-Said-commenter endorses 3D.
This maps pretty well onto people’s views-as-modeled-by-me in question 2, though you could obviously think psychoticism isn’t a big rationalist problem while also thinking there are other huge specific problems / low-hanging fruit for the rationalists.
I guess I’m pretty sympathetic to 3C. Maybe I’d endorse 3C instead in a different mood. If I had to guess at the big thing rationalists are failing at, it would probably be ‘not enough vulnerability / honesty / Hamming-ness’ and/or ‘not enough dakka / follow-through / commitment’?
I probably completely mangled some of y’all’s views, so please correct me here.
A lot of the comments in response to Connor’s point are turning this into a single axis with ‘mainstream norms’ at one end and ‘weird/DIY norms’ at the other and playing tug-of-war along it, but I actually think the thing is way more nuanced than that.
Proposal:
Investigate the phenomenon of totalization. Where does it come from, what motivates it, what kinds of people fall into it… To what extent is it coming from external vs internal pressure? Are there ‘good’ kinds of totalizing and ‘bad’ kinds?
Among people who totalize, what kinds of vulnerabilities do they experience as a result? Do they get exploited more by bad actors? Do they make common sense mistakes? Etc.
I am willing to bet there is a ‘good’ kind of totalizing and a ‘bad’ kind. And I think my comment about elitism was one of the bad kinds. And I think it’s not that hard to tell which is which? I think it’s hard to tell ‘from the inside’ but I… think I could tell from the outside with enough observation and asking them questions?
A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (underspecified term here, I don’t want to unpack rn), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy.
I would test that hypothesis, if it were my project. Others may have different hypotheses.
I want to note that the view / reasoning given in my comment applies (or could apply) quite a bit more broadly than the specific “psychoticism” issue (and indeed I took Connor’s top-level comment to be aimed more broadly than that). (I don’t know, actually, that I have much to say about that specific issue, beyond what I’ve already said elsethread here.)
I do like the “rural east coast libertarian” image. (As far as “can you have that and also rationality” question, well, why not? But perhaps the better question is “can you have that and Bay Area rationalist culture”—to which the answer might be, “why would you want to?”)
(I would not take this modus tollens, I don’t think the “community” is even close to fundamentally bad, I just think some serious reforms are in order for some of the culture that we let younger people build here.)
Indeed, I did not suspect that you would—but (I conjecture?) you also do not agree with Rob’s characterizations of the consequences of your points. It’s someone who agrees with Rob’s descriptive characterization, but opposes his normative views on the community, who would take the other logical branch here.
> a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns
Well, Connor’s list would probably help with most of these as well. (Not that I disagree with your point.)
But the “community” should not be totalizing.
(Also, I think rationality should still be less totalizing than many people take it to be, because a lot of people replace common sense with rationality. Instead, one should totalize oneself very slowly, over years, watching for all sorts of missteps and mistakes, and merging one’s past life with one’s new life. Sure, rationality will eventually pervade your thinking, but that doesn’t mean that at age 22 you throw out all of society’s wisdom and roll your own.)
Reservationism is the proper antidote to the (prematurely) totalizing nature of rationality.
That is: take whatever rationality tells you, and judge it with your own existing common sense, practical reason, and understanding of the world. Reject whatever seems to you to be unreasonable. Take on whatever seems to you to be right and proper. Excise or replace existing parts of your epistemology and worldview only when it genuinely seems to you that those parts are dysfunctional or incorrect, regardless of what the rationality you encounter is telling you about them.
(Don’t take this quick summary as a substitute for reading the linked essay; read it yourself, judge it for yourself.)
Note, by the way, that rationality—as taught in the Sequences—already recommends this! If anyone fails to approach the practice of rationality in this proper way, they are failing to do that which we have explicitly been told to do! If your rationality is “prematurely totalizing”, then you’re doing it wrong.
Consider also how many times we have heard a version of this: “When I read the Sequences, the ideas found therein seemed so obvious—like they’d put into words things I’ve always somehow known or thought, but had never been able to formulate so clearly and concisely!”. This is not a coincidence! If you learn of a “rationality”-related idea, and it seems to you to be obviously correct, such that you find that not only is it obvious that you should integrate it into your worldview, but indeed that you’ve already integrated it (so naturally and perfectly does it fit)—well, good! But if you encounter an idea that is strange, and counterintuitive, then examine it well, before you rush to integrate it; examine it with your existing reason—which will necessarily include all the “rationality” that you have already carefully and prudently integrated.
(And this, too, we have already been told.)
I don’t think there’s actually a contradiction between Eliezer’s post and Connor’s comment. But maybe you should bring up specifics if you think there is one.
I like everything you say here. Hear hear.
This resonates with me as someone who wanted to ‘totalize’ myself when I lived in the Bay Area rationalist scene. One hint as to why: I have felt, from a young age, compelled towards being one of the elite. I don’t think this is the case for most rationalists or anything, but I’m noting my own personal motivation in case it helps anyone introspect on their own motivations more readily.
It was important for my identity / ego to be “one of the top / best people” and to associate with the best people. I had a natural way of dismissing anyone I thought was “below” my threshold of worthiness—I basically “didn’t think about them” and had no room in my brain for them. (I recognize the problematic-ness of that now? Like these kinds of thoughts lead to genocide, exploitation, runaway power, slavery, and a bunch of other horrible things. As such, I now find this ‘way of seeing’ morally repulsive.)
The whole rationality game was of egoic interest to me, because it seemed like a clear and even correct way of distinguishing the elite from the non-elite. Obviously Eliezer and Anna and others were just better than other people and better at thinking which is hugely important obviously and AI risk is something that most people don’t take seriously oh my god what is wrong with most people ahhh we’re all gonna die. (I didn’t really think thoughts like this or feel this way. But it would take more for me to give an accurate representation so I settled for a caricature. I hope you’re more charitable to your own insides.)
When a foolish ego wants something like this, it basically does everything it can to immerse itself in the thing, and while that’s very motivating and good for learning, it is compelled towards totalization and will make foolish sacrifices. In the same way, perhaps, that young pretty Koreans sacrifice for the sake of becoming idols in the kpop world.
MAPLE is like rehab for ego addicts. I find myself visiting my parents each year (after not having spoken to them for a decade) and valuing ‘just spending time together with people’. Like going bowling for the sake of togetherness, more than for the sake of bowling. And people here have a plethora of hobbies like woodworking, playing music, juggling, mushroom picking, baking, etc. Some people here want to totalize but are largely frustrated by their inability to do so to the extent they want to, and I think it’s an unhealthy addiction pattern that they haven’t figured out how to break. :l
I note that the things which you’re resonating with, which Connor proposes and which you expect would have helped you, or helped protect you...
...protect you from things which were not problems for me.
Which is not to say that those things are bad. Like, saving people from problems they have (that I do not have) sounds good to me.
But it does mean that there is [a good thing] for at least [some people] already, and while it may be right to trade off against that, I would want us to be eyes-open that it might be a tradeoff, rather than assuming that sliding in the Connor-Unreal direction is strictly and costlessly good.
Hmm, I want to point out I did not say anything about what I expected would have helped me or helped ‘protect’ me. I don’t see anything on that in my comment…
I also don’t think it’d be good for me to be saved from my problems...? but maybe I’m misunderstanding what you meant.
I definitely like Connor’s post. My “hear hear” was a kind of friendly encouragement for him speaking to something that felt real. I like the totalization concept. Was a good comment imo.
I do not particularly endorse his proposal… It seems like a non-starter. A better proposal might be to run some workshops or something that try to investigate this ‘totalization’ phenomenon in the community and what’s going on with it. That sounds fun! I’d totally be into doing this. Prob can’t though.
I agree with most of this point. I’ve added an ETA to the original to reflect this. My quibble (that I think is actually important) is that I think it should be less of a tradeoff and more of an {each person does the thing that is right for them}.
Endorsed, but that means when we’re talking about setting group norms and community standards, what we’re really shooting for is stuff that makes all the options available to everyone, and which helps people figure out what would be good for them as individuals.
Where one attractor near what you were proposing (i.e. not what you were proposing but what people might hear in your proposal, or what your proposal might amount to in practice) is “new way good, old way bad.”
Instead of “old way insufficient, new way more all-encompassing and cosmopolitan.”
Yeah, ideally would have lampshaded this more. My bad.
The part that gets extra complex is that I personally think ~2/3+ of people who say totalization is fine for them are in fact wrong and are missing out on tons of subtle things that you don’t notice until longer-term. But obviously the most likely thing is that I’m wrong about this. Hard to tell either way. I’d like to point this out more somehow so I can find out, but I’d sort of hoped my original comment would make things click for people without further time. I suppose I’ll have to think about how to broach this further.