The Rationality community [...] has been the main focus of Rationality [...] rationality’s most famous infohazard [...] join the tribe [bolding mine]
I agree that explicit reasoning is powerful and that the lesswrong.com website has hosted a lot of useful information about COVID-19, but this self-congratulatory reification of “the community”—identifying “rationality” (!!) with this particular cluster of people who read each other’s blogs—is super toxic. (Talk about “bucket errors”!) Our little robot cult does not have a monopoly on reason itself!
the young rationalist must navigate strange status hierarchies and bewildering memeplexes. I’ve seen many people bounce off the Rationalist community over those two things.
Great! If bright young people read and understand the Sequences and go on to apply the core ideas (Bayesian reasoning, belief as anticipated experience, the real reasons being the ones that compute your decision, &c.) somewhere else, far away from the idiosyncratic status hierarchy of our idiosyncratic robot cult, that’s a good thing. Because it is about the ideas, not just roping in more warm bodies to join the tribe, right?!
Rationality has benefits for the individual, but there are additional enormous benefits that can be reaped if you have many people doing rationality together, building on each other’s ideas. Furthermore, ideally this group of people should, besides the sum of its individuals, also have a set of norms that are conducive to collective truth-seeking. Moreover, the relationships between them shouldn’t be purely impersonal and intellectual. Any group endeavor benefits from emotional connections and mutual support. Why? First, to be capable of working on anything you need to be able to satisfy your other human needs. Second, emotional connection is the machinery we have for building trust and cooperation, and that’s something no amount of rationality can replace, as long as we’re humans.
Put all of those things together and you get a “tribe”. Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn’t disbanding the tribe; that’s throwing out the baby with the bathwater. The solution is doing the hard work of establishing norms that make the tribe productive and beneficial.
Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn’t disbanding the tribe; that’s throwing out the baby with the bathwater.
I think we need to be really careful with this, and the dangers of becoming a “tribe” shouldn’t be understated w.r.t. our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out explicit reasoning, since the object-level beliefs and justifications of the two groups will be almost identical. Likewise, it becomes much easier to slip back into the social-cognition mode of thought while still telling yourself that you’re still reasoning.
IMO, if we don’t take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence, or status. Initially everything looks good and everyone around you seems to be making their way along The Path™. But slowly you build up a mass of people who all agree with you on the object level but who acquired their conclusions and justifications by following social cues. Once the group reaches critical mass, you might get into a disagreement with a high-status individual or group, and instead of reason being used and the chips falling where they may, standard human tribal coordination mechanisms are used to strip you of your power and status. Then you’re expelled from the tribe. From there, whatever mission the tribe had is quickly lost to the usual status games.
Personally, I haven’t seen much discussion of mechanisms for preventing this and other failure modes, so I’m skeptical of associating myself or supporting any IRL “rationalist community/village”.
The problems you discuss are real, but I don’t understand what alternative you’re defending. The choice is not between having society and not having society. You are going to be part of some society anyway. So, isn’t it better if it’s a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.
In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It’s far from perfect, but it’s quite good compared to most other communities IMO. In fact, I think the community is one of the main benefits of LessWrong. Having such a community makes it much easier to adopt rational reasoning without becoming completely isolated due to your idiosyncratic beliefs.
So full disclosure, I’m on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I’ve picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something that you go to meetups for, and to take advantage of society’s existing infrastructure for social support and other things. This is not to say we shouldn’t have strong norms; the comment box I’m typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with less in common other than the shared desire for whatever it is we happen to be optimizing for. This is in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.
The reason I say this is that, as I understood it, the rationalist community (at least the truth-seeking side) came out of a generally online culture, where disagreement is (relatively) cheap, and individuals in the group don’t have much obvious leverage over one another. That environment seems to have been really good for allowing people to explore and exchange weird ideas, and to follow logic and reason wherever they happen to go. It also allows people to more easily “tell it like it is”.
When you create a situation where a group of rats becomes interdependent socially or economically, most of what I’ve read and seen indicates that you can gain quite a bit in terms of quality of life and group effectiveness, but I feel it also opens up the door to the kind of “catastrophic social failure” I’d mentioned earlier. Doubly so if the community starts to build up social or economic capital that other agents who don’t share the same goals might be interested in.
I think you are both right about important things, and the problem is whether we can design a community that can draw benefits of mutual support in real life, while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.
We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more like the Dragon Army. Others will want something in between, and probably with emphasis on different things, e.g. more about projects and less about social interaction versus more about social interaction and less about projects. (Here, social interaction is my shortcut for solving everyday problems faced by individual people where they are now, as opposed to having a coherent outside-oriented project.)
But with different levels of involvement, there is a risk that people on some level would declare people on a different level to be “not true rationalists”. (Those with low involvement are not true rationalists, because they only want to procrastinate online, instead of becoming stronger and optimizing their lives. Those with high involvement are not true rationalists, because they care less about having correct knowledge, and more about belonging to a tribe and having group sex.) And if people around you prefer a different level, there will be social pressure to also choose a level that is not comfortable for you.
My vision would be a community where multiple levels of involvement are acceptable and all are considered normal. I believe it is possible in principle, because e.g. the Catholic Church is kinda like this: you have levels of involvement starting with “remembers a few memes, and visits the church on Christmas if the weather is nice” and ending with “spends the whole life isolated from the world, praying and debating esoteric topics”. Except for us it would go from “heard something about biases and how map is not the territory, and visits a LW/SSC meetup once in a while” to “lives in a group house and works full-time on preventing robot apocalypse”.
Plus, there are people for whom just having a group boundary as such, no matter how small, even something as vague as “identifies as a ‘rationalist’, whatever that word might mean”, is already too much. They could actually be a majority of LW readers, who knows; they are probably overrepresented among lurkers. But even for them, the website will continue existing approximately as it is now; and if some of it disappears, there are always other places on the internet.
tl;dr—we need to somehow have stronger rationalist groups for those who want them, without creating social pressure on those who don’t
This feels like an incredibly important point: the pressures when “the rationalists” are friends you debate with online vs. when they are a close community you are dependent on.
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
Second, specifically regarding Crocker’s rules, I’m not their fan at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
I will probably never have the chance to live in a rationalist village, so for me the question is mostly academic. To me, a rationalist village sounds like a good idea in expectation (for some possible executions), but the uncertainty is great. However, why not experiment? Some rationalists can try having their own village. Many others wouldn’t join them anyway. We would see what comes out of it, and learn.
I’m breaking this into a separate thread since I think it’s a separate topic.
Second, specifically regarding Crocker’s rules, I’m not their fan at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
So I disagree. Obviously you can’t impose Crocker’s rules on others, but I find it much easier and far less mentally taxing to communicate with people I don’t expect to get offended. Likewise, I’ve gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I’m dropping the ball, and I don’t think they would have bothered otherwise, since there was no obvious way to be tactful about it. I also think that there are individuals out there who are both smart and easily offended, and with those individuals tact isn’t really an option, as they can transparently see what you’re trying to say and will take issue with it anyway.
I can see the value of “getting offended” when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, “getting offended” strikes me as counterproductive, and I do my best to operate in a mode where I don’t do it, which is basically Crocker’s rules.
This might be another difference of personalities, maybe Crocker’s rules make sense for some people.
The problem is, different people have conflicting interests. If we all had the same utility function then, sure, communication would be only about conveying factual information. But we don’t. In order to cooperate, we need not only to share information, but also reassure each other we are trustworthy and not planning to defect. If someone criticizes me in a way that disregards tact, it leads me to suspect that eir agenda is not helping me but undermining my status in the group.
You can say we shouldn’t do that, that’s “simulacra” and simulacra=bad. But the game theory is real, and you can’t just magic it away by wishing it were different. You can try just taking on faith that everyone is your ally, but then you’ll get exploited by defectors. Or you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
Now, obviously you can go too far in the other direction and stop conveying meaningful criticism, or start dancing around facts that need to be faced. That’s also bad. But the optimum is in the middle, at least for most people.
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is that the underlying truth is irrelevant. Without extra social machinery, there’s no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
This might be another difference of personalities...you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
I think that’s a big part of it. Especially IRL, I’ve taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker’s rules. A good combination of integrity + documentation + choice of allies makes it difficult to legitimately criticize someone. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won’t judge me regardless of how they see me. Then, when criticism is levelled against me and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth-seeking, provided there isn’t a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable for taking actions that promote each other’s values.
The other thing I’m doing, which is both far easier to pull off and way more effective, is just being anonymous. When the judging party can’t retaliate because they don’t know you IRL, and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want, and the only consequence is that you’ll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
So my objection definitely applies much more to a village than less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
Moreover, the relationships between them shouldn’t be purely impersonal and intellectual. Any group endeavour benefits from emotional connections and mutual support.
This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don’t think it’s a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is a good first step, but I’d definitely be in the group that wouldn’t join unless explicit thought had been put into dealing with my objections, or the village had been running successfully for long enough to convince me I was wrong.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don’t know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don’t know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you.
Hmm, at this point it might be just a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get food poisoning”. “Dual identity” doesn’t work for me; I feel that social connections are meaningless if I can’t be upfront about myself.
A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
I guess? But in any case there will be many subnetworks in the network. Even if everyone adopts the “village” model, there will be many such villages.
Hmm, at this point it might be just a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get food poisoning”. “Dual identity” doesn’t work for me; I feel that social connections are meaningless if I can’t be upfront about myself.
That’s probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don’t completely trust, and for most practical intents and purposes I’m quite a bit more “myself” online than IRL.
But in any case there will be many subnetworks in the network. Even if everyone adopts the “village” model, there will be many such villages.
I think that’s easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another.
Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there’s a mechanism that prevents them from seeing each other as out-groups, or from attacking each other when they do. This is especially important since one group viewing the other as its out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.
I think the point is to vigilantly keep track of the distinction between skills and tribes, to avoid any ambiguity in use of these different and opposed things, to never mention one in place of the other.
Skills and tribes are certainly different things, but I’m not sure why they are opposed things. We should keep track of the distinction and at the same time continue building a beneficial tribe. I agree that in terms of terminology, “rationalist” is a terrible name for “member of the LessWrong-ish community” and we should use something else (e.g. LessWronger).
They are opposed in the sense that using one in place of the other causes trouble. For example, insisting on meticulous observation of skills would be annoying and sometimes counterproductive in a tribe, and letting tribal dynamics dictate how skills are developed would corrode quality.
A tribe shouldn’t insist on a meticulous observation of skills, broadly speaking, but it should impose norms on e.g. which rhetorical moves are encouraged/discouraged in a discussion, and it should create positive incentives for the meticulous observation of skills.
As to letting tribal dynamics dictate how skills are developed, I think we don’t really have a choice there. People are social animals, and everything they do and think is strongly affected by the society they are in. The only choice is trying to shape this society and those dynamics to make them beneficial rather than detrimental.
This might be possible, but should be specific to particular groups, unless there is a recipe for reproducing the norms. It’s very easy for any set of beneficial norms to be trampled by tribal dynamics. The standard story is loss of fidelity, with people who care about the mission somewhat less, or who are not as capable of incarnating its purpose, coming to dominate a movement. At that point, observation of the beneficial norms turns into a cargo cult.
Thus the phenomenon of tribes seeks to destroy the phenomenon of skills. This applies to any nuanced purpose, even when it’s the founding purpose of a tribe. Survival of a purpose requires an explanation, which won’t be generic tribal dynamics or a set of norms helpful in the short term.
everything they do and think is strongly affected by the society
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to the pursuit of those same skills won’t affect this activity strongly.
...Thus the phenomenon of tribes seeks to destroy the phenomenon of skills
I don’t think it’s “the phenomenon of tribes”, I think it’s a phenomenon of tribes. Humans virtually always occupy one tribe or another, so it makes no more sense to say that “tribes destroy skills” than, for example, “DNA destroys skills”. There is no tribeless counterfactual we can compare to.
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to the pursuit of those same skills won’t affect this activity strongly.
I think any tribe affects how you pursue skills by determining which skills are rewarded (or punished), and which skills you have room to exercise.
It is definitely the case, especially in the EA community, that I’m surrounded by a lot more people who express alliance via signaling and make nontrivial commitments, but for whom I’ve not seen real evidence that they understand how to think for themselves or take right action without a high-status person telling them to do it.
That said I don’t find it too hard myself to distinguish between such people, and people where I can say “Yeah, I’ve seen them do real things”.
Music isn’t the sole domain of people that are particularly interested in it either, but it doesn’t seem “super toxic” that they might consider themselves to be, let alone refer to themselves as, ‘music people’. It seems like a natural shorthand, given that that is the topic or subject around which they’ve organized.
And yes, it is – mostly – about the ideas. I’ve only been to a few meetups and generally prefer to read along and occasionally comment, but I’m open to ‘joining the tribe’ (or some ‘band’ close by) too, because it is nice to be able to socialize with people that think similarly and about the same topics.
The examples in the post about people bouncing off the community also seemed to be cases where they were bouncing off the ideas too.
The point is, the analogy fails because there is no “music people tribe” with “music meetups” organized at “MoreMusical.com”. There is no Eliezer Yudkowsky of the “music tribe” (at most, everyone who appreciates Western classical music has heard of Beethoven, maybe), nor the idea that people familiar with the main ideas of music have learned them from a small handful of “music sequences” and interconnected resources that reference each other.
Picking at one particular point in the OP: there are no weird sexual dynamics of music (some localized groups or cultures might have them, e.g. one could talk about sexual culture in rock music in general, and maybe the dynamics at a particular scene, but they are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).
Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, and untold numbers of people with varying expertise in playing instruments who apply it for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.
It would be good for rationality to explicitly attempt to become like music (or scientific thinking, or mathematics, or such), because then the issue perceived by some of being an insular tribe would simply not exist.
Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?
The point is, the analogy fails because there is no “music people tribe” with “music meetups” organized at “MoreMusical.com”. There is no Eliezer Yudkowsky of the “music tribe” (at most, everyone who appreciates Western classical music has heard of Beethoven, maybe) …
Yes, there is no single ‘music people tribe’ but there are very much tribes for specific music (sub-)genres. (Music is huge!)
But as you point out, there are people of ‘similar’ stature in music generally; really much greater stature overall. And ‘music’ is much much much older than ‘rationality’. (Music is older than history!) And I’d guess it’s inherently more interesting to many many more people too.
… nor the idea that people familiar with the main ideas of music have learned them from a small handful of “music sequences” and interconnected resources that reference each other.
I don’t consider ‘the sequences’ or LW to be essential, especially now. The same insights are available from a lot of sources already and this should be more true in the future. It was, and perhaps is, a really good intro to what wasn’t previously a particularly coherent subject.
Actual ‘rationality’ is everywhere. There was just no one persistently pointing at all of the common phenomena, or at least not recently and in a way that’s accessible to (some) ‘laypeople’.
But I wouldn’t be surprised if there is something like a ‘music sequences’, e.g. a standard music textbook. I’d imagine ‘music theory’ or music pedagogy are in fact “interconnected resources that reference each other”.
Again, if it wasn’t already clear, the LW sequences are NOT essential for rationality.
Picking at one particular point in the OP: there are no weird sexual dynamics of music (some localized groups or cultures might have them, e.g. one could talk about sexual culture in rock music in general, and maybe the dynamics at a particular scene, but they are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).
There’s no weird “sexual dynamics” in rationality – based on MY experience. I don’t know why the people that publicly write about that thing must define everyone else that’s part of the overall network. I certainly don’t consider any of it central to rationality.
I don’t even know that “weird sexual dynamics” is a common feature of LW meetups, let alone other ‘rationality’-related associations.
Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, and untold numbers of people with varying expertise in playing instruments who apply it for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.
Rationality, in the LW sense, could be all of these things. At least give it a few hundred years! Music is old.
And no one has a monopoly on rationality. If anything, LW-style rationality is competing with everything else; almost everything else is implicitly claiming to help you either believe truths or act effectively.
It would be good for rationality to explicitly attempt to become like music (or scientific thinking, or mathematics, or such), because then the issue perceived by some of being an insular tribe would simply not exist.
I agree! We should definitely try to become ‘background knowledge’, or at least as diffuse or widespread as mathematics! I think this is already happening, and I thought that was more widely known than it is. I may have assumed that anyone reading my comment knew (or believed) that too.
Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?
I agree! And again, I think this has already happened to an extent. I’m not a part of any rationality ‘community’; not in the sense you’ve described. I think that’s true for most of the people interested in this.
But, in case it’s still not clear, I do NOT think rationality should or must be ‘a single community’.
What I was pointing out is that if there was something named “music club” or you observed someone describe themselves as a ‘music lover’, it wouldn’t be a big deal.
I also wrote that “I’m open to ‘joining the tribe’ (or some ‘band’ close by)”. I meant ‘tribe’ in the sense I think you mean ‘culture’ in “a culture of several communities”. I meant ‘band’ in the sense of some – not the – real-world group of people that at least meet up regularly (and are united by at least a common interest in rationality).
Now I’m wondering where people get the idea that ‘rationality’ is any kind of IRL organization centered around, or run by, Eliezer Yudkowsky. I think there are way more of us that aren’t members of such an organization, beyond being users of this site or readers of ‘the diaspora’.
I do not feel like writing a point by point response, it seems we are in agreement over many issues but maybe not all.
Some paragraph-sized points I want to elaborate on, however:
1 If it is not clear, in my comment I attempted not to argue against your positions in particular. It was more in support of the idea, expressed upthread, that building up too much of an attitude of there being an identifiable “Rationality Tribe” is a net negative.
(1b Negative both for the objective of raising the general societal sanity waterline and for the tribespeople’s own ability at it. Especially I feel the point (can’t find the link to the comment on my phone) about how, in a close-knit society where many opinions obtained by explicit thought are expressed, it can become difficult to disentangle which of my individual opinions I have obtained by my own explicit thought, or agree with because I agree with the logic, and which opinions I am agreeing with because my social mind wants to agree or disagree with some specific individuals or a “group consensus”.)
2 One of the reasons I picked the sexual dynamics is that the OP mentions them in a figure caption as a joke. Nevertheless, it is an indication that, at least in the OP, the Tribe in question is not thought of as existing in, e.g., some abstract idea space, but as a specific group of people living near enough to each other to have sexual dynamics.
3 I find myself disagreeing with the idea that rationality-in-general (in contrast with the LW-originated social group) is a new innovation. From a near-history perspective, the first example that comes to mind: John Allen Paulos published Innumeracy in 1988; I read it as a kid in the ’00s, when I had no internet and LW did not exist, but it tickled the same parts of my brain as many of the ideas about putting numbers on arguments floating around in LW-adjacent thoughtspace. From a long-term perspective, I’d argue that the attempt to improve the human capacity for rational thought is part of the grand scientific project and tradition that goes back to Socrates.
4 I also think that having social groups around common interests is good. I got started in local-area SSC meetups because I was interested in talking with people interested in AI, science, philosophy, and other such things I assumed people reading SSC the blog would be interested in. (Maybe this would be “joining a band” in the metaphor.)
5 Writing and disseminating resources that help with better thinking is a good thing and a worthwhile project. It is also quite natural that like-minded people seek each other’s company, resulting in a community. (Of which there are and can be many kinds: up until the late 20th century, there was an intellectual community of “men of letters” primarily writing letters to each other if they did not live near enough for regular in-person discussion.)
6 The part that seems problematic (and the complaint this comment thread is about) is the point where it looks like the Bay Area community (or some members thereof) treats itself as having a kind of weird cultural or intellectual monopoly over the principles of rationality as the Rationality Community With Capital Letters, whose members tacitly assume that, after learning about Rationality, others would want to join exactly their “tribe”, instead of assuming more pluralistic outcomes.
This brings me back to your analogy that inspired me to claim rationality is not yet like music: some of the people most focused on tribes and communities do not talk in terms of having a music community in the Bay Area, but of The Music Community.
I agree that explicit reasoning is powerful and that the lesswrong.com website has hosted a lot of useful information about COVID-19, but this self-congratulatory reification of “the community”—identifying “rationality” (!!) with this particular cluster of people who read each other’s blogs—is super toxic. (Talk about “bucket errors”!) Our little robot cult does not have a monopoly on reason itself!
Great! If bright young people read and understand the Sequences and go on to apply the core ideas (Bayesian reasoning, belief as anticipated experience, the real reasons being the ones that compute your decision, &c.) somewhere else, far away from the idiosyncratic status hierarchy of our idiosyncratic robot cult, that’s a good thing. Because it is about the ideas, not just roping in more warm bodies to join the tribe, right?!
Rationality has benefits for the individual, but there are additional enormous benefits that can be reaped if you have many people doing rationality together, building on each other’s ideas. Moreover, ideally this group of people should, besides the sum of its individuals, also have a set of norms that are conductive for collective truth-seeking. Moreover, the relationships between them shouldn’t be purely impersonal and intellectual. Any group endeavor benefits from emotional connections and mutual support. Why? First, to be capable of working on anything you need to be able to satisfy your other human needs. Second, emotional connections is the machinery we have for building trust and cooperation, and that’s something no amount of rationality can replace, as long as we’re humans.
Put all of those things together and you get a “tribe”. Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn’t disbanding the tribe, that’s throwing away the baby with the bathwater. The solution is doing the hard work of establishing norms that make the tribe productive and beneficial.
I think we need to be really careful with this and the dangers of becoming a “tribe” shouldn’t be understated w.r.t our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out the explicit reason, since the object level beliefs and their justifications of those doing social cognition and those using explicit reason will be almost identical. Likewise, it becomes much easier to slip back into the social cognition mode of thought while still telling yourself that your still reasoning.
IMO, if we don’t take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence or status. Initially everything looks good and everyone around you seems to be making their way along The Path^T^M. But slowly you build up a mass of people who all agree with you on the object level but who acquired their conclusions and justifications by following social cues. Once the group reaches critical mass, you might get into a disagreement with a high status individual or group, and instead of using reason and letting the chips fall where they may, standard human tribal coordination mechanisms are used to strip you of your power and status. Then you’re expelled from the tribe. From there whatever mission the tribe had is quickly lost to the usual status games.
Personally, I haven’t seen much discussion of mechanisms for preventing this and other failure modes, so I’m skeptical of associating myself or supporting any IRL “rationalist community/village”.
The problems you discuss are real, but I don’t understand what alternative you’re defending. The choice is not having society or not having society. You are going to be part of some society anyway. So, isn’t it better if it’s a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.
In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It’s far from perfect, but it’s quite good compared to most other communities IMO. In fact, I think the community is one of the main benefits of LessWrong. Having such a community makes it much easier to adopt rational reasoning without becoming completely isolated due to your idiosyncratic beliefs.
So full disclosure, I’m on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I’ve picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something that you go to meetups for, and to take advantage of societies’ existing infrastructure for social support and other things. This is not to say we shouldn’t have strong norms, the comment box I’m typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with less in common other than the shared desire for whatever it is we happen to be optimizing for. This in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.
The reason I say this is because as I understood it, the rationalist (at least the truth seeking side) came out of a generally online culture, where disagreement is (relatively) cheap, and individuals in the group don’t have much obvious leverage over one another. That environment seems to have been really good for allowing people to explore and exchange weird ideas, and to follow logic and reason wherever it happens to go. It also allows people to more easily “tell it like it is”.
When you create a situation where a group of rats become interdependent socially or economically, most of what I’ve read and seen indicates that you can gain quite a bit in terms of quality of life and group effectiveness, but I feel it also opens up the door to the kind of “catastrophic social failure” I’d mentioned earlier. Doubly so if the community starts to build up social or economic capital that other agents who don’t share the same goals might be interested in.
I think you are both right about important things, and the problem is whether we can design a community that can draw benefits of mutual support in real life, while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.
We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more like the Dragon Army. Others will want something in between, and probably with emphasis on different things, e.g. more about projects and less about social interaction versus more about social interaction and less about projects. (Here, social interaction is my shortcut for solving everyday problems faced by individual people where they are now, as opposed to having a coherent outside-oriented project.)
But with different levels of involvement, there is a risk that people on some level would declare people on a different level to be “not true rationalists”. (Those with low involvement are not true rationalists, because they only want to procrastinate online, instead of becoming stronger and optimizing their lives. Those with high involvement are not true rationalists, because they care less about having correct knowledge, and more about belonging to a tribe and having group sex.) And if people around you prefer a different level, there will be social pressure to also choose a level that is not comfortable for you.
My vision would be a community where multiple levels of involvement are acceptable and all are considered normal. I believe it is possible in principle, because e.g. the Catholic Church is kinda like this: you have levels of involvement starting with “remembers a few memes, and visits the church on Christmas if the weather is nice” and ending with “spends the whole life isolated from the world, praying and debating esoteric topics”. Except for us it would go from “heard something about biases and how map is not the territory, and visits a LW/SSC meetup once in a while” to “lives in a group house and works full-time on preventing robot apocalypse”.
Plus, there are people for whom just having a group boundary as such, no matter how small, even something as vague as “identifies as a ‘rationalist’, whatever that word might mean”, is already too much. They can actually be a majority of LW readers, who knows; they are probably overrepresented among lurkers. But even for them, the website will continue existing approximately as they are now; and if some of them disappears, there are always other place on the internet.
tl;dr—we need to somehow have stronger rationalist groups for those who want them, without creating social pressure on those who don’t
This feels like an incredibly important point, the pressures when “the rationalists” are friends your debate with online vs when they are close community you are dependant on.
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
Second, specifically regarding Crocker’s rules, I’m not their fan at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
I will probably never have the chance to live in a rationalist village, so for me the question is mostly academic. To me, a rationalist village sounds like a good idea in expectation (for some possible executions), but the uncertainty is great. However, why not experiment? Some rationalists can try having their own village. Many others wouldn’t join them anyway. We would see what comes out of it, and learn.
I’m breaking this into a separate thread since I think it’s a separate topic.
So I disagree. Obviously you can’t impose Croker’s rules on others, but I find it much easier and far less mentally taxing to communicate with people I don’t expect to get offended. Likewise, I’ve gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I’m dropping the ball, and I don’t think they would have bothered otherwise since there was no obvious way to be tactful about it. I also think that there are individuals out there that are both smart and easily offended, and with those individuals tact isn’t really an option as they can transparently see what you’re trying to say, and will take issue with it anyways.
I can see the value of “getting offended” when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, “getting offended” strikes me as counter productive, and I do my best to operate in a mode where I don’t do it, which is basically Croker’s rules.
This might be another difference of personalities, maybe Crocker’s rules make sense for some people.
The problem is, different people have conflicting interests. If we all had the same utility function then, sure, communication would be only about conveying factual information. But we don’t. In order to cooperate, we need not only to share information, but also reassure each other we are trustworthy and not planning to defect. If someone criticizes me in a way that disregards tact, it leads me to suspect that eir agenda is not helping me but undermining my status in the group.
You can say, we shouldn’t do that, that’s “simulacra” and simulacra=bad. But the game theory is real, and you can’t just magic it away by wishing it would be different. You can try just taking on faith that everyone are your allies, but then you’ll get exploited by defectors. Or you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
Now, obviously you can go too far in the other direction and stop conveying meaningful criticism, or start dancing around facts that need to be faced. That’s also bad. But the optimum is in the middle, at least for most people.
So first of all, I think the dynamics of surrounding offense are tripartite. You have the the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is because the underlying truth is irrelevant. Without extra social machinery, there’s no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
I think that’s a big part of it. Especially IRL, I’ve taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker’s rules. A good combination of integrity+documentation+choice of allies makes it difficult to criticize someone legitimately. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won’t judge me regardless of how they see me. Then when criticism is levelled against myself and or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth seeking, provided there isn’t a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable to take actions that promote each others values.
The other thing I’m doing that is both far easier to pull off and way more effective, is just to be anonymous. When the judging party can’t retaliate because they don’t know you IRL and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want and the only consequence is that you’ll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
So my objection definitely applies much more to a village than less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don’t think it’s a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is good first step, but I’d definitely be in the group that wouldn’t join unless explicit thought had been put in to deal with my objections, or the village had been running successfully for long enough that I become convinced I was wrong.
So in this case individual rationalists can still be undermined by their social networks, but theres a few reasons this is a more robust model. 1) You can have a dual-identity. In my case most of the people I interact with don’t know what a rationalist is, I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when doing so requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., then when you just need to influence in a single social network.
Hmm, at this point it might be just a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get good poisoning”. “Dual identity” doesn’t work for me, I feel that social connections are meaningless if I can’t be upfront about myself.
I guess? But in any case there will many subnetworks in the network. Even if everyone adopt the “village” model, there will be many such villages.
That’s probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don’t completely trust, and for most practical intents and purposes I’m quite a bit more “myself” online than IRL.
I think that’s easier said than done, and that a great effort needs to be made to deal with effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after their best members leave for another:
Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there’s a mechanism that prevents them from seeing each other as out-groups or attacking each other when they do. This is especially important since one group viewing the other as their out-group, but not vice versa can lead to the group with the larger in-group getting exploited.
I think the point is to vigilantly keep track of the distinction between skills and tribes, to avoid any ambiguity in use of these different and opposed things, to never mention one in place of the other.
Skills and tribes are certainly different things, I’m not sure why are they opposed things? We should keep track the distinction and at the same time continue building a beneficial tribe. I agree that in terms of terminology, “rationalist” is a terrible name for “member of the LessWrong-ish community” and we should use something else (e.g. LessWronger).
They are opposed in the sense that using one in place of the other causes trouble. For example, insisting on meticulous observation of skills would be annoying and sometimes counterproductive in a tribe, and letting tribal dynamics dictate how skills are developed would corrode quality.
A tribe shouldn’t insist on a meticulous observation of skills, broadly speaking, but it should impose norms on e.g. which rhetorical moves are encouraged/discouraged in a discussion, and it should create positive incentives for the meticulous observation of skills.
As to letting tribal dynamics dictate how skills are developed, I think we don’t really have a choice there. People are social animals and everything they do and think is strongly effected by the society they are in. The only choice is trying to shape this society and those dynamics to make them beneficial rather than detrimental.
This might be possible, but should be specific to particular groups, unless there is a recipe for reproducing the norms. It’s very easy for any set of beneficial norms to be trampled by tribal dynamics. The standard story is loss of fidelity, with people who care about the mission somewhat less, or who are not as capable of incarnating its purpose, coming to dominate a movement. At that point, observation of the beneficial norms turns into a cargo cult.
Thus the phenomenon of tribes seeks to destroy the phenomenon of skills. This applies to any nuanced purpose, even when it’s the founding purpose of a tribe. Survival of a purpose requires an explanation, which won’t be generic tribal dynamics or a set of norms helpful in the short term.
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to pursuit of same skills won’t affect this activity strongly.
I don’t think it’s “the phenomenon of tribes”, I think it’s a phenomenon of tribes. Humans virtually always occupy one tribe or another, so it makes no more sense to say that “tribes destroy skills” than, for example, “DNA destroys skills”. There is no tribeless counterfactual we can compare to.
I think any tribe affects how you pursue skills by determining which skills are rewarded (or punished), and which skills you have room to exercise.
It is definitely the case, especially in the EA community, that I’m surrounded by a lot more people who express alliance via signaling and are making nontrivial commitments, for whom I’ve not seen real evidence that they understand how to think for themselves or take right action without a high status person telling them to do it.
That said I don’t find it too hard myself to distinguish between such people, and people where I can say “Yeah, I’ve seen them do real things”.
Music isn’t the sole domain of people that are particular interested in it either but it doesn’t seem “super toxic” that they might consider themselves to be, let alone refer to themselves as, ‘music people’. It seems like a natural shorthand given that that is the topic or subject around which they’ve organized.
And yes, it is – mostly – about the ideas. I’ve only been to a few meetups and generally prefer to read along and occassionally comment, but I’m open to ‘joining the tribe’ (or some ‘band’ closeby) too because it is nice to be able to socialize with people that think similarly and about the same topics.
The examples in the post about people bouncing off the community also seemed to be cases where they were bouncing off the ideas too.
The point is, the analogy fails because there is no “music people tribe” with “music meetups” organized at “MoreMusical.com”. There is no Elizier Yudkowsky of “music tribe” (at most, everyone who appreciates the Western classical music has heard about Beethoven maybe) nor idea that people familiar with main ideas of music have learned them from a small handful of “music sequences” and interconnected resources that reference each other.
Picking at one particular point in the OP: there are no weird sexual dynamics of music. Some localized groups or cultures might have them, e.g. one could talk about the sexual culture in rock music in general, and maybe the dynamics of a particular scene, but they are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse.
Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are entire economies of music. There are many academies, even more teachers, and untold numbers of people with varying expertise at playing instruments who apply it for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.
It would be good for rationality to explicitly attempt to become like music (or scientific thinking, or mathematics, or such), because then the issue some perceive, of being an insular tribe, would simply not exist.
Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?
Yes, there is no single ‘music people tribe’, but there are very much tribes for specific music (sub-)genres. (Music is huge!)
But as you point out, there are people of ‘similar’ stature in music generally; really, of much greater stature overall. And ‘music’ is much, much older than ‘rationality’. (Music is older than history!) And I’d guess it’s inherently more interesting to many, many more people too.
I don’t consider ‘the sequences’ or LW to be essential, especially now. The same insights are available from a lot of sources already and this should be more true in the future. It was, and perhaps is, a really good intro to what wasn’t previously a particularly coherent subject.
Actual ‘rationality’ is everywhere. There was just no one persistently pointing at all of the common phenomena, or at least not recently and in a way that’s accessible to (some) ‘laypeople’.
But I wouldn’t be surprised if there is something like ‘music sequences’, e.g. a standard music textbook. I’d imagine ‘music theory’ or music pedagogy are in fact “interconnected resources that reference each other”.
Again, if it wasn’t already clear, the LW sequences are NOT essential for rationality.
There are no weird “sexual dynamics” in rationality – based on MY experience. I don’t know why the people who publicly write about that must define everyone else who’s part of the overall network. I certainly don’t consider any of it central to rationality.
I don’t even know that “weird sexual dynamics” is a common feature of LW meetups, let alone other ‘rationality’-related associations.
Rationality, in the LW sense, could be all of these things. At least give it a few hundred years! Music is old.
And no one has a monopoly on rationality. If anything, LW-style rationality is competing with everything else; almost everything else is implicitly claiming to help you either believe truths or act effectively.
I agree! We should definitely try to become ‘background knowledge’, or at least as diffuse or widespread as mathematics! I think this is already happening, and that it’s more widely known than it was. I may have assumed that anyone reading my comment knew (or believed) that too.
I agree! And again, I think this has already happened to an extent. I’m not a part of any rationality ‘community’; not in the sense you’ve described. I think that’s true for most of the people interested in this.
But, in case it’s still not clear, I do NOT think rationality should or must be ‘a single community’.
What I was pointing out is that if there was something named “music club” or you observed someone describe themselves as a ‘music lover’, it wouldn’t be a big deal.
I also wrote that “I’m open to ‘joining the tribe’ (or some ‘band’ close by)”. I meant ‘tribe’ in the sense I think you mean ‘culture’ in “a culture of several communities”. I meant ‘band’ in the sense of some – not the – real-world group of people who at least meet up regularly (and are united by at least a common interest in rationality).
Now I’m wondering where people get the idea that ‘rationality’ is any kind of IRL organization centered around, or run by, Eliezer Yudkowsky. I think there are way more of us who aren’t members of any such organization, beyond being users of this site or readers of ‘the diaspora’.
I do not feel like writing a point-by-point response; it seems we are in agreement on many issues, but maybe not all.
Some paragraph-sized points I want to elaborate on, however:
1 If it is not clear, in my comment I attempted not to argue against your positions in particular. It was more in support of the idea expressed upthread that building too much of an attitude of there being an identifiable “Rationality Tribe” is a net negative.
(1b Negative both for the objective of raising the general societal sanity waterline and for the tribespeople’s own ability to do so. I especially feel the point (I can’t find the link to the comment on my phone) that in a close-knit society where many opinions obtained by explicit thought are expressed, it can become difficult to disentangle which of my individual opinions I hold because I followed the explicit reasoning myself and agree with the logic, and which I am agreeing with because my social mind wants to agree or disagree with some specific individuals or the “group consensus”.)
2 One of the reasons I picked the sexual dynamics is that the OP mentions it in a figure caption as a joke. Nevertheless, it is an indication that, at least in the OP, the Tribe in question is not thought of as existing in, e.g., some abstract idea space, but as a specific group of people living near each other enough to have sexual dynamics.
3 I find myself disagreeing with the idea that rationality-in-general (in contrast with the LW-originated social group) is a new innovation. From a recent-history perspective, the first example that comes to mind is John Allen Paulos, who published Innumeracy in 1988; I read it as a kid in the ’00s, when I had no internet and LW did not exist, but it tickled the same parts of my brain as many of the ideas about putting numbers on arguments floating around in LW-adjacent thoughtspace. From a long-term-history perspective, I’d argue that the attempt to improve the human ability for rational thought is part of the grand scientific project and tradition that goes back to Socrates.
4 I also think that having social groups around common interests is good. I got started in local-area SSC meetups because I was interested in talking with people interested in AI, science, philosophy, and the other such things I assumed people reading the SSC blog would be interested in. (Maybe this would be “joining a band” in the metaphor.)
5 Writing and disseminating resources that help with better thinking is a good and worthwhile project. It is also quite natural that like-minded people seek each other’s company, resulting in a community. (Of which there are and can be many kinds: up until the late 20th century, there was an intellectual community of “men of letters” primarily writing letters to each other if they did not live near enough for regular in-person discussion.)
6 The part that seems problematic (and the complaint this comment thread is about) is the point where it looks like the Bay Area community (or some members thereof) treats itself as having a kind of weird cultural or intellectual monopoly over the principles of rationality, as the Rationality Community With Capital Letters, whose members tacitly assume that, after learning about Rationality, others would want to join exactly their “tribe”, instead of assuming more pluralistic outcomes.
This brings me back to your analogy that inspired me to claim rationality is not yet like music: some of the people most focused on tribes and communities do not talk in terms of having a music community in the Bay Area, but of The Music Community.