Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn’t disbanding the tribe; that’s throwing the baby out with the bathwater.
I think we need to be really careful with this, and the dangers of becoming a “tribe” shouldn’t be understated w.r.t. our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out explicit reasoning, since the object-level beliefs and justifications of the two groups will be almost identical. Likewise, it becomes much easier to slip back into the social-cognition mode of thought while still telling yourself that you’re still reasoning.
IMO, if we don’t take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence, or status. Initially everything looks good and everyone around you seems to be making their way along The Path™. But slowly you build up a mass of people who all agree with you on the object level but who acquired their conclusions and justifications by following social cues. Once the group reaches critical mass, you might get into a disagreement with a high-status individual or group, and instead of using reason and letting the chips fall where they may, standard human tribal coordination mechanisms are used to strip you of your power and status. Then you’re expelled from the tribe. From there, whatever mission the tribe had is quickly lost to the usual status games.
Personally, I haven’t seen much discussion of mechanisms for preventing this and other failure modes, so I’m skeptical of associating myself with, or supporting, any IRL “rationalist community/village”.
The problems you discuss are real, but I don’t understand what alternative you’re defending. The choice is not between having society and not having society: you are going to be part of some society anyway. So isn’t it better if it’s a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.
In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It’s far from perfect, but it’s quite good compared to most other communities IMO. In fact, I think the community is one of the main benefits of LessWrong. Having such a community makes it much easier to adopt rational reasoning without becoming completely isolated due to your idiosyncratic beliefs.
So, full disclosure: I’m on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I’ve picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something that you go to meetups for, and to take advantage of society’s existing infrastructure for social support and other things. This is not to say we shouldn’t have strong norms; the comment box I’m typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with less in common other than the shared desire for whatever it is we happen to be optimizing for. This is in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.
The reason I say this is because, as I understood it, the rationalist community (at least the truth-seeking side) came out of a generally online culture, where disagreement is (relatively) cheap and individuals in the group don’t have much obvious leverage over one another. That environment seems to have been really good for allowing people to explore and exchange weird ideas, and to follow logic and reason wherever they happen to go. It also allows people to more easily “tell it like it is”.
When you create a situation where a group of rats becomes interdependent socially or economically, most of what I’ve read and seen indicates that you can gain quite a bit in terms of quality of life and group effectiveness, but I feel it also opens the door to the kind of “catastrophic social failure” I mentioned earlier. Doubly so if the community starts to build up social or economic capital that other agents who don’t share the same goals might be interested in.
I think you are both right about important things, and the problem is whether we can design a community that draws the benefits of mutual support in real life while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.
We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more like the Dragon Army. Others will want something in between, and probably with emphasis on different things, e.g. more about projects and less about social interaction versus more about social interaction and less about projects. (Here, social interaction is my shortcut for solving everyday problems faced by individual people where they are now, as opposed to having a coherent outside-oriented project.)
But with different levels of involvement, there is a risk that people on some level would declare people on a different level to be “not true rationalists”. (Those with low involvement are not true rationalists, because they only want to procrastinate online, instead of becoming stronger and optimizing their lives. Those with high involvement are not true rationalists, because they care less about having correct knowledge, and more about belonging to a tribe and having group sex.) And if people around you prefer a different level, there will be social pressure to also choose a level that is not comfortable for you.
My vision would be a community where multiple levels of involvement are acceptable and all are considered normal. I believe it is possible in principle, because e.g. the Catholic Church is kinda like this: you have levels of involvement starting with “remembers a few memes, and visits the church on Christmas if the weather is nice” and ending with “spends the whole life isolated from the world, praying and debating esoteric topics”. Except for us it would go from “heard something about biases and how map is not the territory, and visits a LW/SSC meetup once in a while” to “lives in a group house and works full-time on preventing robot apocalypse”.
Plus, there are people for whom just having a group boundary as such, no matter how small, even something as vague as “identifies as a ‘rationalist’, whatever that word might mean”, is already too much. They could actually be a majority of LW readers, who knows; they are probably overrepresented among lurkers. But even for them, the website will continue existing approximately as it is now; and if some of them disappear, there are always other places on the internet.
tl;dr—we need to somehow have stronger rationalist groups for those who want them, without creating social pressure on those who don’t
This feels like an incredibly important point: the pressures when “the rationalists” are friends you debate with online vs. when they are a close community you are dependent on.
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
Second, specifically regarding Crocker’s rules, I’m not their fan at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
I will probably never have the chance to live in a rationalist village, so for me the question is mostly academic. To me, a rationalist village sounds like a good idea in expectation (for some possible executions), but the uncertainty is great. However, why not experiment? Some rationalists can try having their own village. Many others wouldn’t join them anyway. We would see what comes out of it, and learn.
I’m breaking this into a separate thread since I think it’s a separate topic.
Second, specifically regarding Crocker’s rules, I’m not their fan at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
So I disagree. Obviously you can’t impose Crocker’s rules on others, but I find it much easier and far less mentally taxing to communicate with people I don’t expect to get offended. Likewise, I’ve gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I’m dropping the ball, and I don’t think they would have bothered otherwise, since there was no obvious way to be tactful about it. I also think that there are individuals out there who are both smart and easily offended, and with those individuals tact isn’t really an option, as they can transparently see what you’re trying to say and will take issue with it anyway.
I can see the value of “getting offended” when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, “getting offended” strikes me as counterproductive, and I do my best to operate in a mode where I don’t do it, which is basically Crocker’s rules.
This might be another difference of personalities, maybe Crocker’s rules make sense for some people.
The problem is, different people have conflicting interests. If we all had the same utility function then, sure, communication would be only about conveying factual information. But we don’t. In order to cooperate, we need not only to share information, but also reassure each other we are trustworthy and not planning to defect. If someone criticizes me in a way that disregards tact, it leads me to suspect that eir agenda is not helping me but undermining my status in the group.
You can say we shouldn’t do that, that’s “simulacra” and simulacra=bad. But the game theory is real, and you can’t just magic it away by wishing it were different. You can try just taking on faith that everyone is your ally, but then you’ll get exploited by defectors. Or you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
Now, obviously you can go too far in the other direction and stop conveying meaningful criticism, or start dancing around facts that need to be faced. That’s also bad. But the optimum is in the middle, at least for most people.
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is that the underlying truth is irrelevant. Without extra social machinery, there’s no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
This might be another difference of personalities...you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
I think that’s a big part of it. Especially IRL, I’ve taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker’s rules. A good combination of integrity + documentation + choice of allies makes it difficult to criticize someone legitimately. To an extent, I try to make my actions align with the values of the people I associate with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won’t judge me regardless of how they see me. Then, when criticism is levelled against me and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth-seeking, provided there isn’t a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable for taking actions that promote each other’s values.
The other thing I’m doing, which is both far easier to pull off and way more effective, is just to be anonymous. When the judging party can’t retaliate because they don’t know you IRL, and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want, and the only consequence is that you’ll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
So my objection definitely applies much more to a village than to less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
Moreover, the relationships between them shouldn’t be purely impersonal and intellectual. Any group endeavour benefits from emotional connections and mutual support.
This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don’t think it’s a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is a good first step, but I’d definitely be in the group that wouldn’t join unless explicit thought had been put into dealing with my objections, or the village had been running successfully for long enough that I became convinced I was wrong.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don’t know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don’t know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you.
Hmm, at this point it might just be a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get food poisoning”. “Dual identity” doesn’t work for me; I feel that social connections are meaningless if I can’t be upfront about myself.
A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
I guess? But in any case there will be many subnetworks in the network. Even if everyone adopts the “village” model, there will be many such villages.
Hmm, at this point it might just be a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get food poisoning”. “Dual identity” doesn’t work for me; I feel that social connections are meaningless if I can’t be upfront about myself.
That’s probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don’t completely trust, and for most practical intents and purposes I’m quite a bit more “myself” online than IRL.
But in any case there will be many subnetworks in the network. Even if everyone adopts the “village” model, there will be many such villages.
I think that’s easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another.
Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there’s a mechanism that prevents them from seeing each other as out-groups, or from attacking each other when they do. This is especially important since one group viewing the other as its out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.
I think the point is to vigilantly keep track of the distinction between skills and tribes, to avoid any ambiguity in use of these different and opposed things, to never mention one in place of the other.
Skills and tribes are certainly different things; I’m not sure why they are opposed things. We should keep track of the distinction and at the same time continue building a beneficial tribe. I agree that in terms of terminology, “rationalist” is a terrible name for “member of the LessWrong-ish community” and we should use something else (e.g. LessWronger).
They are opposed in the sense that using one in place of the other causes trouble. For example, insisting on meticulous observation of skills would be annoying and sometimes counterproductive in a tribe, and letting tribal dynamics dictate how skills are developed would corrode quality.
A tribe shouldn’t insist on meticulous observation of skills, broadly speaking, but it should impose norms on e.g. which rhetorical moves are encouraged/discouraged in a discussion, and it should create positive incentives for the meticulous observation of skills.
As to letting tribal dynamics dictate how skills are developed, I think we don’t really have a choice there. People are social animals, and everything they do and think is strongly affected by the society they are in. The only choice is trying to shape this society and those dynamics to make them beneficial rather than detrimental.
This might be possible, but should be specific to particular groups, unless there is a recipe for reproducing the norms. It’s very easy for any set of beneficial norms to be trampled by tribal dynamics. The standard story is loss of fidelity, with people who care about the mission somewhat less, or who are not as capable of incarnating its purpose, coming to dominate a movement. At that point, observation of the beneficial norms turns into a cargo cult.
Thus the phenomenon of tribes seeks to destroy the phenomenon of skills. This applies to any nuanced purpose, even when it’s the founding purpose of a tribe. Survival of a purpose requires an explanation, which won’t be generic tribal dynamics or a set of norms helpful in the short term.
everything they do and think is strongly affected by the society
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to pursuit of the same skills won’t affect this activity strongly.
...Thus the phenomenon of tribes seeks to destroy the phenomenon of skills
I don’t think it’s “the phenomenon of tribes”, I think it’s a phenomenon of tribes. Humans virtually always occupy one tribe or another, so it makes no more sense to say that “tribes destroy skills” than, for example, “DNA destroys skills”. There is no tribeless counterfactual we can compare to.
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to pursuit of the same skills won’t affect this activity strongly.
I think any tribe affects how you pursue skills by determining which skills are rewarded (or punished), and which skills you have room to exercise.
It is definitely the case, especially in the EA community, that I’m surrounded by a lot more people who express alliance via signaling and are making nontrivial commitments, for whom I’ve not seen real evidence that they understand how to think for themselves or take right action without a high status person telling them to do it.
That said I don’t find it too hard myself to distinguish between such people, and people where I can say “Yeah, I’ve seen them do real things”.