Two somewhat independent thoughts:

1) If you think tech money is important, you need to be in the Bay Area. Just accept that. There’s money elsewhere, but not with the same concentration and openness.

2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? “bring them in, use them up” _may_ be the way to get the most value from volunteer sacrifices. It may not—I haven’t seen a growth plan for any org that explicitly has many orders of magnitude of increase while still being an infinitesimal fraction of the end-goal.
Both of these highlight the fact that although I’m a long-time reader and commenter, and consider myself a little-r rationalist, I find the community and organizational groupings to be opaque and alien. I’m glad people are experimenting with these things, but I’m happy to be far away from it.
The money in the Bay uses ‘if you’re not in the Bay you’re not serious, and even if you are other Bay money won’t take you seriously so I can’t afford to’ as a coercive strategy to draw people there. This parallels the community issues. Giving in to such tactics makes the problem that much worse, and it snowballs.
Yes, tech money is bigger and more our flavor in the Bay, but there’s plenty of it in many other places, and we’d get more out of the money that exists if we were spread out than if we all chased the biggest pile, even with that pile playing hostile negative-sum games on us.
The money in the Bay uses ‘if you’re not in the Bay you’re not serious, and even if you are other Bay money won’t take you seriously so I can’t afford to’
Right. That’s my “just accept it” point. If you want that money, you (currently) have to play by those rules. If you don’t want to play that way, you need to stand up and say that your plan isn’t based on Bay Area money/support levels.
as a coercive strategy to draw people there.
It’s hard for me to understand the use of “coercive” here. Other than choosing not to give you money/attention, what coercion is being applied?
Even so, I think that strategy (to draw the serious people who have the capability to contribute) is a small part of it. Mostly it’s a simple acknowledgement that distance matters: it’s just a bit more hassle to coordinate with distant partners, and that’s enough to make many want to invest time/effort/money more locally, all else equal. This is compounded by the (weak but real) signals about your seriousness if you won’t find a way to be in the center of things.
This dovetails with my experience and with what I’ve heard at other points in the community, as I described in this comment:
There’s often a perception that resources are only invested in projects based in the Bay Area, so trying to start projects with rationalists elsewhere and expecting to sustain them long-term is futile.
Moving to Berkeley is still inaccessible or impractical for a lot of rationalists scattered everywhere else, so (especially if their friends leave) it breeds a sense of alienation and of being left behind or stranded as one watches everyone else talk about how they *can* flock to Berkeley. Combined with the rest of the above, this can also unfortunately breed feelings of resentment.
Rationalists from outside Berkeley often report feeling as though the benefits or incentives of moving to the Berkeley community are exaggerated relative to the trade-offs or costs of moving to Berkeley.
1) If you think tech money is important, you need to be in the Bay Area. Just accept that. There’s money elsewhere, but not with the same concentration and openness.
This is true. There are also reasons other than community-building not to be concentrated in one place. Still, I don’t think trying to reverse the relatively high concentration of rationalists in the Bay Area is, at this time, a solution to common community problems.
2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? “bring them in, use them up” _may_ be the way to get the most value from volunteer sacrifices. It may not—I haven’t seen a growth plan for any org that explicitly has many orders of magnitude of increase while still being an infinitesimal fraction of the end-goal.
This strikes me as pretty unlikely. World-saving operations which try this strategy, even more so among EA organizations than ones in the rationality community, appear to have a higher turnover rate, and they don’t appear to have improved enough to compensate for that. The Centre for Effective Altruism and the Open Philanthropy Project are two closely tied organizations that are the two biggest funders in effective altruism, which also covers x-risk/world-saving rationalist projects. They’re taking more of a precision approach, building community ties in a way they think will maximize the world-saving-ness of the community. Not everyone agrees with the strategy (see this thread), but it’s definitely a more hands-on approach, moving away from the “bring them in, use them up” model that was closer to what EA organizations tended to do a few years ago.
Many of the other comments on this post point to the trade-off between a world-saving focus and rationality community-building as an issue of concern. My sense is that the tension exists because both are considered important, so the way forward is to find better ways not to lose community-building to world-saving.