1) If you think tech money is important, you need to be in the bay area. Just accept that. There’s money elsewhere, but not with the same concentration and openness.
This is true. There are reasons other than community-building to not be concentrated in one place. I don’t think trying to reverse the relatively high concentration of rationalists in the Bay Area is at this time a solution to common community problems.
2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? “Bring them in, use them up” _may_ be the way to get the most value from volunteer sacrifices. It may not be: I haven’t seen a growth plan for any org that explicitly targets many orders of magnitude of increase while still being an infinitesimal fraction of the end-goal.
This strikes me as pretty unlikely. World-saving organizations that try this strategy (seemingly even more common among EA organizations than ones in the rationality community) appear to have higher turnover rates, and they don’t appear to have improved enough to compensate for that. The Centre for Effective Altruism and the Open Philanthropy Project, two closely tied organizations that are the two biggest funders in effective altruism (which also covers x-risk/world-saving rationalist projects), are instead taking a precision approach: building community/ies in a way they think will maximize the world-saving-ness of the community. Not everyone agrees with the strategy (see this thread), but it’s definitely a more hands-on approach, moving away from the “bring them in, use them up” model that was closer to what EA organizations tended to do a few years ago.
Many of the other comments on this post frame the central concern as a trade-off between a world-saving focus and rationality community-building. My sense is that the tension exists because both are considered important, so the way forward is to find better ways to avoid losing community-building to world-saving.