Changes in Community Dynamics: A Follow-Up to ‘The Berkeley Community & the Rest of Us’

I’ve received a few notifications in the last couple of weeks that a post I wrote in 2018, about the relationship between the rationality community in Berkeley and the broader rationality community elsewhere, has received some upvotes. I don’t know why more people have been reading that post recently, but I’ve changed my mind about much of what I wrote then, so I’m writing this post as a follow-up.
What Has Changed Between 2018 and Now
How the post characterized the flow of community members between the Berkeley rationality community and other local rationality groups is exemplified by this excerpt from another 2018 post by Zvi on his personal blog.
In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like an overkill.
I haven’t been a rationality group organizer for a few years now, so I’m not very aware of how the perspectives of participants in the rationality community outside the Bay Area may have changed since 2018. My impression is that any such trend strictly between rationality groups has been superseded by a flow driven by participants in effective altruism (EA) worldwide migrating to the Bay Area. I emphasize that for EA it’s more of a worldwide trend because it’s less a matter of people moving from other cities in the United States to the Bay Area, as it is for rationalists. EA as a movement is much bigger than the rationality community and has a much faster growth rate. Through EA, far more people are driven to migrate to the Bay Area from countries around the world.
How the Relationship Between the Rationality Community and Effective Altruism Has Changed
This dynamic in EA may now be the predominant factor shaping the same dynamic in the rationality community because of changes in the relationship between EA and the rationality community. The greatest change is the development and growth of long-termism as a philosophy in and from EA. This has brought EA much closer to the rationality community in its relative prioritization of reducing existential risks posed by advanced AI (i.e., ‘AI x-risks’), giving the two communities an even closer relationship and even more overlap.
The second greatest change in the last few years may be the increased urgency of solving the AI control problem. My impression is that in recent years rationalists in the Bay Area have converged on the conclusion that it’s not ideal for the rationality community to be so concentrated in one place. Please comment if your impression is very different, but my further sense is that there is now less resentment toward the Bay Area community from rationalists elsewhere.
My hunch is that while there are many reasons for that change, one of the greatest is that the rationality community, as well as EA, has become more united in the face of how much more pressing the imperative of AI alignment has become. Other things being equal, it would in theory be better for either community to have a more robust geographic distribution. Yet coordinating such a major, community-wide effort would conflict with the greater priority of optimizing community organization for improving AI alignment.
The increased urgency of AI alignment and the hyper-concentration of worldwide advanced-AI development in the Bay Area create an inertia that makes fundamentally reorganizing the community too costly a trade-off. I at least perceive that to be an increasing consensus in both the rationality community and EA, both in the Bay Area and elsewhere. It’s a conclusion I’ve been increasingly drawn to myself, though I’m probably less confident in it than many others.
Considerations for the Future
What remains a potentially major problem in practice is the risk of the Bay Area attracting organizers from local groups elsewhere faster than those groups can secure new organizers. Even assuming a rapid talent pipeline from everywhere else to the Bay Area is the best strategy in practice, brain drain across too many local nodes in the global network could be catastrophic for community organization. If core organizations depend on university, local, and national groups to sustain the talent pipeline, and too many of those groups lose the very talent needed to sustain them all at once, the entire pipeline may collapse.
Especially in EA, successful group organizers and staff at budding EA-affiliated organizations are increasingly likely to be hired by major organizations based in the Bay Area (or another centralized hub, such as Oxford in England). The Centre for Effective Altruism (CEA) explicitly promotes the prospect of such opportunities to attract applications for grants to organize local groups or build other community infrastructure. From the Community Building Grants page on the CEA’s website:
Working as a professional community builder is a great way to prepare for other impactful work. Community builders develop skills, networks, and experience that could serve them well in management, operations, research, fundraising, entrepreneurship, and more. Many of our alumni move on to working for other organizations in the EA movement, or pursue long-term careers in community-building.
The rationality community need not be especially alarmed if this approach seems to pose major risks; there is already a lot of concern about it within EA. An article from May of this year, critical of how the current strategy has resulted in aggressive and counterproductive growth and recruitment tactics, was recently very well received on the EA Forum.
The CEA has also, for over a year now, been trying to support the sustained organization of city and national EA groups that serve as significant nodes in the global network and talent pipeline. This may ensure greater continuity across changes in local or national leadership, lessening the overall risk of damage to EA’s capacity to coordinate itself globally. Expectations of what beneficial change these marginal patches to bugs in movement-building strategy will bring are only tentative, though, given how recently these problems were fully recognized and the intended solutions implemented.
What This May Mean for Rationality Is Up to All of You
With so much of this being about effective altruism, much of it is no longer directly relevant to the rationality community. I intend to address the problems presented here in more depth on the EA Forum at a later date. It’s of course still worthwhile for the rationality community to have some greater awareness of these dynamics in EA. Yet I’ve intended this follow-up to be a conclusion to what remains of my insights directly for rationality community-building. It’s your community, so you must decide what to do next together!