A couple of friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with a sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to carry back and reignite the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.
You seem to be conflating “CFAR workshop atmosphere” with “Berkeley Rationalist Community” in this section, which makes me wonder if you are conflating those things more generally.
The depressive slump post-CFAR happens *in Berkeley* too. The thriving community you envision Berkeley as having *does not exist,* except at CFAR workshops. The problem you’re identifying isn’t a Bay-Area-vs-the-world issue, it’s a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.
So, this is definitely a thing that happens, and I’m aware of it and sad about it. But it’s worth pointing out that this is a generic property of all sufficiently good workshops and workshop-like events (e.g. summer camps) everywhere; the ones that aren’t sufficiently good don’t build the intense social connections in the first place. To the extent that it’s a problem CFAR runs into, 1) I think it’s a little unfair to characterize it as the result of something CFAR is particularly doing that other similar organizations aren’t, and 2) as far as I know, nobody else knows what to do about this either.
Or are you suggesting that the workshops shouldn’t be trying to build intense social connections?
I don’t think he was criticizing CFAR workshops, but people who implicitly expect their own communities to automatically produce the same intense social connections.
Yes, this is what I was getting at. Thanks.
I agree with these statements, and clone of saturn is correct: I was talking about an implicit expectation that other rationalist communities will produce the same intense social connections found at CFAR workshops (and also attributed to the Berkeley community generally, though as stardust points out it isn’t as amazing as I and others had built it up to be).
Is this suggesting that top-tier Berkeley is even eating the seed corn of Berkeley and making everyone but its own top-tier depressed in its wake?
I think there is specifically a “work on x-risk” subgroup, which yes recruits from within Berkeley, and yes has some debilitating effects. I wouldn’t quite characterize it the way Zvi does but will say it’s not obviously wrong.
[Edit: I have mixed feelings about whether or how bad the current dynamics are. I think it actually is the case that x-risk desperately needs agents, and yes, this competes with non-x-risk community building, which also needs agents. I think it’s possible to make Pareto improvements to the situation, but there will probably be at least some tradeoffs that need to get made, and I think reasonable people can disagree about where to draw those tradeoffs.]
We can all agree that x-risk prevention is a Worthy Cause, or even the most worthy cause. And at some point, you need to divert increasing parts of your resources to that rather than to building resources to be spent, and this time is, as one otherwise awful teacher of mine put it, immediately if not sooner.
The key question, in terms of implications/VOI, is: Is ‘work on x-risk’ the kind of all-consuming task (a la SSC’s scholars who must use every waking moment to get to those last few minutes where they can make progress, or other all-consuming jobs like startup founder in a cash crunch) where you must/should let everything else burn, because returns to investment follow a power law and the timeline is short enough that you should burn out now and fix it later? Or is it one where you can and should do both, especially given that there isn’t really a cash crunch, the timeline distribution is highly uncertain, and so is what would be helpful?
I want vastly more resources going into x-risk, but some (very well-meaning) actors have taken the attitude of ‘if it’s not directly about x-risk I have no interest’, or otherwise insist on making everything fit into one of the ‘proven effective’ boxes, which starves the community of resources since community doesn’t count as an end goal. It’s a big problem.
Anyway, that’s a whole additional huge topic and all that. And I’m currently debating how to divide my own resources between these goals!
I’ve got a lot of thoughts on this myself that I haven’t finished writing up either, but it appears many effective altruists and rationalists share your perspective that a common problem is disrupting other community projects. See this comment.
This ties into an underrated factor I talked about in this comment:
Perhaps? I am not sure if there is even a coherent top tier. If there is, I am not part of it or aware of it.
This was the experience in Vancouver after CFAR workshops, and the atmosphere persisted for a long time. It wasn’t only me who was conflating “[big event] atmosphere” with “Berkeley Rationalist Community”; a lot of other people in Vancouver did too, and it’s also how a lot of rationalists from elsewhere talk about the Berkeley Rationalist Community (I’m going to call it the Bayesian Area): it’s often depicted as super awesome.
The first thing that comes to mind is that a lot of rationalists from outside of Berkeley only visit town for events like CFAR workshops, CFAR alumni reunions, EA Global, Burning Man, etc. So if a rationalist visits Berkeley a few times a year and always returns to their home base talking about their experiences right after these exciting events, it makes the Berkeley community itself seem constantly exciting. I’m guessing the reality is that the Berkeley community isn’t always buzzing with conferences and workshops, and that organizing all those things is actually very stressful.
There definitely is a halo around the Berkeley Rationalist Community for other reasons:
It’s often touted that ‘leveling up’ to the point where one can get hired at an x-risk reduction organization, or work on another important project like a startup in Berkeley, is an important and desirable thing for rationalists to do.
There’s often a perception that resources are only invested in projects based in the Bay Area, so trying to start projects with rationalists elsewhere and expecting to sustain them long-term is futile.
Moving to Berkeley is still inaccessible or impractical for a lot of rationalists scattered everywhere, so much so that (especially if their friends leave) it breeds a sense of alienation and of being left behind/stranded as one watches everyone else talk about how they *can* flock to Berkeley. Combined with the rest of the above, this can also unfortunately breed feelings of resentment.
Rationalists from outside Berkeley often report feeling as though the benefits or incentives of moving to the Berkeley community are exaggerated relative to the trade-offs or costs of moving there.
It would not surprise me if this worldwide halo effect around the Berkeley rationalist community is just a case of confirmation bias writ large among rationalists everywhere. It could be there is a sense that the Bayesian Area is doing all this deliberately, when almost no rationalists in Berkeley intended any of it. The accounts of what has happened to the NYC community are pretty startling; as one of the healthier communities, I thought it would persist. The most I can say is that accounts vary widely in how much pressure a local rationalist community feels from Berkeley to send as many people as possible their way.
But then I also read stuff like this post by Alyssa, who is from the Berkeley rationalist community, and Zvi’s comment about Berkeley eating its own seed corn sounds plausible. Sarah C also wrote this post about how the Bayesian Area has changed over the years. The posts are quite different, but the theme of both is that the Bayesian Area in reality defies many rationalists’ expectations of what the community is or should be about.
Another thing is that much of the recruitment is driven by efforts which are decidedly more ‘effective altruist’ than ‘rationalist’. With the Open Philanthropy Project and the effective altruism movement enabling the growth of so many community projects based in the Bay Area, this both i) draws people from outside the Bay Area, and ii) draws attention to the sorts of projects EA incentivizes, at the expense of other rationalist projects in Berkeley. As far as I can tell, much of the rationality community who don’t consider themselves effective altruists aren’t happy that EA eats up such a huge part of the community’s time, attention, and money. It’s not that they don’t like EA; the major complaint is that projects with the EA stamp of approval are treated as magically more deserving of support than other rationalist projects, regardless of arguments weighing the projects against each other.
A funny thing to me is that, from the other side, I’m aware of a lot of effective altruists long focused on global poverty alleviation or other causes who are unhappy with the disproportionate diversion of time, attention, money, and talent toward AI alignment, and even more so toward EA movement-building and other meta-level activities. Both rationalists and effective altruists find that projects also receive funding on the basis of fitting frameworks which are ultimately too narrow and limited to account for all the best projects (e.g., the Important/Neglected/Tractable framework). So it appears the most prioritized projects in effective altruism are driving rapid changes that the grassroots elements of both the rationality and EA movements aren’t able to adapt to. A lot of effective altruists and rationalists from outside the Bay Area perceive it as a monolith eating their communities, and a lot of rationalists in Berkeley see the same happening to local friends whose attention used not to be so singularly focused on EA.