I think there is specifically a “work on x-risk” subgroup, which yes recruits from within Berkeley, and yes has some debilitating effects. I wouldn’t quite characterize it the way Zvi does but will say it’s not obviously wrong.
[Edit: I have mixed feelings about whether, and how much, the current dynamics are bad. I think it actually is the case that x-risk desperately needs agents, and yes, this competes with non-x-risk community building, which also needs agents. I think it's possible to make Pareto improvements to the situation, but there will probably be at least some tradeoffs that need to get made, and I think reasonable people can disagree about how to make those tradeoffs.]
We can all agree that x-risk prevention is a Worthy Cause, or even the most worthy cause, and that at some point you need to divert an increasing share of your resources to it, rather than to building up resources to be spent later. That point is, as one otherwise awful teacher of mine put it, immediately if not sooner.
The key question, in terms of implications/VOI, is: Is ‘work on x-risk’ the kind of all-consuming task where you must/should let everything else burn (a la SSC’s scholars who must use every waking moment to get to those last few minutes where they can make progress, or other all-consuming jobs like start-up founder in a cash crunch), because returns to investment follow a power law and the timeline is short enough that you should burn out now and fix it later? Or is it the kind where you can and should do both, especially given that there isn’t really a cash crunch, the timeline distribution is highly uncertain, and so is what would actually be helpful?
I want vastly more resources going into x-risk, but some (very well-meaning) actors have taken the attitude of ‘if it’s not directly about x-risk I have no interest’, and otherwise insist that everything fit into one of the ‘proven effective’ boxes. That starves the community of resources, since community-building doesn’t count as an end goal. It’s a big problem.
Anyway, whole additional huge topic and all that. And I’m currently debating how to divide my own resources between these goals!
I’ve got a lot of thoughts on this myself that I haven’t written up yet either, but it appears many effective altruists and rationalists share your perception of a common problem disrupting other community projects. See this comment.
This ties into an underrated factor I talked about in this comment:
But then I also read things like this post by Alyssa, who is from the Berkeley rationalist community, and Zvi’s comment about Berkeley eating even Berkeley’s own seed corn sounds plausible. Sarah C. also wrote this post about how the Bayesian Area has changed over the years. The posts are quite different, but the theme of both is that the Bayesian Area in reality defies many rationalists’ expectations of what the community is or should be about.
Another thing is that much of the recruitment is driven by efforts that are decidedly more ‘effective altruist’ than ‘rationalist’. With the Open Philanthropy Project and the effective altruism movement enabling the growth of so many community projects based in the Bay Area, this both i) draws people in from outside the Bay Area, and ii) draws attention to the sorts of projects EA incentivizes, at the expense of other rationalist projects in Berkeley. As far as I can tell, much of the rationality community who don’t consider themselves effective altruists aren’t happy that EA eats up such a huge part of the community’s time, attention, and money. It’s not that they don’t like EA. The major complaint is that projects with the EA stamp of approval are treated as magically more deserving of support than other rationalist projects, regardless of arguments weighing the projects against each other.
The funny thing to me is that, from the other side, I’m aware of a lot of effective altruists long focused on global poverty alleviation or other causes who are unhappy with a disproportionate diversion of time, attention, money, and talent toward AI alignment, and even more so toward EA movement-building and other meta-level activities. Both rationalists and effective altruists find that projects also receive funding on the basis of fitting frameworks which are ultimately too narrow and limited to account for all the best projects (e.g., the Important/Neglected/Tractable framework). So it appears the most prioritized projects in effective altruism are driving rapid changes that the grassroots elements of both the rationality and EA movements aren’t able to adapt to. A lot of effective altruists and rationalists from outside the Bay Area perceive it as a monolith eating their communities, and a lot of rationalists in Berkeley see the same happening to local friends whose attention wasn’t previously so singularly focused on EA.
Is this suggesting that top-tier Berkeley is eating even Berkeley’s own seed corn, making everyone but its own top tier depressed in its wake?
Perhaps? I am not sure there is even a coherent top tier. If there is, I am not part of it or aware of it.