I think EA made a strategic choice not to rapidly grow, for various reasons:
“Eternal September” worries: rapid growth makes it harder to filter for fit, makes it harder to bring new members up to speed, makes it harder for high-engagement EAs to find each other, etc.
A large movement is harder to “steer”. Much of EA’s future impact likely depends on our ability to make unusually wise prioritization decisions, and to rapidly update and change strategy as we learn more. Fast growth makes it less likely we’ll be able to do this, and more likely we’ll either lock in our current ideas as “the truth” (lest the movement/community refuse to follow when it comes time for us to change course), or end up drifting toward the wrong ideas as emotional appeal and virality come to be a larger factor in the community’s thinking than detailed argument.
As EA became less bottlenecked on “things anyone can do” (including donating) and more bottlenecked on rare talents, it became less valuable to do broad “grow the movement” outreach and more valuable to do more targeted outreach to plug specific gaps.
This period also overlapped with a shift in EA toward longtermism and x-risk. It’s relatively easy to imagine a big nationwide movement that helps end malaria or factory farming, but much harder to imagine one that does reasonable things about biorisk or x-risk, since those are much more complicated problems requiring more specialized knowledge. So a shift toward x-risk implies less interest in growth.
Rapid growth is an irreversible decision, so it lost some favor just for the sake of maintaining option value. If you choose not to grow, you can always take the brakes off later should you change your mind. If you choose to grow, you probably can’t later decide to (painlessly) contract.
There was a fair bit of discussion in 2014-2015 about the dangers of growing EA. Anna Salamon gave a talk to EA leaders in 2014 outlining the pros and cons of growth, and I think I remember “growth is plausibly a bad idea” becoming a more popular view in 2015.
That’s one story about what happened, anyway. I wouldn’t be shocked if some EA leaders saw things differently.
Note that GiveWell and Open Philanthropy didn’t formally split until 2017.
Also note that Open Philanthropy officially launched in August 2014.
Some other events that happened around this time:
SSC started in Feb 2013.
Peter Singer’s “effective altruism” TED talk was in May 2013.
2014-2015 is also when AI x-risk “went mainstream”: Stephen Hawking made waves talking about it in May 2014, Superintelligence came out in July 2014, Elon Musk made more waves in August 2014, MIRI introduced the first alignment research agenda in December 2014, FLI’s Puerto Rico conference and open letter was January 2015, and OpenAI launched in December 2015.
I could imagine those causing step changes in EA’s size.
“Some people don’t seem to have that reaction at all, and I don’t think it’s a failure of empathy or cognitive ability. Somehow it just doesn’t take.
While there does seem to be something missing, I can’t express what it is.”
A failure to take ideas seriously?
One reason I might have expected at least somewhat more growth recently: Vox launched an effective altruism vertical in October 2018.