I currently have a salary of around $80k/year.
If you believe that, by being directly employed at some well-run EA organization, I could provide the same benefits to civilization as $2.4M/year in donations, then I will happily do this for only $1M/year. Everyone will be very much better off.
Does this sound like a good deal? If not, then how does this square with the N ~300 estimate?
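For concreteness, a sketch of the arithmetic these figures imply (my own reconstruction, assuming the ~300x factor is applied to a 10% donation; note that 30x of the full salary gives the same $2.4M):

```python
# Reconstruction (assumed) of the arithmetic in the comment above.
salary = 80_000            # stated salary, $/year
donation = 0.10 * salary   # a typical 10% pledge -> $8k/year
N = 300                    # the OP's estimated multiplier on donations

direct_work_value = N * donation
print(f"Implied value of direct work: ${direct_work_value:,.0f}/year")
# -> $2,400,000/year, matching the $2.4M figure; hiring at $1M/year
# would then leave $1.4M/year of surplus, hence "everyone better off".
```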
Some components of my own models, here:
1. I think most of the better-funded EA organizations would not prefer most LWers working there for $1M/yr, nor for a more typical salary, nor for free. (Even though many of these same LWers are useful in many other places.)
2. I think many of the better-funded EA organizations would prefer (being able to continue employing at least their most useful staff members) to (receiving an annual donation equal to 30x what that staff member could make in industry).
3. If a typical LWer somehow really decided, deep in themselves, to try to do good with all their heart and all their mind and creativity… or to do as much of this as was compatible with still working no more than 40 hrs/week and having a family and a life… I suspect this would be quite considerably more useful than donating 10% of their salary to some already-funded-to-near-saturation EA organization. (Since the latter effect is often small.) (Though some organizations are not that well-funded! So this varies by organization IMO.)
Points 2 and 3 are as far as I can get toward agreeing with the OP’s estimated factor of 300. It doesn’t get me all the way there (well, I guess it might for the mean person, but certainly not for the median; plus there are assumptions implicit in trying to use a multiplier here that I don’t buy or can’t stomach). But it makes me sort of empathize with how people can utter sentences like those.
In terms of what to make of this:
Sometimes people jam points 1 and 2 together, to get a perspective like “most people are useless compared to those who work at EA organizations.” I think this is not quite right, because “scaling an existing EA organization’s impact” is not at all the only way to do good, and my guess is that the same people may be considerably better at other ways to do good than they are at adding productivity to an [organization that already has as many staff as it knows how to use].
One possible alternate perspective:
“Many of the better-funded EA organizations don’t much know how to turn additional money, or additional skilled people, into doing their work faster/more/better. So look for some other way to do good, and don’t listen too much to them for how to do it. Rely on your own geeky ideas, smart outside friends who’ve done interesting things before, common sense and feedback loops and experimentation and writing out your models and looking for implications/inconsistencies, etc., in place of expecting EA to have a lot of pre-found opportunities that require only your following of their instructions.”
Great use of logic to try to force us to have models, and to make those models explicit!
From the perspective of the EA org, there are hires for whom this would be a good decision (I’ve heard >$1M pay numbers thrown around for critical software engineering roles that are disconnected from EA strategy, or for hiring Terence Tao). But it’s not obviously good in every case. Here’s some of the reasoning I’ve heard:
- People often do better work if they’re altruistically motivated than if they’re mercenaries -- there’s a “human alignment problem”. When you underpay, you don’t attract top talent. When you overpay, you attract more top talent but also more mercenaries. The optimum seems to be somewhere around top industry pay (in other industries, employees often provide their companies far more value than their salary, and the equilibrium for companies is to match median industry pay, adjusted a bit for their circumstances). EA orgs are currently transitioning away from the underfunded-nonprofit regime, but I think the equilibrium is still lower than top industry pay in many cases (e.g. when EA work is more interesting or saliently meaningful than industry work, and top talent differentially seeks out interesting or saliently meaningful work). Due to the factors below, I don’t see the optimum being substantially above industry pay.
- People (even altruists) don’t like being paid less than someone else for more impact. Your slightly more talented or harder-working colleague might demand to be paid $1.2 million; if they don’t, this sets up weird dynamics where selfish people are paid 5x more than altruists.
- People (even altruists) don’t like getting pay cuts, and often expect pay raises. Paying someone $1M often raises their expectations so they expect $1M * 1.04^n in year n until they retire. This can sometimes be fixed with workplace culture. (See the sketch below.)
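To put rough numbers on that last dynamic (a sketch: the $1M and the 4% raise come from the comment, while the 30-year horizon is my assumption):

```python
# Sketch of the pay-expectation dynamic above: $1M in year 0 with the
# comment's 4% annual raise expectation, over an assumed 30-year career.
base_pay = 1_000_000
raise_rate = 0.04
years = 30  # assumed career horizon

final_year_pay = base_pay * (1 + raise_rate) ** (years - 1)
total_commitment = sum(base_pay * (1 + raise_rate) ** n for n in range(years))
print(f"Year {years - 1} pay: ${final_year_pay:,.0f}")              # ~ $3.1M
print(f"Implied {years}-year commitment: ${total_commitment:,.0f}")  # ~ $56M
# A single $1M hire reads as a multi-decade, tens-of-millions liability,
# which is part of why these expectations matter.
```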
edit: the below thing is wrong
The last two factors are especially large because EA has much more human capital than financial capital (edit: as measured by valuation) -- I would guess something like a 5x ratio. If we paid everyone at EA orgs 41% of what they’re worth, and they spent it selfishly, this would kill >30% of the surplus from employing all the EAs and force EA funders (who are invested in high-risk, high-EV companies like FTX) to derisk in order to pay consistent salaries.
Is this a typo? It seems in direct contradiction with the OP’s claim that EA is people-bottlenecked and not funding-bottlenecked, which I otherwise took you to be agreeing with.
I mean this in a narrow sense (edited to clarify) based on marginal valuations: I’d much rather delete 1% of EA money than 1% of EA human capital. So we can think of human capital as being worth more than money. I think there might be problems with this framing, but the core point applies: even though there are far fewer people than money (when using the conversion ratio implied by industry salary), the counterfactual value of people adds up to more than money. So paying everyone 40% of their counterfactual value would substantially deplete EA financial capital.
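A minimal sketch of that depletion arithmetic, assuming the ~5x ratio guessed above and treating the 40% as a one-time payout:

```python
# Minimal sketch of the depletion claim above. Assumptions: the ~5x
# human-to-financial capital ratio guessed earlier; 40% paid out once.
financial_capital = 1.0                  # normalize EA money to 1
human_capital = 5.0 * financial_capital  # guessed valuation of EA people
payout = 0.40 * human_capital            # 40% of counterfactual value

print(f"Payout as a share of EA money: {payout / financial_capital:.0%}")
# -> 200%: at these valuations the payout exceeds all EA money,
# which is the "substantially deplete" point.
```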
I think this is equivalent to saying that the marginal trade we’re making is much worse than the average trade (where trade = buying labor with money).
I could still be missing something, but I think this doesn’t make sense. If the marginal numbers are as you say and if EA organizations started paying everyone 40% of their counterfactual value, the sum of “EA financial capital” would go down, and so the counterfactual value-in-“EA”-dollars of marginal people would also go down, and so the numbers would probably work out with lower valuations per person in dollars. Similarly, if “supply and demand” works for finding good people to work at EA organizations (which it might? I’m honestly unsure), the number of EA people would go up, which would also reduce the counterfactual value-in-dollars of marginal EA people.
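One way to make this feedback loop concrete (a toy fixed-point model of my own, not something from the thread): if people’s dollar valuation tracks the money left after they are paid, the self-consistent valuation comes out well below the naive one.

```python
# Toy fixed point for the argument above (my construction). Assume the
# dollar valuation of EA people tracks remaining money, V = k * M_after,
# and orgs commit to paying P = f * V out of initial money M.
k = 5.0   # assumed people-value per remaining EA dollar (the ~5x guess)
f = 0.40  # fraction of counterfactual value paid out
M = 1.0   # normalized initial EA financial capital

# Self-consistency: V = k * (M - f * V)  =>  V = k * M / (1 + k * f)
V = k * M / (1 + k * f)
P = f * V
print(f"valuation: {V:.2f}, payout: {P:.2f}, money left: {M - P:.2f}")
# -> valuation 1.67 (vs the naive 5.0), payout 0.67, money left 0.33:
# paying out shrinks the very valuations that justified the pay.
```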
More simply, it seems a bit weird to start with “money is not very useful on the margin, compared to people” and get from there to “because of how useless money is compared to people, if we spend money to get more people, this’ll be a worse deal than you’d think.”
Although, I was missing something / confused about something prior to reading your reply: it does seem likely to me on reflection that losing all of EA’s dollars, but keeping the people, would leave us in a much better position than losing all of EA’s people (except a few very wealthy donors, say) but keeping its dollars. So in that sense it seems likely to me that EA has much more value-from-human-capital than value-from-financial-capital.