In this 80,000 Hours post (written in 2021), Benjamin Todd says “I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year (again, with huge variance depending on fit).” This seems like an argument against earning to give for most people.
On the other hand, this post emphasizes the value of small donors.
I don’t see how the quote you mentioned is an argument rather than a statement. Does the post cited provide a calculation to support that number given current funding constraints?
Edit: Reading some of the post, it definitely assumes we are in a funding overhang, which, if you take John's (and my own, and others') observations at face value, we are not.
Context of the post: funding overhang
The post was written in 2021 and argued that there was a funding overhang in longtermist causes (e.g. AI safety) because funding had grown faster than the number of people working on them.
Since 2015, committed capital had increased by ~37% per year and deployed funds by ~21% per year, whereas the number of engaged EAs had only grown by ~14% per year.
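As a rough sketch of what those growth rates imply, here is the compounding over 2015–2021 (the rates are from the post; no other data is assumed):

```python
# Compounding the post's growth rates over 2015-2021 to see how much
# funding outpaced people (rates from the post; no other data assumed).
years = 2021 - 2015

committed = 1.37 ** years  # committed capital, ~37%/yr
deployed = 1.21 ** years   # deployed funds, ~21%/yr
people = 1.14 ** years     # engaged EAs, ~14%/yr

print(f"Committed capital per engaged EA: ~{committed / people:.1f}x over {years} years")
print(f"Deployed funding per engaged EA: ~{deployed / people:.1f}x over {years} years")
```

In other words, committed capital per engaged EA roughly tripled over that period, which is the sense in which funding outpaced people.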
The introduction of the FTX Future Fund around 2022 caused a major increase in longtermist funding, which further widened the funding overhang.
Benjamin linked a Twitter update in August 2022 saying that the total committed capital was down by half because of a stock market and crypto crash. Then FTX went bankrupt a few months later.
The current situation
The FTX Future Fund no longer exists, and Open Phil's AI safety spending seems to have been mostly flat for the past 2 years. The post mentions that Open Phil is holding spending roughly flat in order to evaluate impact and increase capacity before possibly scaling up further.
My understanding (based on this spreadsheet) is that the level of AI safety funding has stayed roughly the same for the past 2 years, whereas the number of AI safety organizations and researchers has been growing by ~15% and ~30% per year, respectively. So the funding overhang could be gone by now, or there could even be a funding underhang.
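A minimal sketch of how quickly flat funding gets absorbed under those growth rates; the funding level and starting headcount below are placeholders I've made up for illustration, and the $100k annual cost per researcher is the example figure used in the post:

```python
# Sketch: years until a growing field absorbs a flat funding level.
# The funding level and headcount are illustrative placeholders;
# the $100k/yr cost per researcher is the post's example figure.
annual_funding = 100e6       # flat annual funding (placeholder)
researchers = 500            # current headcount (placeholder)
cost_per_researcher = 100e3  # post's example annual cost
growth = 1.30                # ~30%/yr growth in researcher numbers

years = 0
while researchers * cost_per_researcher < annual_funding:
    researchers *= growth
    years += 1

print(f"Under these assumptions, annual costs exceed flat funding after ~{years} years")
```

The exact numbers don't matter much; the point is that ~30% annual growth in people closes any fixed gap within a few years.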
Comparing talent vs funding
The post compares talent and funding in two ways:
The lifetime value of a researcher (e.g. $5 million) vs total committed funding (e.g. $1 billion)
The annual cost of a researcher (e.g. $100k) vs annual deployed funding (e.g. $100 million)
A funding overhang occurs when total committed funding exceeds the combined lifetime value of all researchers, or when the amount of funding that could be deployed each year exceeds the combined annual cost of all researchers.
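As a concrete sketch of those two conditions, using the post's example figures (the researcher headcount is an illustrative assumption, not a number from the post):

```python
# Sketch of the two overhang conditions described above.
# Dollar figures are the post's examples; the headcount is illustrative.
committed_funding = 1e9   # total committed funding
annual_deployed = 100e6   # funding that could be deployed per year
lifetime_value = 5e6      # lifetime value of one researcher
annual_cost = 100e3       # annual cost of one researcher
researchers = 150         # illustrative headcount

stock_overhang = committed_funding > researchers * lifetime_value   # 1e9 > 7.5e8 -> True
flow_overhang = annual_deployed > researchers * annual_cost         # 1e8 > 1.5e7 -> True

print(f"Overhang in committed capital: {stock_overhang}")
print(f"Overhang in annual deployment: {flow_overhang}")
```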
Then the post says:
“Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10 (where this hugely depends on fit and the circumstances).”
I forgot to mention that this statement applied to leadership roles like research leads, entrepreneurs, and grantmakers, who can deploy large amounts of funding or otherwise have a large impact, and therefore can be worth a great deal. Ordinary employees probably have less financial value.
Assuming there is no longer a funding overhang in AI safety, the marginal value of funding relative to additional researchers is higher today than it was when the post was written.
The future
If total AI safety funding does not increase much in the near term, AI safety could remain funding-constrained, or become even more funding-constrained, as the number of people interested in working on it increases.
However, the post explains some arguments for expecting EA funding to increase:
There's some evidence that Open Philanthropy plans to scale up its spending over the next several years. For example, this post says, “We gave away over $400 million in 2021. We aim to double that number this year, and triple it by 2025”. Though that post was written in 2022, so it could be overoptimistic.
According to Metaculus, there is a ~50% chance of another Good Ventures / Open Philanthropy-sized fund being created by 2026, which could substantially increase funding for AI safety.
My mildly optimistic guess is that as AI safety becomes more mainstream, there will be a symmetrical effect where both more talent and more funding are attracted to the field.
This comment expressed doubt that the $10 million/year figure is an accurate estimate of the value of individual people at 80k/OpenPhil in practice.
An earlier version of this comment expressed this more colorfully. Upon reflection, I no longer feel comfortable discussing this in public.
You are misunderstanding. OP is saying that these people they’ve identified are as valuable to the org as an additional N$/y “earn-to-give”-er. They are not saying that they pay those employees N$/y.
I don’t think I am misunderstanding. Unfortunately, upon reflection I don’t feel comfortable discussing this in public. Sorry.
Thank you for your thoughts, lc.
[I don’t like the $5k/life number and it generally seems sus to use a number GiveWell created (and disavows literal use of) to evaluate OpenPhil, but accepting it arguendo for this post...]
I think it's pretty easy for slightly better decision-making by someone at OpenPhil to save many times 2,000 lives/year. I think your math is off and you mean 20,000 lives per year, which is still not that hard for me to picture. The returns on slightly better spending are easily that high when that much money is involved.
You could argue OpenPhil grantmakers are not, in practice, generating those improvements. But preferring a year of an excellent grantmaker to an additional $10m doesn't seem weird for an org that, at the time, was giving away less money than it wanted to because it couldn't find enough good projects.
Sorry, I don't feel comfortable continuing this conversation in public. Thank you for your thoughts, Elizabeth.
Yeah, 80k's post on the topic was written with the very explicit assumption of a funding overhang, which I do think was a correct assumption when that post was written, but which has recently ceased to be correct.