I want to start by saying that these are some interesting ideas I had definitely never thought about before! However, if your goal is to “improve the lives of as many people as possible, as much as possible,” I think you missed the mark a bit – this reads more like “interesting/novel ways to improve the lives of people who are disadvantaged in a way I’m personally familiar with.” The people whose lives can be improved most cost-effectively probably live thousands of miles away from you. (I can’t tell from your post how familiar you are with EA, and I don’t want to come across as patronizing.) Specific note on #7 – lead abatement is actually a cause area that GiveWell has looked into, and it looks like there’s a new EA organization focused on it!
Now time for my opinions!
If you have several billion dollars, you should start another organization like Open Phil or the Gates Foundation – a foundation that’s solely funded by you, trying to do the most good by your standards, led by really smart and thoughtful people who you trust. Reasoning:
$1B is generally more than any single cause area can easily absorb, and existing foundations (even Open Phil) aren’t a very good place to put the money either, since they often already have trouble spending down their huge endowments. (See the quick back-of-envelope sketch after this list for what “can’t easily absorb” means.)
You’d be subject to a different set of constraints and would likely have a different philosophy than the other foundations, so you’d be able to cover more/different ground. (e.g. the Gates Foundation has done a lot of really excellent work, but it also pours a lot of money into education, and that area is so problematic that the money is probably going straight down the drain.)
Open Phil doesn’t like to fund more than 50% of any project, because that can often create bad dynamics (e.g. the recipient subconsciously becomes oversensitive to the donor’s opinions for fear of losing funding); if you come in as a separate donor and independently think the project is good, you and Open Phil together can fully fund the project without creating this dynamic (as long as you don’t form – or appear to form – a coalition).
Also something something competing in the marketplace of ideas; hand-wavey reasoning that maybe having another foundation with a focus on evidence-based interventions would push both you and Open Phil to do a better job overall.
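To make “can’t easily absorb” a bit more concrete, here’s a minimal Fermi sketch in Python. The $300M/year room-for-more-funding figure is purely an illustrative assumption I made up for this sketch – it is not a number from GiveWell, Open Phil, or anywhere else:

```python
# Back-of-envelope check of the claim that $1B exceeds what a single
# cause area can easily absorb in a year.
# All numbers below are illustrative assumptions, not sourced figures.

donation = 1_000_000_000  # hypothetical gift, USD

# Assumed annual "room for more funding" across one cause area's best
# interventions -- a made-up ballpark used only for illustration.
annual_room_for_funding = 300_000_000  # USD/year, assumption

years_to_spend = donation / annual_room_for_funding
print(f"Under these assumptions, ${donation:,} would take about "
      f"{years_to_spend:.1f} years for a single cause area to absorb.")
```

Under those made-up numbers it takes a few years for even one gift of that size to be spent well in a single area, which is the intuition behind starting a foundation that can deploy across many areas over time.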
Man, all this is really making me wish I had a billion dollars.
Thanks for your insightful feedback!
I’ve been thinking a lot about the responses I’ve received over the past few days, and my opinions as written here have shifted somewhat, though not entirely. It really deserves a second essay, but it seems to me that EA (as normally practiced in this community) has a number of potentially dangerous blind spots, most notably in areas where it is hard to determine in advance how effective a given cause will be, or more generally in areas whose value is hard to compute with any currently known formal utilitarian framework. I think the EA community currently puts too much weight on our ability to formally calculate the value of a given good, and there needs to be a greater willingness to fund a more diverse set of actions. I know I’m not explaining my case very well here, but I’d like to come back to this at some point and expand on it.