I’m curious about the reasoning behind that statement, too.
This suggestion would unnecessarily concentrate donations among people with existing social connections to one another, no? I don’t expect that I personally know the world’s highest-leverage people. Even if I know some of them, I expect that organizations that dedicate resources to finding high-leverage people or opportunities (GiveWell, EA Funds, etc.) will fund opportunities with a better expected value than those that happen to be in front of me.
Is the reasoning here that those organizations are likely to miss the opportunities that happen to be in front of me personally? Or that sharing resources in local social communities strengthens them in a way that has particularly large benefits? Or that you’ve more carefully selected the people you have social connections to, such that they are likely to be overlooked-yet-high-leverage?
(I think I’m coming from a slightly more sceptical starting point than gwillen, but also feel like I could be missing something important here.)
I’m not sure I understand exactly what Ben’s proposing, and I posted Ben’s view here as a discussion-starter (because I want to see it evaluated), rather than as an endorsement.
(I should also note explicitly that I’m not writing this on MIRI’s behalf or trying to make any statement about MIRI’s current room for more funding; and I should mention that Open Phil is MIRI’s largest contributor.)
But if I had said something like what Ben said, the version of the claim I’d be making is:
The primary goal is still to maximize long-term, large-scale welfare, not to improve your friends’ lives as an end in itself. But if your friends are in the EA community, or in some other community that tends to do really important high-value things, then personal financial constraints will overlap a lot with “constraints on my ability to start a new high-altruistic-value project”, “constraints on my ability to take 3 months off work to think about what new high-value projects I could start in the future”, etc.
These personal constraints are often tougher to evaluate for bigger donors like Jaan Tallinn or Dustin Moskovitz (and the organizations they use to disburse funds, like BERI and the Open Philanthropy Project), awkward for those unusually heavily scrutinized donors to justify to onlookers, or demanding of too much evaluation time given opportunity costs. The funding gaps tend to be too small to be worth the time of bigger donors, while smaller donors are in a great position to cover these gaps, particularly if they’re gaps affecting high-impact individuals the donor already knows really well.
Larger donors are in a great position to help provide large, stable long-term support to well-established projects; I take Ben to be arguing that the role of smaller donors should largely be to add enough slack to the system that high-altruistic-impact people can afford to do the early-stage work (brainstorming, experimenting with uncertain new ideas, taking time off to skill-build or retrain for a new kind of work, etc.) that will then sometimes spit out a well-established project later in the pipeline.
I take Paul Christiano’s recent experiments with impact purchases, prizes, and researcher funding to be a special case of this approach to giving: rather than trying to find a well-established project to support, try to address value that’s being lost early in the pipeline, by paying individuals to start new projects or by just giving no-strings donations to people who have a proven track record of doing really valuable things.
One effect of this is that you’re incentivizing the good accomplishments/behaviors you’re basing your donation decision on. A separate effect can be that you’re removing constraints from people who find high-value projects inherently motivating and would spend time on them by default if they could; someone who’s already sufficiently motivated by altruistic impact and doesn’t need extra financial incentive may still be cash-constrained in what useful things they can spend their time on (or pay others to do, etc.).
This approach does introduce risk of bias. In principle, though, you can try to mitigate bias for this category of decision in the same way you’d try to mitigate bias for a direct donation to a philanthropic organization. E.g., ask third parties to check your reasoning, deliberately ignore opportunities where you’re wary of your own motivations, or simply give the money to someone you trust a lot to do the donating on your behalf.
This seems like a good representation of a large portion of my reasons.
I basically expect people without perceived slack to be destroying value whenever they’re engaged in sufficiently high-level intellectual work.

If you believe that people in the developed world do in fact wield disproportionate power in the form of money (which is the usual justification for wealth transfers to the developing world poor), then improving the decision-making slack of those people seems like an extremely high-leverage intervention. This works for the same reason that real tenure was a good idea, and for the same reason Tocqueville was worried about the destruction of hereditary aristocracy and unaccountable institutions more generally.
For more on the incentive effect, see Robin Hanson’s argument for prizes over grants, which is related to the argument for impact certificates (and my argument for something simpler).
See Carl Shulman’s Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation for a more thorough discussion of a few of these points (though the examples Carl cites to support his conclusion look more like “provide very early funding to new organizations” than like Ben’s particular description).