Could you give an example or two? I don’t mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns—obviously different people may have different opinions—but of a single person doing the sort of combination you describe.
I think Scott’s doing that here, switching back and forth between a steep diminishing returns story (where Good Ventures is engaged in at the very least intertemporal funging as a matter of policy, so giving to one of their preferred charities doesn’t have straightforward effects) and a claim that “you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life.”
The more general pattern is people making nonspecific claims that some number is “true.” I’m claiming that if you try to make it true in some specific sense, you have to posit some weird stuff that should be strongly decision-relevant.
So I assume you’re objecting to his statement near the end that “the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty)”, on the basis that he should actually say “you probably can’t really save a life for $5000 because if you give that $5000 then the actual result will be that Good Ventures gives less in future because GiveWell will make sure of that to ensure that alleged $5000 opportunities continue to exist for PR reasons”.
But I don’t see the alleged switching back and forth. So far as I can see, Scott simply disagrees with you about the intertemporal funging thing, perhaps for the same reason as I think I do (namely, that GiveWell’s actual statements about their recommendations to Good Ventures specifically claim that they are trying to make them in a way that doesn’t involve intertemporal funging of a sort that messes up incentives in the way you say it does).
Where do you think Scott’s comment assumes the “steep diminishing returns story”?
It does tell a steep-diminishing-returns story about the specific idea of trying to run the sort of experiment you propose. But part of his point is that that sort of experiment would likely be inefficient and impractical, unlike just continuing to do what AMF and similar charities are already doing with whatever funding is available to them.

The diminishing returns are different in the two scenarios, and it could be that they are much steeper if you decide that your goal is to eliminate all malaria deaths on Madagascar than if your goal is to reduce malaria in all the areas where there’s a lot of malaria that can be addressed via bed nets. It can simultaneously be true that (1) there are readily available opportunities to save more than 6k extra lives by distributing more bed nets, at a cost of $5k per life saved, and that (2) if instead you want to save specifically all 6k people who would otherwise have died from malaria in Madagascar this year, then it will cost hugely more than $5k per life.

And also, relatedly, it can be true that (3) if instead of this vague “you” we start trying to be specific about who is going to do the thing, then in case 1 the answer is that AMF can save those lives by distributing bed nets, a specific thing that it knows how to do well, whereas in case 2 the answer is that there is no organization that has all the competences required to save all those lives at once, and that making it happen would require a tremendous feat of coordination.