But the estimate that you can save a life for $5000 probably remains true (with the usual caveats about uncertainty), and it is a really important message for getting people to think about ethics and how they want to contribute.
GiveWell seems not to think this is true:
GiveWell’s general position is that you can’t take cost-effectiveness estimates literally. It might be confusing that GiveWell nevertheless attempts to estimate cost-effectiveness with a great degree of precision, but Holden’s on the record as saying that donors need to adjust for publication bias.
If you look at those detailed cost-effectiveness estimates, you’ll find that GiveWell is usually compressing a variety of outcomes into a single metric. The amount of money it takes to literally prevent a death from malaria is higher than the amount of money it takes to do the “equivalent” of saving a life if you count indirect effects. (Nevertheless, the last time I checked, CEA reported the number as though it were literally the price for averting a death from malaria, so I can see why you’d be confused.)
I’ve read this. I interpret them as saying there are fundamental problems of uncertainty with saying any number, not that the number $5000 is wrong. There is a complicated and meta-uncertain probability distribution with its peak at $5000. This seems like the same thing we mean by many other estimates, like “Biden has a 40% chance of winning the Democratic primary”. GiveWell is being unusually diligent in discussing the ways their number is uncertain and meta-uncertain, but it would be wrong (isolated demand for rigor) to retreat from a best estimate to total ignorance because of this.
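To make that concrete, here is a toy sketch (my own numbers, not GiveWell's model): a wide lognormal distribution over cost per life saved whose peak sits at $5000. The lognormal shape and the spread are assumptions; the point is just that deep uncertainty and a well-defined best estimate coexist.

```python
# Toy illustration (mine, not GiveWell's): a wide lognormal distribution
# over "cost per life saved" whose peak (mode) is $5,000. The sigma value
# is an assumption; the point is that a best estimate survives uncertainty.
import numpy as np

sigma = 0.75                       # assumed spread on the log scale
mu = np.log(5000) + sigma**2       # chosen so the mode lands at $5,000

mode   = np.exp(mu - sigma**2)     # peak of the density -> $5,000
median = np.exp(mu)                # ~ $8,800: half of draws cost more
mean   = np.exp(mu + sigma**2/2)   # ~ $11,600: dragged up by the right tail

print(f"mode ${mode:,.0f}, median ${median:,.0f}, mean ${mean:,.0f}")
```

Even with that much spread, the peak is well defined; reporting it is not a claim of certainty.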
OK but
(1) what about the fact that to a large extent they’re not actually talking about saving lives if you look into the details of the cost-effectiveness estimate?
(2) GiveWell’s analysis does not account for the kind of publication bias end users of GiveWell’s recommendations should expect, so yes, this does analytically imply that we should adjust the $5k estimate substantially (toward a higher true cost per life), based on some model of which effectiveness claims get promoted to our attention.
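Here is a minimal simulation of that selection effect (my own toy numbers, not GiveWell's analysis): if the estimate that reaches you is the best-looking of many noisy measurements, it is optimistic on average.

```python
# Toy winner's-curse simulation (assumed numbers throughout): many programs,
# noisy cost-effectiveness measurements, and we only hear about the program
# whose *measured* cost per life saved looks lowest.
import numpy as np

rng = np.random.default_rng(0)
n_worlds, n_programs = 10_000, 50          # assumed
true_cost = rng.lognormal(np.log(10_000), 0.5, size=(n_worlds, n_programs))
noise     = rng.lognormal(0.0, 0.5, size=(n_worlds, n_programs))
measured  = true_cost * noise

winner = measured.argmin(axis=1)           # the program that looks cheapest
rows   = np.arange(n_worlds)
print(f"measured cost of the winner: ${measured[rows, winner].mean():,.0f}")
print(f"true cost of the winner:     ${true_cost[rows, winner].mean():,.0f}")
# The true cost comes out well above the measured cost: that gap is the
# correction the publication-bias argument says readers should apply.
```

This is the standard optimizer's-curse setup; how large the correction is depends entirely on the assumed noise and pool size, which is why the argument implies some correction without pinning down its size.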