As I wrote in my response to Carl on The GiveWell Blog, the conceptual content of this post does not rely on the assumption that the value of donations (as measured in something like “lives saved” or “DALYs saved”) is normally distributed. In particular, a lognormal distribution fits easily into the above framework.
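One way to see how a lognormal distribution fits the framework is to apply the same conjugate normal update on the log scale: if value = exp(V) with V normally distributed, the prior and estimate error live in log-space. The numbers below are illustrative assumptions of my own, not figures from the post.

```python
import math

# Illustrative sketch, assuming a lognormal model of charity value:
# value = exp(V), with a normal prior on V (here N(0, 1)) and a normal
# estimate error on the log scale. These parameters are my own choices.
LOG_PRIOR_MEAN, LOG_PRIOR_SD = 0.0, 1.0

def log_posterior(log_estimate, log_est_sd):
    """Conjugate normal update on the log scale; returns (mean, sd)."""
    p0 = 1.0 / LOG_PRIOR_SD ** 2          # prior precision
    p1 = 1.0 / log_est_sd ** 2            # estimate precision
    mean = (LOG_PRIOR_MEAN * p0 + log_estimate * p1) / (p0 + p1)
    return mean, math.sqrt(1.0 / (p0 + p1))

def posterior_expected_value(log_estimate, log_est_sd):
    """E[value] for a lognormal: exp(mu + sigma^2 / 2)."""
    m, s = log_posterior(log_estimate, log_est_sd)
    return math.exp(m + s ** 2 / 2)

# A very noisy claim of 1000x effectiveness gets heavily discounted
# toward the prior, rather than being taken at face value.
discounted = posterior_expected_value(math.log(1000), log_est_sd=3.0)
```

The same qualitative behavior as in the normal case carries over: noisier estimates are pulled harder toward the prior.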
I recognize that my model doesn’t perfectly describe reality, especially for edge cases. However, I think it is more sophisticated than any model I know of that contradicts its big-picture conceptual conclusions (e.g., by implying “the higher your back-of-the-envelope [extremely error-prone] expected-value calculation, the necessarily higher your posterior expected-value estimate”) and that further sophistication would likely leave the big-picture conceptual conclusions in place.
JGWeissman is correct that I meant “maximum” when I said “inflection point.”
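That maximum can be seen in a small sketch of the post's model: a normal prior over charity value, a normal estimate error with mean 0, and the “probability of 0 or less” held constant, which forces the error sd to scale with the estimate. The prior N(1, 1) and the scaling constant below are illustrative assumptions of my own.

```python
# Sketch of the post's model with illustrative numbers: prior over
# value is N(PRIOR_MEAN, PRIOR_SD^2); holding P(value <= 0) constant
# under the estimate alone means estimate_sd = estimate / K for fixed K.
PRIOR_MEAN, PRIOR_SD = 1.0, 1.0
K = 2.0  # estimate / estimate_sd, held fixed (my assumption)

def posterior_mean(estimate):
    """Precision-weighted average of prior and estimate (conjugate normal update)."""
    est_sd = estimate / K                 # error sd grows with the estimate
    prior_prec = 1.0 / PRIOR_SD ** 2
    est_prec = 1.0 / est_sd ** 2
    return (PRIOR_MEAN * prior_prec + estimate * est_prec) / (prior_prec + est_prec)

# The posterior is not monotone in the estimate: it rises, reaches a
# maximum, then falls back toward the prior mean as ever-larger
# estimates carry ever-larger error bars.
```

This is the sense in which the curve has a maximum rather than an inflection point: past a certain estimate size, reporting an even higher number lowers the posterior.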
I recognize that my model doesn’t perfectly describe reality, especially for edge cases.
The model is uninteresting for cases within a standard deviation of the mean, so failing to describe edge cases is an enormous weakness, particularly since edge cases have occurred before in history.
This is in some ways a counterintuitive result...further sophistication would likely leave the big-picture conceptual conclusions in place.
It’s counterintuitive because you represented the mathematical model as one modeling reality. It’s not counterintuitive if one only thinks about the math.
If the model gets correct conclusions for the questions you are interested in but doesn’t describe reality well, it doesn’t need more sophistication—it needs replacement.
However, “the higher the initial estimate of cost-effectiveness, the better” is not strictly true.
This is because absence of evidence is evidence of absence, not because in the real world one is confronted by anything resembling the situation where initial estimates of charities’ effectiveness have “...a normally distributed ‘estimate error’ with mean 0 (the estimate is as likely to be too optimistic as too pessimistic) and...hold the ‘probability of 0 or less’ constant.”
when I think about how to improve the robustness of evidence and thus reduce the variance of “estimate error,” I think about examining a charity from different angles—asking critical questions and looking for places where reality may or may not match the basic narrative being presented.
This works because the final estimated expected value punishes charities for being unable to provide good accounts of their estimates; the absence of such accounts by those most motivated and in the best position to provide them is evidence that they do not exist.
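The mechanism described above can be sketched with a conjugate normal update (the prior N(1, 1) and the inputs below are illustrative assumptions of my own): shrinking the estimate-error sd pulls the posterior toward the estimate, so for an above-prior claim, more robust evidence by itself raises the final estimated expected value.

```python
def adjusted_expected_value(estimate, error_sd, prior_mean=1.0, prior_sd=1.0):
    # Precision-weighted average of a normal prior and a noisy estimate;
    # the default prior N(1, 1) is an illustrative assumption.
    prior_prec = 1.0 / prior_sd ** 2
    est_prec = 1.0 / error_sd ** 2
    return (prior_mean * prior_prec + estimate * est_prec) / (prior_prec + est_prec)

# The same claimed effectiveness counts for more when the evidence is
# more robust (smaller error sd), i.e. when the charity can give a good
# account of its estimate under scrutiny from many angles.
weak = adjusted_expected_value(10.0, error_sd=5.0)    # loosely vetted claim
strong = adjusted_expected_value(10.0, error_sd=1.0)  # well-vetted claim
```

Conversely, a charity that cannot support its claim is left with a large error sd, and its posterior stays close to the prior mean.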
Possibly, charities with certain high initial estimated expected values have historically done worse than charities with certain lower ones; I would wager that this is in fact true for some values. If so, this alone provides reason to disbelieve similar high initial estimated expected values, independent of statistical chicanery pretending that in reality there is no relationship between charities’ initial expected value and the chance that they are no better than average.