I recognize that my model doesn’t perfectly describe reality, especially for edge cases
The model is uninteresting for cases within a standard deviation of the mean; its whole interest lies in the edge cases, so failing to describe them is an enormous weakness, particularly since edge cases have happened before in history.
This is in some ways a counterintuitive result...further sophistication would likely leave the big-picture conceptual conclusions in place.
It’s counterintuitive because you presented the mathematical model as a model of reality; it’s not counterintuitive if one thinks only about the math.
If the model reaches the right conclusions for the questions you are interested in but doesn’t describe reality well, it doesn’t need more sophistication; it needs replacement.
However, “the higher the initial estimate of cost-effectiveness, the better” is not strictly true.
This is because absence of evidence is evidence of absence, not because in the real world one is confronted by anything resembling the situation where initial estimates of charities’ expected effectiveness have “...a normally distributed ‘estimate error’ with mean 0 (the estimate is as likely to be too optimistic as too pessimistic) and...hold the ‘probability of 0 or less’ constant.”
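For concreteness, here is a minimal sketch in Python of the Gaussian setup that quote describes; the prior, the candidate estimates, and the fixed estimate-to-standard-deviation ratio are all invented for illustration, not figures from the post. With a normal prior and a normally distributed estimate error, the posterior mean is the precision-weighted average of the two; holding the “probability of 0 or less” constant forces larger estimates to carry proportionally larger error, so beyond a point the posterior mean falls back toward the prior.

```python
def posterior_mean(prior_mean, prior_var, estimate, error_var):
    """Conjugate normal-normal update: the posterior mean is the
    precision-weighted average of the prior mean and the noisy estimate."""
    w_prior = 1.0 / prior_var
    w_est = 1.0 / error_var
    return (w_prior * prior_mean + w_est * estimate) / (w_prior + w_est)

# Invented prior over charity cost-effectiveness: N(mean=1, var=1).
prior_mean, prior_var = 1.0, 1.0

# Holding the "probability of 0 or less" constant means fixing
# estimate / sd, so every estimate sits the same number of its own
# standard deviations above zero; here estimate / sd = 2.
k = 2.0
for estimate in [2.0, 5.0, 20.0, 100.0]:
    sd = estimate / k  # implied standard deviation of the estimate error
    post = posterior_mean(prior_mean, prior_var, estimate, sd ** 2)
    print(f"estimate={estimate:6.1f}  error sd={sd:5.1f}  posterior mean={post:5.2f}")
```

Run as-is, this prints posterior means of about 1.50, 1.55, 1.19, and 1.04: within the model, the charities claiming the most are believed the least.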
When I think about how to improve the robustness of evidence and thus reduce the variance of “estimate error,” I think about examining a charity from different angles—asking critical questions and looking for places where reality may or may not match the basic narrative being presented.
This works because the final estimated expected value punishes charities for being unable to provide good accounts of their estimates; the absence of such accounts from those most motivated and best positioned to provide them is evidence that no such accounts exist.
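As a toy illustration of that Bayesian point (every number here is invented for the example), suppose a charity whose estimate is sound produces a good supporting account 90% of the time, while one whose estimate is unsound does so only 20% of the time. Observing no account then sharply lowers the probability that the estimate is sound:

```python
def update_on_absence(prior, p_account_if_sound, p_account_if_unsound):
    """Bayes' rule applied to the observation 'no good account was provided'.

    If sound estimates usually come with good accounts, the absence
    of an account lowers the probability that the estimate is sound.
    """
    p_absent_if_sound = 1.0 - p_account_if_sound
    p_absent_if_unsound = 1.0 - p_account_if_unsound
    p_absent = prior * p_absent_if_sound + (1.0 - prior) * p_absent_if_unsound
    return prior * p_absent_if_sound / p_absent

# Invented numbers: start agnostic (prior 0.5); sound estimates produce
# good accounts 90% of the time, unsound ones only 20% of the time.
print(update_on_absence(0.5, 0.9, 0.2))  # ~0.11: P(sound) falls from 0.50 to ~0.11
```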
Possibly, charities with particularly high initial estimated expected values have historically done worse than those with certain lower initial estimated expected values; I would wager that this is in fact true for some values. If so, this alone provides reason to disbelieve similarly high initial estimated expected values, independent of any statistical chicanery that pretends there is no real-world relationship between a charity’s initial expected value and the chance that it is no better than average.