Interesting, but from a purely mathematical point of view I have some problems with the model (or the way it's used).
The article doesn't speak at all about cases where the initial estimate is negative (you can have an initial, broad estimate that a charity is negative, that is, below average, even if in the end it's an efficient one).
Variance of error = estimate sounds too drastic to me. It's reasonable to assume that, since your estimate is crude, it will tend to be more error-prone when extreme. But first, if your estimate is very close to "oh, this charity seems really average" (X very close to 0), that doesn't mean the error in the estimate is very close to 0. And second, even if your estimate is crude, it still comes from some information, not pure randomness. What about something like 1 + aX for the variance of the error (with a somewhere around 3/4, maybe)? That way it never gets close to 0, and you still account for some amount of information in the estimate. I'm pulling the formula out of my head; a much better one could probably be built using bits of information: i.e., your estimate is worth one bit of information, and using Bayes' theorem you unfold the error estimate from a prior of N(0,1) to get N(X,Y) with fixed X and one bit of information... something like that? A quick numerical sketch of the comparison is below.
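To make that concrete, here's a minimal sketch in Python. The conjugate normal-normal update itself is standard; the function name, the choice a = 3/4, and the example X values are all my own invention, and I've only considered X ≥ 0 since 1 + aX goes negative for sufficiently negative X:

```python
def posterior(prior_mean, prior_var, estimate, err_var):
    """Conjugate normal-normal update:
    true impact T ~ N(prior_mean, prior_var),
    estimate X = T + error, error ~ N(0, err_var).
    Returns the posterior mean and variance of T given X."""
    k = prior_var / (prior_var + err_var)   # shrinkage toward the estimate
    post_mean = prior_mean + k * (estimate - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

a = 0.75  # my ballpark choice, nothing principled about it
for X in [0.1, 1.0, 3.0, 10.0]:
    # The article's model: standard deviation of the error equals X.
    m1, _ = posterior(0.0, 1.0, X, err_var=X**2)
    # My suggestion: variance 1 + a*X, so it never collapses to 0 near X = 0.
    m2, _ = posterior(0.0, 1.0, X, err_var=1 + a * X)
    print(f"X={X:5.1f}  posterior mean (var=X^2): {m1:.3f}   (var=1+aX): {m2:.3f}")
```

The difference shows up exactly where I'd expect: at small X the article's model trusts the crude estimate almost completely (the shrinkage factor goes to 1 as the error variance X² vanishes), while with 1 + aX the posterior mean stays near X/2, still discounted against the prior.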
Assuming you always get the same X for all of your crude estimates seems very unlikely. I can understand it's a simplifying hypothesis, but more realistic hypotheses, where you get different values of X for different estimates of the same charity, should be analyzed too (something like the sketch below)... will that be the topic of the next article?
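Something like the following is what I have in mind, just as a sketch: all the numbers are invented, and the assumption that the errors of the different estimates are independent is doing a lot of work here (it's itself questionable for several estimates of one charity):

```python
# Hypothetical numbers: several crude estimates X_i of the *same* charity,
# each with its own error variance, folded into a N(0, 1) prior one at a
# time via the conjugate normal-normal update.
mean, var = 0.0, 1.0                               # prior on the true impact
estimates = [(2.0, 4.0), (0.5, 1.5), (3.0, 9.0)]   # (X_i, err_var_i), made up
for x, err_var in estimates:
    k = var / (var + err_var)                      # shrinkage toward the data
    mean, var = mean + k * (x - mean), (1 - k) * var
print(f"combined posterior: mean={mean:.3f}, var={var:.3f}")
```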
And finally (but it's just a wording issue), you seem to confuse "will be 0" and "will be 0 or less" in the text, for example: « it has a normally distributed "estimate error" with mean 0 (the estimate is as likely to be too optimistic as too pessimistic) and standard deviation X (so 16% of the time, the actual impact of your $1000 will be 0 or "average"). » Well, it should be "will be 0 or less" there: with a continuous distribution you never get exactly 0.
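To be clear, the 16% number itself checks out once it's read as "0 or less": with estimate X > 0 and error $\varepsilon \sim N(0, X^2)$,

$$P(X + \varepsilon \le 0) = P(\varepsilon \le -X) = \Phi(-1) \approx 16\%,$$

i.e. the probability of landing at least one standard deviation below the mean.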