All but the final section of the post are arguing precisely along these lines.
That seems probable, which is why I haven’t said that I disagree with the post, only that I am confused about what it suggests. But I have some problems with the non-final sections too, mainly concerning the terminology. For example, the phrases “estimated value” and “expected value”, e.g. in
The crucial characteristic of the EEV approach is that it does not incorporate a systematic preference for better-grounded estimates over rougher estimates. It ranks charities/actions based simply on their estimated value, ignoring differences in the reliability and robustness of the estimates.
are used as if they simply meant “result of the Fermi calculation” instead of “mean value of the probability distribution updated by the Fermi calculation”. It seems to me that the post nowhere explicitly says that such estimates are incorrect and that it is advocating standard Bayesian reasoning, only done properly. After a first reading I rather assumed that it proposes an extension to Bayes, where the agent, after proper updating, classifies the obtained estimates based on their reliability.
Also, I was not sure whether the post discusses a useful everyday technique when formal updating is unfeasible, or whether it proposes an extension to probability theory valid on the fundamental level. See also cousin_it’s comments.
As for #2, i.e.
Assuming a normal/log-normal distribution for effectiveness of actions, the appropriate Bayesian adjustment is huge for actions with prima facie effectiveness many standard deviations above the mean but which have substantial error bars.
mostly I am not sure what you refer to by “appropriate Bayesian adjustment”. On first reading I interpreted it as “the correct approach, in contrast to EEV”, but that contradicts your apparent position expressed in the rest of the comment, where you argue that substantial error bars should prevent huge updating. The second interpretation may be “the usual Bayesian updating”, but then the claim is not true, as I argued in #1 (and in fact, I only repeat Holden’s calculations).
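To make explicit what I mean by the usual Bayesian updating, here is a sketch of the standard conjugate calculation, assuming a normal prior over the true effectiveness and a normally distributed estimate error (the notation is mine; as far as I can tell this is the same setting as Holden’s example):

$$\theta \sim N(\mu_0, \sigma_0^2), \qquad x \mid \theta \sim N(\theta, \sigma^2)$$

$$\mathbb{E}[\theta \mid x] = \frac{\sigma^2 \mu_0 + \sigma_0^2 x}{\sigma^2 + \sigma_0^2}, \qquad \mathrm{Var}(\theta \mid x) = \frac{\sigma^2 \sigma_0^2}{\sigma^2 + \sigma_0^2}$$

The larger the error bar σ, the less the posterior mean moves away from the prior mean μ₀ (equivalently, the more the raw estimate x is discounted).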
For example, the phrases “estimated value” and “expected value” [...] are used as if they simply meant “result of the Fermi calculation” instead of “mean value of the probability distribution updated by the Fermi calculation”. It seems to me that the post nowhere explicitly says that such estimates are incorrect and that it is advocating standard Bayesian reasoning, only done properly.
I’m very sure that in the section to which you refer “estimated value” means “result of a Fermi calculation” (or something similar) as opposed to “mean value of the probability distribution updated by the Fermi calculation.” (I personally find this to be clear from the text but may have been influenced by prior correspondence with Holden on this topic.)
The phrase “differences in the reliability and robustness of the estimates” refers to the size of the error bars (whether explicit or implicit) around the initial estimate.
Also, I was not sure whether the post discusses a useful everyday technique when formal updating is unfeasible, or whether it proposes an extension to probability theory valid on the fundamental level. See also cousin_it’s comments.
Here too I’m very sure that the post is discussing a useful everyday technique when formal updating is unfeasible rather than an extension to probability theory valid on a fundamental level.
On first reading I interpreted it as “the correct approach, in contrast to EEV”, but that contradicts your apparent position expressed in the rest of the comment, where you argue that substantial error bars should prevent huge updating. The second interpretation may be “the usual Bayesian updating”, but then the claim is not true, as I argued in #1 (and in fact, I only repeat Holden’s calculations).
Here we had a simple misunderstanding; I meant “updating from the initial (Fermi calculation-based) estimate to a revised estimate after taking into account one’s Bayesian prior” rather than “updating one’s Bayesian prior to a revised Bayesian prior based on the initial (Fermi calculation-based) estimate.”
I was saying “when there are large error bars about the initial estimate, the initial estimate should be revised heavily”, not “when there are large error bars about the initial estimate, one’s Bayesian prior should be revised heavily.” On the contrary, the larger the error bars about the initial estimate, the less one’s Bayesian prior should change based on the estimate.
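To make the direction of the adjustment concrete, here is a minimal numeric sketch, assuming a normal prior and a normally distributed estimate error (the function name and the numbers are made up purely for illustration):

```python
# Illustrative sketch: posterior mean under a normal prior and a noisy
# normal estimate, i.e. a precision-weighted average of the prior mean
# and the initial (Fermi-style) estimate.

def revised_estimate(prior_mean, prior_sd, estimate, error_bar):
    """Revise an initial estimate toward the prior mean."""
    w_prior = 1.0 / prior_sd ** 2   # precision of the prior
    w_est = 1.0 / error_bar ** 2    # precision of the initial estimate
    return (w_prior * prior_mean + w_est * estimate) / (w_prior + w_est)

# Prior: typical effectiveness around 1 (sd 1). Fermi estimate says 100.
print(revised_estimate(1, 1, 100, 1))    # tight error bar: revised to about 50.5
print(revised_estimate(1, 1, 100, 10))   # wide error bar: revised to about 1.98
```

With the wide error bar the revised estimate collapses nearly all the way back to the prior mean: the initial estimate is revised heavily, while the prior itself barely moves.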
I imagine that we’re in agreement here. I think that the article is probably pitched at someone with less technical expertise than you have; what seems obvious and standard to you might be genuinely new to many people, and this may lead you to assume that it’s saying more than it is.
Then I suppose that we don’t have a disagreement either.