Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.
To clarify what I said in those comments:
Holden had a few posts that 1) made the standard point that one should use both prior and evidence to generate one’s posterior estimate of a quantity like charity effectiveness, and 2) used example prior distributions that assigned vanishingly low probability to outcomes far from the median, while disclaiming that those particular distributions were essential to the argument.
I naturally agree with 1), but took issue with 2). A normal distribution for charity effectiveness is devastatingly falsified by the historical data, and even a log-normal distribution has wacky implications, like ruling out long-term human survival a priori. So I think a reasonable prior distribution will have a fatter tail. I think it’s problematic to use false examples, lest they get lodged in memory without metadata, especially when they might receive some halo effect from 1).
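To make the tail issue concrete, here is a minimal sketch of my own (the sigma and degrees-of-freedom values are arbitrary illustrations, not anything from Holden’s posts) comparing how much probability a log-normal prior and a fatter-tailed prior place on outcomes far above the median cost-effectiveness:

```python
# Illustrative only: a "thin-tailed" log-normal prior vs. a fat-tailed
# Student-t prior over charity cost-effectiveness, both on a log10 scale
# where 0 is the median. The sigma and df values are arbitrary assumptions.
from scipy import stats

lognormal_prior = stats.norm(loc=0, scale=0.5)      # normal on log10 scale = log-normal
fat_tailed_prior = stats.t(df=2, loc=0, scale=0.5)  # same scale, much fatter tails

for exponent in (1, 2, 3, 4):  # 10x, 100x, 1,000x, 10,000x the median
    p_thin = lognormal_prior.sf(exponent)  # P(effectiveness > 10**exponent * median)
    p_fat = fat_tailed_prior.sf(exponent)
    print(f"> {10**exponent:,}x median: log-normal {p_thin:.1e}, Student-t {p_fat:.1e}")
```

Under the thin-tailed prior, an intervention 10,000 times as effective as the median gets probability on the order of 10^-16; the fat-tailed prior still leaves it close to 1%, which is the difference that matters for cases like smallpox eradication.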
I said that this methodology and the example priors would have more or less ruled out big historical successes, not that GiveWell would not have endorsed smallpox eradication. Indeed, with smallpox I was trying to point out something that Holden would consider a problematic implication of a thin-tailed prior. With respect to existential risks, I likewise said that I thought Holden assigned a higher prior to x-risk interventions than could be reconciled with a log-normal prior, since he could be convinced by sufficient evidence (like living to see humanity colonize the galaxy, and witnessing other civilizations that perished). These were criticisms that those priors were too narrow even for Holden, not that GiveWell would use those specific wacky priors.
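As a rough back-of-the-envelope illustration (the prior probabilities below are placeholders, not figures anyone has endorsed): by Bayes’ rule on odds, the likelihood ratio needed to rescue an outcome grows in direct proportion to the prior odds against it, so a prior that effectively zeroes out an outcome is hard to square with being persuadable by any realistic evidence.

```python
# Hedged illustration: the likelihood ratio (Bayes factor) required to move a
# tiny prior probability up to a 50% posterior. Prior values are placeholders.
def required_bayes_factor(prior_p, posterior_p):
    """Bayes' rule on odds: posterior odds = prior odds * Bayes factor."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = posterior_p / (1 - posterior_p)
    return posterior_odds / prior_odds

for prior_p in (1e-10, 1e-20, 1e-40):
    bf = required_bayes_factor(prior_p, 0.5)
    print(f"prior {prior_p:.0e} -> 50% posterior needs a Bayes factor of ~{bf:.0e}")
```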
Separately, I do think Holden’s actual intuitions are too conservative, e.g. in assigning overly low probability to eventual large-scale space colonization and large populations, and in giving too much weight to a feeling of absurdity. So I would like readers to distinguish between the use of priors in general and Holden’s specific intuition that big payoffs from x-risk reduction (and AI risk specifically) face a massive prior absurdity penalty: the key work against x-risk interventions is done by the latter, which readers may not share.