As I said in a comment at the GiveWell blog, a normal prior would assign vanishing probability to the existence of charities even 10x better than the 90th percentile of charitable expenditures (low-value first-world things). Vaccinations appear to do many times better, and with the benefit of hindsight we can point to particular things like smallpox eradication, the Green Revolution, etc. But with a normal prior we would assign ludicrously low probability (less than 10^-100) to these things having been real, too small to outweigh the possibility of hoax or systematic error. As Eliezer said in the previous thread, if a model assigns essentially zero probability to something that actually happens frequently, it’s time to pause and recognize that the model is terribly wrong:
This jumped out instantly when I looked at the charts: Your prior and evidence can’t possibly both be correct at the same time. Everywhere the prior has non-negligible density has negligible likelihood. Everywhere that has substantial likelihood has negligible prior density. If you try multiplying the two together to get a compromise probability estimate instead of saying “I notice that I am confused”, I would hold this up as a pretty strong example of the real sin that I think this post should be arguing against, namely that of trying to use math too blindly without sanity-checking its meaning.
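To make the size of that penalty concrete, here is a minimal sketch of the tail arithmetic. The mean and standard deviation below are purely illustrative assumptions (not GiveWell's fitted values); the point is only that under a normal prior of roughly this shape, a charity 10x better than the 90th percentile sits tens of standard deviations out.

```python
# Tail probability of "10x better than the 90th percentile" under a normal prior.
# mu and sigma are illustrative assumptions, not GiveWell's actual parameters.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 1.0                           # hypothetical prior on cost-effectiveness
p90 = norm.ppf(0.90, loc=mu, scale=sigma)      # 90th percentile, ~2.28 in these units
target = 10 * p90                              # a charity 10x better than that

log10_tail = norm.logsf(target, loc=mu, scale=sigma) / np.log(10)
print(f"90th percentile: {p90:.2f}")
print(f"P(effectiveness >= 10x the 90th percentile) ~ 10^{log10_tail:.0f}")
# With these illustrative numbers the tail probability comes out around 10^-105,
# far smaller than any reasonable probability of hoax or systematic error.
```

Different illustrative parameters shift the exponent, but as long as the prior's standard deviation is on the order of typical expenditures, a 10x outlier remains dozens of standard deviations from the mean and the penalty stays astronomically large.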
In the context of existential risk, Holden has claimed that the expected QALYs from x-risk reduction are low, so that even aggregative utilitarian types would do worse funding x-risk reduction than vaccinations. Given that there are well-understood particular risks and ways of spending on them (and historical examples of actual progress, e.g. tracking 90% of dinosaur-killer asteroids and now near-Earth asteroids (NEAs)), this seems to require near-certainty that humanity will soon go extinct anyway, or fail to colonize space or create large populations, so that astronomical waste considerations don’t loom large.
This gives us a “Charity Doomsday Argument”: if humanity could survive to have a long and prosperous future, then at least some approaches to averting catastrophes would have high returns in QALYs per dollar. But under the normal prior on charity effectiveness, no charity can (with overwhelming probability) have high cost-effectiveness, so humanity is doomed to catastrophe, stagnation, or an otherwise cramped future.
ETA: These problems are less severe with a log-normal prior (the Charity Doomsday Argument still goes through, but the probability penalties for historical interventions, while still rather heavy, are not nearly as extreme), and Holden has mentioned the possibility of using a log-normal prior instead in the previous post.
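As a rough illustration of the difference, here is the same tail calculation under a log-normal prior matched to the same median and 90th percentile as the normal sketch above (again with made-up parameters, not Holden's actual fit).

```python
# Same "10x better than the 90th percentile" tail, but under a log-normal prior.
# The median and 90th percentile are illustrative, matched to the normal sketch above.
import numpy as np
from scipy.stats import norm, lognorm

median, p90 = 1.0, 2.28
s = (np.log(p90) - np.log(median)) / norm.ppf(0.90)   # shape of log(effectiveness)

target = 10 * p90
log10_tail = lognorm.logsf(target, s, scale=median) / np.log(10)
print(f"P(effectiveness >= 10x the 90th percentile) ~ 10^{log10_tail:.1f}")
# Roughly 10^-6 here, versus ~10^-100 under the normal prior: still a stiff
# penalty, but no longer enough to outweigh well-documented evidence.
```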