The Economics of p(doom)

Transformative AI poses an imminent existential risk to humanity, and yet we’re spending close to zero dollars on mitigating this risk. This feels wrong, but by how much?

In a recent paper, “The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI”, Klaus Prettner and I organize a variety of scenarios for a future with transformative AI (TAI) and evaluate their associated existential risks and economic outcomes in terms of aggregate welfare.

Our analysis shows that even low-probability catastrophic outcomes justify large investments in AI safety and alignment research. We find that an optimizing representative agent would rationally allocate substantial (indeed, massive) resources to mitigating extinction risk; in some scenarios, she would prefer not to develop TAI at all.
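To see the intuition behind this result, consider a toy expected-welfare calculation. This is not the model from the paper; it is a minimal sketch under assumed, hypothetical parameter values (a 10% baseline extinction probability, CRRA utility with risk aversion of 2, and a safety technology that can at most halve baseline risk). With CRRA risk aversion above 1, utility is bounded above, so the catastrophic branch dominates the calculation and the welfare-maximizing share of resources devoted to safety turns out to be large:

```python
def crra_utility(c, sigma=2.0):
    """CRRA utility. With sigma = 2 this is 1 - 1/c: bounded above,
    steeply negative as consumption collapses."""
    return (c ** (1 - sigma) - 1) / (1 - sigma)

def expected_welfare(p_doom, c_tai, c_doom, safety_share=0.0, risk_reduction=0.5):
    """Expected welfare when a share of consumption is spent on safety.
    Safety spending proportionally reduces extinction risk (by at most
    `risk_reduction` of the baseline). All parameter values are illustrative."""
    p = p_doom * (1 - risk_reduction * safety_share)  # residual extinction risk
    c = c_tai * (1 - safety_share)                    # consumption net of safety spending
    return (1 - p) * crra_utility(c) + p * crra_utility(c_doom)

# Hypothetical numbers: TAI boosts consumption to 10, catastrophe crushes it to 0.01.
baseline = expected_welfare(p_doom=0.1, c_tai=10.0, c_doom=0.01)

# Grid-search the safety share between 0% and 90% of output.
best = max(
    (expected_welfare(0.1, 10.0, 0.01, safety_share=s), s)
    for s in [i / 100 for i in range(0, 91)]
)
print(f"optimal safety share: {best[1]:.2f}, welfare gain: {best[0] - baseline:.2f}")
```

In this sketch the optimal safety share exceeds half of output: giving up most consumption is worth it because the downside branch is so catastrophic. The paper's analysis is far richer (growth dynamics, multiple scenarios), but the same bounded-utility logic drives its conclusion.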
