You should have laid out the basic argument more plainly. As I see it, it goes like this:
Suppose we are spending $3 billion on AI safety. Then, by our revealed preferences, the world is worth at least $3 billion to us, and any intervention with a 1% chance of saving the world is worth at least $30 million; preparing for a global loss of industry is one such intervention. If each additional million spent on AI safety is less important than the last, we should divert additional funding from AI safety to such interventions.
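To make the arithmetic concrete, here is a toy version of that argument in Python. The $3 billion budget and the 1% chance come from the argument above; everything else (the assumed cost of the preparation programme, the value of the first AI-safety million, the 1/k returns curve) is an illustrative assumption of mine, not a figure from the post.

```python
# Toy version of the argument above. Only the $3B budget and the 1% chance
# come from the comment; all other numbers and the 1/k returns curve are
# illustrative assumptions, not estimates from the post.

WORLD_VALUE = 3_000_000_000   # revealed-preference lower bound implied by a $3B AI safety spend
P_SAVE_ALT = 0.01             # chance that industry-loss preparation saves the world
ALT_COST = 10_000_000         # hypothetical cost of the preparation programme

alt_ev = P_SAVE_ALT * WORLD_VALUE
print(f"Industry-loss prep is worth at least ${alt_ev:,.0f} in expectation")  # -> $30,000,000

# Value generated per million dollars spent on the (hypothetically $10M) programme.
alt_value_per_million = alt_ev / (ALT_COST / 1_000_000)

# Assumed diminishing returns: the k-th million spent on AI safety produces
# value proportional to 1/k.
def ai_safety_value_of_kth_million(k, first_million_value=20_000_000):
    return first_million_value / k

# Divert marginal funding once the next AI-safety million is worth less than
# a million spent on industry-loss prep.
k = 1
while ai_safety_value_of_kth_million(k) > alt_value_per_million:
    k += 1
print(f"Under these toy assumptions, divert funding from the {k}th million onward.")
```

The exact crossover point is meaningless, of course; the point is only that a revealed-preference lower bound on the world's value plus diminishing returns on AI safety eventually favors the neglected intervention at the margin.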
I agree that such interventions deserve at least 1% of the AI safety budget. But you have not included the possibility that a global loss of industry might improve far-future potential: AI safety research is much less hurt by a loss of supercomputers than AI capabilities research is, and another thousand years of history as we know it would not impact the cosmic endowment. One intervention that takes this into account would be a time capsule that preserves and hides a supercomputer for a thousand years, in case we lose industry in the meantime but solve AI and AI safety. Then again, we do not want to incentivize some clever consequentialist to set us back to the Renaissance, so let's not do that and instead focus on the case that is not swallowed by model uncertainty.
I like your succinct way of restating the case for spending some money on catastrophes other than AI.

It is possible that a loss of industry could be beneficial in the long term; one can adjust the moral hazard parameter to take this possibility into account. However, losing industry also subjects us to more natural risks, such as supervolcanic eruptions and asteroid or comet impacts. And if we actually lost anthropological civilization, we would not be doing any AI safety work at all. Even just losing industry for a long time would, I think, make most AI safety work infeasible, though I am interested in your thoughts: without industry, we could not afford nearly as many researchers, and they would just be doing math on paper.
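For concreteness, here is roughly the shape of what I mean by adjusting for that possibility; the function, its parameters, and all the numbers below are hypothetical illustrations, not the actual model.

```python
# Hypothetical sketch of a moral-hazard-style adjustment; the structure and all
# numbers are invented for illustration, not taken from the post's model. If a
# loss of industry has some probability q of actually improving the far future,
# the expected far-future harm of the catastrophe shrinks accordingly.

def expected_far_future_loss(p_catastrophe, loss_if_bad, gain_if_good, q_beneficial):
    """Expected net far-future damage from the catastrophe, given a chance it helps."""
    return p_catastrophe * ((1 - q_beneficial) * loss_if_bad - q_beneficial * gain_if_good)

# Invented example: a 10% chance of losing industry, with a 5% chance that the
# setback actually improves our long-term trajectory.
print(expected_far_future_loss(p_catastrophe=0.10,
                               loss_if_bad=1.0,    # normalized far-future value lost
                               gain_if_good=0.2,   # normalized gain if the setback helps
                               q_beneficial=0.05))
```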