I disagree with this line of argument. It’s true that moderately lower-probability scenarios deserve extra attention if they’re higher-stakes, but if the full range of scenarios in which early AGI systems realize large capability gains came to only a 1% probability, then they would deserve little attention. Practically speaking, there are just too many other urgent medium- and high-probability scenarios to worry about for us to have the resources to devote to every 1%-probability futurist scenario.
If there are more than a few independent short-term extinction scenarios (from any cause) with a probability higher than 1%, then we are in trouble—their combined probability would add up to a significant probability of doom.
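To make the arithmetic concrete: if risks like these are roughly independent, the probability that at least one of them occurs is one minus the product of the probabilities that each does not. Here is a minimal sketch, with a hypothetical scenario count and per-scenario probability chosen purely for illustration:

```python
# Rough illustration of how independent low-probability risks compound.
# The scenario count and per-scenario probability below are hypothetical.

def combined_risk(probabilities):
    """P(at least one scenario occurs), assuming the scenarios are independent."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Ten independent scenarios, each with a 2% probability:
print(combined_risk([0.02] * 10))  # ~0.18, i.e. roughly an 18% combined risk
```

Even at just 2% each, ten such independent scenarios already imply roughly an 18% chance that at least one of them comes to pass.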
As far as resources go, even if we threw 100 times the current budget of MIRI at the problem, that would be $175 million, which is:
- 0.005% of the U.S. federal budget,
- 54 cents per person living in the U.S., or
- 2 cents per human being.
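For readers who want to check those figures, here is a back-of-the-envelope sketch; the denominators (federal budget and population sizes) are approximate circa-2017 values I am assuming, not numbers stated above:

```python
# Back-of-the-envelope check of the $175 million comparisons.
# Denominators are approximate circa-2017 values (assumptions, not from the post).
budget = 175e6                 # 100x a ~$1.75M annual budget
us_federal_budget = 3.5e12     # ~$3.5 trillion
us_population = 325e6
world_population = 7.5e9

print(f"{budget / us_federal_budget:.3%} of the U.S. federal budget")  # ~0.005%
print(f"{budget / us_population * 100:.0f} cents per U.S. resident")   # ~54 cents
print(f"{budget / world_population * 100:.0f} cents per human being")  # ~2 cents
```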