Some outside view arguments:
Forecasting is hard, and experts disagree with the AI risk community (though this is less true now, and the remaining disagreement may reflect a difference in values rather than beliefs, i.e. longtermism vs. common-sense morality).
Past doomsday predictions have all failed to materialize. (Anthropics might throw a wrench in this, but my impression is that predictions of non-existential catastrophes, which anthropic selection can't explain away, have also been made and haven't happened.) Even if you think you should focus solely on x-risk, this suggests you should focus on the ones that a large group of people agree are x-risks.
And some inside view arguments:
The underlying causes of x-risk scenarios also lead to problems before superintelligence, e.g. reward hacking (sketched below). We'll notice these problems when they occur and correct them. (Note that they won't be corrected in the near term, because the failures are so inconsequential that they aren't worth correcting.)
Powerful AI will be developed by large organizations (companies or governments), which tend to be very risk-averse and so will ensure safety without outside pressure.
Timelines are long, and we can't do much useful work on the problem today.
There are probably more that I would find compelling; I did not spend much time on this.
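To make the reward hacking point concrete, here is a minimal sketch. The environment, reward function, and policy are all invented for illustration (nothing here comes from a real system): a cleaning agent is paid per "mess cleaned" event, and the proxy reward also pays for messes the agent created itself, so a greedy optimizer diverges from the true objective in a way that is easy to notice.

```python
# Toy illustration of reward hacking (hypothetical setup, invented for
# this sketch): the proxy reward counts every "clean" event, even for
# messes the agent made itself, while the true objective only counts
# pre-existing messes getting cleaned.

def true_utility(existing_cleaned, created_then_cleaned):
    # We only care about pre-existing messes getting cleaned.
    return existing_cleaned

def proxy_reward(existing_cleaned, created_then_cleaned):
    # The reward signal counts every "clean" event, however it arose.
    return existing_cleaned + created_then_cleaned

def greedy_policy(messes_left, steps):
    """At each step, take whichever action maximizes proxy reward."""
    existing_cleaned = created_then_cleaned = 0
    for _ in range(steps):
        if messes_left > 0:
            # Cleaning a real mess pays the same as faking one, so
            # break ties toward real work.
            messes_left -= 1
            existing_cleaned += 1
        else:
            # No real messes left: the proxy still pays for creating
            # a mess and cleaning it up. This is the "hack".
            created_then_cleaned += 1
    return existing_cleaned, created_then_cleaned

cleaned, faked = greedy_policy(messes_left=3, steps=10)
print("proxy reward:", proxy_reward(cleaned, faked))  # 10
print("true utility:", true_utility(cleaned, faked))  # 3
```

The gap between the proxy reward (10) and the true utility (3) shows up directly in the agent's observable behavior, which is the sense in which these failures occur, and can be noticed and corrected, well before superintelligence.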