So AI has a good claim to being the most effective lever for reducing x-risks, even the ones that aren’t AI risk.
You’re ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century holds significant x-risks, then focusing all resources on FAI might not be a good idea.
Not to mention that if your P(AI) isn’t close to one, you probably want to be prepared for the scenario in which an AI never materializes.