There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven’t thought about.
A somewhat important correction:
To my knowledge, SIAI does not actually endorse neglecting all potential x-risks besides UFAI. (Analysis might recommend deprioritizing fighting them head-on, but that analysis should still be done when resources are available.)