I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.
I never said that.
Yes, I was agreeing with what I inferred your attitude to be rather than agreeing with something that you said. (I apologize if I distorted your views—if you’d like I can edit my comment to remove the suggestion that you hold the position that I attributed to you.)
I don’t believe that we “should focus all of our resources” on FAI, as there are many other worthy activities to focus on. The argument is that this particular problem gets disproportionately little attention, and that while with other risks we can in principle luck out even if they get no attention, it isn’t so for AI. Failing to take FAI seriously is fatal; failing to take nanotech seriously isn’t necessarily fatal.
Thus, although strictly speaking I agree with your implication, I don’t find its condition plausible, and so I don’t find the implication as a whole relevant.