Yes, that’s probably just the kind of paranoid delusional thinking that a psychopathic superintelligence with no respect for the law would use to justify its murder of academic researchers.
You seem confused (or, perhaps, hysterical). A psychopathic superintelligence would have no need to justify anything it does to anyone.
By including ‘delusional’ you appear to be claiming that an unfriendly superintelligence would not be likely to cause the extinction of humanity. Was that your intent? If so, why do you suggest that the first actions of an FAI would be to kill AI researchers? Do you believe that a superintelligence will disagree with you about whether uFAI is a threat, and that it will be wrong while you are right? That is a bizarre prediction.
and I expect most researchers will reject it, and expend their energies elsewhere—hopefully on more law-abiding projects.
You seem to have a lot of faith in the law. I find this odd. Has it escaped your notice that a GAI is not constrained by country borders? I’m afraid most of the universe, even most of the planet, is out of your jurisdiction.
A powerful corporate agent not bound by the law might well choose to assassinate its potential competitors—if it thought it could get away with it. Its competitors are likely to be among those best placed to prevent it from meeting its goals.
Its competitors don’t have to want to destroy all humankind for it to want to eliminate them! Even the tiniest divergence between its goals and theirs could be enough.
Re: You seem confused (or, perhaps, hysterical).
Uh, thanks :-(