An intelligent machine might make one of its first acts the assassination of other machine intelligence researchers—unless it is explicitly told not to do that. I figure we are going to want machines that will obey the law. That should be part of any sensible machine morality proposal.
I absolutely do not want my FAI to be constrained by the law. If the FAI allows machine intelligence researchers to create an uFAI, we will all die. An AI that values the law above the existence of me and my species is evil, not Friendly. I wouldn’t want the FAI to kill such researchers unless it was unable to find a more appealing way to ensure future safety, but I wouldn’t dream of constraining it to either laws or politics. But come to think of it, I don’t want it to be sensible either.
The Three Laws of Robotics may be a naive conception, but that Zeroth Law was a step in the right direction.
Re: If the FAI allows machine intelligence researchers to create an uFAI we will all die
Yes, that’s probably just the kind of paranoid delusional thinking that a psychopathic superintelligence with no respect for the law would use to justify its murder of academic researchers.
Hopefully, we won’t let it get that far. Constructing an autonomous tool that will kill people is conspiracy to murder—so hopefully the legal system will allow us to lock up researchers who lack respect for the law before they do some real damage.
Assassinating your competitors is not an acceptable business practice.
Hopefully, the researchers will learn the error of their ways before then. The first big and successful machine intelligence project may well be a collaboration. “Help build my tool, or be killed by it” is a rather aggressive proposition—and I expect most researchers will reject it, and expend their energies elsewhere—hopefully on more law-abiding projects.
Yes, that’s probably just the kind of paranoid delusional thinking that a psychopathic superintelligence with no respect for the law would use to justify its murder of academic researchers.
You seem confused (or, perhaps, hysterical). A psychopathic superintelligence would have no need to justify anything it does to anyone.
By including ‘delusional’ you appear to be claiming that an unfriendly superintelligence would be unlikely to cause the extinction of humanity. Was that your intent? If so, why do you suggest that the first actions of an FAI would be to kill AI researchers? Do you believe that a superintelligence will disagree with you about whether uFAI is a threat, and that it will be wrong while you are right? That is a bizarre prediction.
and I expect most researchers will reject it, and expend their energies elsewhere—hopefully on more law-abiding projects.
You seem to have a lot of faith in the law. I find this odd. Has it escaped your notice that a GAI is not constrained by country borders? I’m afraid most of the universe, even most of the planet, is out of your jurisdiction.
A powerful corporate agent not bound by the law might well choose to assassinate its potential competitors—if it thought it could get away with it. Its competitors are likely to be among those best placed to prevent it from meeting its goals.
Its competitors don’t have to want to destroy all humankind for it to want to eliminate them! The tiniest divergence between its goals and theirs could potentially be enough.
It is a misconception to think of the law as a set of rules, and an even bigger one to think of it as a set of rules that apply to non-humans today. In addition, rules won’t be very effective constraints on superintelligences.
Re: You seem confused (or, perhaps, hysterical).
Uh, thanks :-(