Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that tries to maximise the personal wealth of the researchers, or the share price of the corporation that employs them, or pursues some other goal that the rest of humanity might not like.
And who’s going to stop that happening?
That is a fairly likely outcome. It would represent business as usual. The entire history of life is one of some creatures profiting at the expense of other ones.
My point, then, is that as well as heroically trying to come up with a theory of Friendly AI, it might be a good idea to heroically stop the deployment of unFriendly AI.
Very large organisations do sometimes attempt to cut off their competitors’ air supply.
They had better make sure they have good secrecy controls if they don’t want it to blow up in their faces.