Your proposals are the kind of strawman utilitarianism that turns out to be both wrong and stupid, for several reasons.
Also, I don’t think you understand what the SIAI argues about what an unFriendly intelligence would do if programmed to maximize, say, the personal wealth of its programmers. Short story, this would be suicide or worse in terms of what the programmers would actually want. The point at which smarter-than-human AI could be successfully abused by a selfish few is after the problem of Friendliness has been solved, rather than before.
I freely admit there are ethical issues with a secret assassination programme. But what’s wrong with lobbying politicians to retard the progress of unFriendly AI projects, regulate AI, etc? You could easily persuade conservatives to pretend to be scared about human-level AI on theological/moral/job-preservation grounds. Why not start shaping the debate and pushing the Overton window now?
I do understand what SIAI argues an unFriendly intelligence would do if programmed to maximize some financial metric. I just don’t believe that a corporation in a position to deploy a super-AI would understand or heed SIAI’s argument. After all, corporations maximise short-term profit against their long-term interests all the time; a topical example is News International.
Ah, another point about maximising. What if the AI uses the CEV of the programmers or the corporation? In other words, what if it’s programmed to maximise their wealth in the way they would actually want? Solving that problem is a subset of Friendliness.
That’s not how the term is used here. Friendliness is prior to and separate from CEV, if I understand it correctly.
From the CEV document:
Suppose we get the funding, find the people, pull together the project, solve the technical problems of AI and Friendly AI, carry out the required teaching, and finally set a superintelligence in motion, all before anyone else throws something together that only does recursive self-improvement. It is still pointless if we do not have a nice task for that optimization process to carry out.