...and the disadvantage that you are trying to solve a harder problem.
How do you know it's harder? The first problem (preventing anyone from building an AI) seems to require nothing short of world conquest (or at least setting up some kind of singleton; nothing weaker than that could hope to effectively enforce such a law), and while neither world conquest nor FAI has ever been achieved, more effort has been put into the former, so I would guess it is harder.
What I meant was that the disadvantage of this plan:
...was that the former problem is harder than the latter one.
A machine with safety features is usually somewhat harder to build than one without—it has more components and complexity.
I was not comparing it with the difficulty of building a totalitarian government. I was continuing from the last sentence, with my "...".
Sorry for misunderstanding you. I agree that making Friendly AI is probably harder than making Unfriendly AI, so if Friendliness is necessary then our only hope is that anyone smart enough to successfully build an AI is also smart enough to see the importance of Friendliness.