It strikes me that this is the wrong way to look at the issue.
The problem scenario is if someone, anywhere, develops a powerful AGI that isn’t safe for humanity. How do you stop the invention and proliferation of an unsafe technology? Well, you can either try to prevent anybody from building an AI without authorization, or you can try to make your own powerful friendly AGI before anybody else gets unfriendly AGI. The latter has the advantage that you only have to be really good at technology; you don’t have to enforce an unenforceable worldwide law.
Building an AI that doesn’t want to get out of its box doesn’t solve the problem that somewhere, somebody may build an AI that does want to get out of its box.
...and the disadvantage that you are trying to solve a harder problem.
Yudkowsky recently said that his approach was to make incautious projects look stupid:
This seems to be a form of negative marketing.
How do you know it’s harder? The first problem (preventing anyone from building an AI) seems to require nothing short of world conquest (or at least setting up some kind of singleton; nothing weaker than that could hope to effectively enforce such a law), and while neither world conquest nor FAI has ever been achieved, more effort has been put into the former, so I would guess it is the harder one.
What I meant was that the disadvantage of this plan:
try to make your own powerful friendly AGI before anybody else gets unfriendly AGI
...was that the former problem (building friendly AGI) is harder than the latter one (building unfriendly AGI).
A machine with safety features is usually somewhat harder to build than one without—it has more components and complexity.
I was not comparing with the difficulty of building a totalitarian government. I was continuing from the last sentence—with my “...”.
Sorry for misunderstanding you. I agree that making Friendly AI probably is harder than making Unfriendly AI, so if Friendliness is necessary then our only hope is that anyone smart enough to successfully build an AI is also smart enough to see the importance of friendliness.