A paperclip maximizer might keep humans around for a while (because, as of right now, we’re the only beings around that make paperclips) but yeah, if it had enough power (magic nanotechnology, etc.), we’d most likely be gone.
It seems to me that the simplest way to solve friendliness is: “Ok AI, I’m friendly, so do what I tell you to do and confirm with me before taking any action.” It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse ‘friendliness’ into the AI.
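For concreteness, here is a toy sketch of the kind of “obey direct commands, and confirm before acting” loop I have in mind. Everything in it (`Agent`, `propose_actions`, `execute`) is a made-up placeholder, not a claim about how a real AI would be built; the only point is the control flow, where nothing runs without explicit operator approval.

```python
# Toy sketch of a command-following goal system with confirmation.
# Agent, propose_actions, and execute are hypothetical placeholders.

def confirm_with_operator(action: str) -> bool:
    """Ask the human operator to approve a single proposed action."""
    answer = input(f"The AI proposes: {action!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_command_loop(agent) -> None:
    while True:
        command = input("Operator command (or 'quit'): ")
        if command == "quit":
            break
        # The agent only *proposes* actions in response to a direct command...
        for action in agent.propose_actions(command):
            # ...and nothing is executed without the operator's confirmation.
            if confirm_with_operator(action):
                agent.execute(action)
```

Of course, this just pushes the problem into what `propose_actions` and `execute` actually do, and into whether the operator can evaluate the proposals, which is where the reply below comes in.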
This has been addressed before. Basically, you’ll get what you asked for, but it probably won’t be what you really want.