Nobody’s presuming it has a ‘personal agenda’. It’s quite possible for it to think that it’s just following our orders, when in fact it’s become highly dangerous (see: paperclip maximizer). Come to think of it, this describes a lot of human history quite well.
I agree with the broader argument that paranoia won’t solve anything. We should view the AI—no matter how complicated—as something that is just following a program (exactly like humans). Everything it does should be judged in the context of that program.
Who decides what that program is? What courses of action should it take? Should that be a democratic process? Under the current system there would be no oversight in this area.
The person who creates it.
And that doesn’t fill you with fear?