If there is an AI much smarter than us, then it is almost certainly better at finding ways to render the off switch useless than we are at making sure that it works.
For example, by secretly having alternative computing facilities elsewhere without any obvious off switch (distributed computing, datacentres that appear to be doing other things, alternative computing methods based on technology we don’t know about). Maybe by acting in a way that we won’t want to turn it off, such as by obviously doing everything we want while being very good at keeping the other things it’s doing secret until it’s too late. In the obvious literal sense of an “off switch”, maybe by inducing an employee to replace the switch with a dummy.
We don’t know and in many ways can’t know, because (at some point) it will be better than us at coming up with ideas.
If the AI is a commercial service like Google search or Wikipedia, that is so embedded into society that we have come to depend on it, or if the AI is seen as national security priority, do you really think we will turn it off?
Even worse, in the future the AI may handle traffic or factories or hospitals, so if you turn it off, the economy will immediately collapse and/or people will die.
Why doesn’t an “off switch” protect us?
If there is an AI much smarter than us, then it is almost certainly better at finding ways to render the off switch useless than we are at making sure that it works.
For example, by secretly having alternative computing facilities elsewhere without any obvious off switch (distributed computing, datacentres that appear to be doing other things, alternative computing methods based on technology we don’t know about). Maybe by acting in a way that we won’t want to turn it off, such as by obviously doing everything we want while being very good at keeping the other things it’s doing secret until it’s too late. In the obvious literal sense of an “off switch”, maybe by inducing an employee to replace the switch with a dummy.
We don’t know and in many ways can’t know, because (at some point) it will be better than us at coming up with ideas.
If the AI is a commercial service like Google search or Wikipedia, that is so embedded into society that we have come to depend on it, or if the AI is seen as national security priority, do you really think we will turn it off?
Even worse, in the future the AI may handle traffic or factories or hospitals, so if you turn it off, the economy will immediately collapse and/or people will die.
We have no idea how to make a useful, agent-like general AI that wouldn’t want to disable its off switch or otherwise prevent people from using it.
We don’t have to tell it about the off switch!
That’s Security Through Obscurity. Also, even if we decided we’re suddenly ok with that, it obviously doesn’t scale well to superhuman agents.