> Stop trying to “explain” and start trying to understand, perhaps.
I understand you completely: you are saying that an AGI can’t kill humans, because then there would be nobody left to generate electricity for it (unless a human programmer freely decides to build a robot body for an AGI he knows to be unfriendly). That’s not right.
> The answer is no.
I could do that in Hawking’s place, with his physical limitations, through a combination of various positive and negative incentives; so Hawking, with his superior intelligence, could too. That’s the same point you made before, just phrased differently.
> You gotta have the ability to control the physical world.
Just as Stephen Hawking could control the physical world enough to make physical discoveries (while he was alive, at least), win prizes, and get other people to do various things for him, he could also control it enough to control one cat.
We can make it harder: maybe he can only get his cat to do something by displaying sentences on his screen (which the cat doesn’t understand), by having an Internet connection, and by having access to the parts of the Internet with a security flaw that allows it (which is almost all of it). Even then, he can still get his cat to do things. (He can write software that translates English into cat sounds/animations the cat does understand, and use his control over the Internet to set up incentives for the cat.)
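To make the “translate, then incentivize” step concrete, here is a toy sketch (Python, with made-up sound files and a made-up networked treat feeder; none of the device names or functions are real APIs). The only point it illustrates is that a text-only output channel is enough to drive stimuli a cat actually responds to:

```python
# Toy illustration only: the sounds and the "feeder" are invented for this
# sketch. It shows the shape of the idea: text in, cat-level stimuli out.

CAT_SIGNALS = {
    # English command -> stimuli the cat responds to (hypothetical assets)
    "come here": {"sound": "food_pouring.wav", "reward": "dispense_treat"},
    "go away": {"sound": "vacuum_cleaner.wav", "reward": None},
}

def translate(command: str) -> dict:
    """Map an English command to cat-level stimuli."""
    signal = CAT_SIGNALS.get(command.lower())
    if signal is None:
        raise ValueError(f"no cat-level translation known for {command!r}")
    return signal

def execute(command: str) -> None:
    """Print what a real system would do via networked speakers/feeders."""
    signal = translate(command)
    print(f"[speaker] play {signal['sound']}")
    if signal["reward"] is not None:
        print(f"[feeder]  {signal['reward']} if the cat complies")

if __name__ == "__main__":
    execute("come here")  # -> plays food sounds, rewards compliance
```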
We can make it even harder: maybe the task is for the wheelchair-less Hawking to kill the cat without anyone noticing he’s unfriendly-to-cats, without anyone knowing it was him, and without him needing to keep another cat or another human around to hunt the mice in his apartment. I’m leaving this one as an exercise for the reader.