Stop trying to “explain” and start trying to understand, perhaps. It’s not like you’re a teacher and I’m a student, here; we just have a disagreement. Perhaps you are right and I am wrong; perhaps I am right and you are wrong. One thing that seems clear is that you are way too certain about things far outside anything you can empirically observe or mathematically prove, and this certainty does not seem warranted to me.
I guess you’ve heard of Hawking’s cat, right? The question there is “would Hawking, a highly intelligent but physically limited being, be able to get his cat to do something?” The answer is no: intelligence alone is not always enough. You gotta have the ability to control the physical world.
Edit: on reflection, sending me to vaguely-related “sequences” and telling me to start reading, and implying it’s a failure of mine if I don’t agree, really does seem cult-like to me. Nowhere here did you actually present an argument; it’s all just appeals to philosophical musings by the leader, musings you’re unable to even reproduce in your own words. Are you sure you’ve thought about these things and come to your own conclusions, rather than just adopting these ideas due to the force of Eliezer’s certainty? If you have, how come you cannot reproduce the arguments?
IDK, I think it’s reasonable to link short written sources that contain arguments. That’s how you build up knowledge. An answer to “how will the AI get robots to get electricity” is “the way evolution and humans did it, but probably way way faster using all the shortcuts we can see and probably a lot of shortcuts we can’t see, the same way humans take a lot of shortcuts chimps can’t see”.
The AI will need to affect the physical world, which means robots. The AI cannot build robots if the AI first kills all humans. That is my point.
Before the AI kills humans, it will have to get them to build robots. Perhaps that will be easy for it to do (though it will take time, and that time is fundamentally risky for the AI due to the possibility of humans doing something stupid—another AGI, for example, or humans killing themselves too early with conventional weapons or narrow AI). Even if the AGI wins easily, this victory looks like “a few years of high technological development which involves a lot of fancy robots to automate all parts of the economy”, and only THEN can the AGI kill humans.
Saying that the AGI can simply magic its way to victory even if humans are dead (and its stored electricity is dwindling, and it’s stuck with only a handful of drones that need to be manually plugged in by a human) is nonsensical.
In this case the “short written source” did not contain relevant arguments. It was just trying to “wow” me with the power of intelligence. Intelligence can’t solve everything—Hawking cannot get his cat to enter the car, no matter how smart he is.
I actually do think AGI will be able to build robots eventually, and that it has a good chance of killing us all—but I don’t take this to be 100% certain, and I also care about what those worlds look like, because they often involve humans surviving for years after the AGI arrives instead of dying instantly, and in some of them humanity has a chance of surviving.
>Before the AI kills humans, it will have to get them to build robots.
Humanity didn’t need some other species to build robots for it, insofar as it has built robots. Evolution built extremely advanced robots without outside help.
>Stop trying to “explain” and start trying to understand, perhaps.
I understand you completely—you are saying that an AGI can’t kill humans because nobody could generate electricity for it (unless a human programmer freely decides to build a robot body for an AGI he knows to be unfriendly). That’s not right.
>The answer is no
I could do that in Hawking’s place with his physical limitations (through a combination of various kinds of positive and negative incentives), so Hawking, with his superior intelligence, could too. That’s the same point you made before, just phrased differently.
>You gotta have the ability to control the physical world.
Just as Stephen Hawking could control the physical world enough to make physics discoveries (while he was alive, at least), win prizes, and get other people to do various things for him, he could also control it enough to control one cat.
We can make it harder—maybe he can only get his cat to do something by displaying sentences on his screen (which the cat doesn’t understand), by having an Internet connection, and by having access to the parts of the Internet that have a security flaw that allows it (which is almost all of it). In that case, he can still get his cat to do things. (He can write software to translate English into cat sounds/animations the cat can understand, and use his control over the Internet to set up incentives for the cat.)
We can make it even harder—maybe the task is for the wheelchair-less Hawking to kill the cat without anyone noticing he’s unfriendly-to-cats, without anyone knowing it was him, and without him needing to keep another cat or another human around to hunt the mice in his apartment. I’m leaving this one as an exercise for the reader.
>Humanity already had the ability to physically manipulate.
Yes, but none of the other stuff needed for robots. Metals, motors, circuits…
Evolution, the other example I gave, didn’t already have the ability to physically manipulate.