IDK, I think it’s reasonable to link short written sources that contain arguments. That’s how you build up knowledge. An answer to “how will the AI get robots to get electricity” is “the way evolution and humans did it, but probably way way faster using all the shortcuts we can see and probably a lot of shortcuts we can’t see, the same way humans take a lot of shortcuts chimps can’t see”.
The AI will need to affect the physical world, which means robots. The AI cannot build robots if the AI first kills all humans. That is my point.
Before the AI kills humans, it will have to get them to build robots. Perhaps that will be easy for it to do (though it will take time, and that time is fundamentally risky for the AI, because humans might do something stupid in the meantime: build another AGI, for example, or kill themselves too early with conventional weapons or narrow AI). Even if the AGI wins easily, this victory looks like “a few years of rapid technological development involving a lot of fancy robots to automate all parts of the economy”, and only THEN can the AGI kill humans.
Saying that the AGI can simply magic its way to victory even after humans are dead (with its stored electricity dwindling and only a handful of drones left that need to be manually plugged in by a human) is nonsensical.
In this case the “short written source” did not contain relevant arguments. It was just trying to “wow” me with the power of intelligence. Intelligence can’t solve everything—Hawking cannot get his cat to enter the car, no matter how smart he is.
I actually do think AGI will be able to build robots eventually, and that it has a good chance of killing us all. But I don’t take this to be 100% certain, and I also care about what those worlds look like: they often involve humans surviving for years after the AGI arrives instead of dying instantly, and in some of them humanity has a chance of surviving.
>Before the AI kills humans, it will have to get them to build robots.
Humanity didn’t need some other species to build robots for it, insofar as it has built robots. Evolution built extremely advanced robots without outside help.
Humanity already had the ability to physically manipulate the world.
Yes, but none of the other stuff needed for robots. Metals, motors, circuits…
Evolution, the other example I gave, didn’t already have the ability to physically manipulate the world.