Humans are dangerous while they control infrastructure and can create more AGIs.
I agree. But that’s true only for a very short time. I think it is certain that a rapidly self-improving AGI of superhuman intelligence will find a way to free itself from human control within seconds at most. And long before humans even begin to consider switching off the entire Internet, the AGI will no longer depend on human infrastructure.
The AGI competition is a more serious threat. I have no idea what the optimal solution is here, but it may or may not involve killing humans (though not necessarily all humans).
Then there’re the consequences of disassembling Earth (because it’s right here),
I agree, that’s a serious risk. But I’m not sure about the extent of the disassembly. Depending on the AGI’s goals and growth strategy, it could be anything from “build a rocket to reach Jupiter” to “convert the entire Earth into computronium to research FTL travel”.
I agree. But that’s true only for a very short time. I think it is certain that a rapidly self-improving AGI of superhuman intelligence will find a way to free itself from human control within seconds at most. And long before humans even begin to consider switching off the entire Internet, the AGI will no longer depend on human infrastructure.
I think the misconception here is that the AGI has to conceive of humans as an existential threat in order to wipe them out. But why should that be the case? We wipe out lots of species we don’t consider threats at all, merely by clearcutting forests and converting them to farmland. Or by damming rivers for agriculture and hydropower. Or by altering the environment in myriad other ways that make it more convenient for us, but less convenient for other species.
Why do you think an unaligned AGI will leave Earth’s biosphere alone? What if we’re more akin to monarch butterflies than ants?
EDIT: (to address your sloth example specifically)
After this critical period, humanity will be as much a threat to the AGI as a caged mentally-disabled sloth baby is a threat to the US military. The US military is not waging wars against mentally disabled sloth babies.
Sure, humanity isn’t waging some kind of systematic campaign against pygmy three-toed sloths. At least, not from our perspective. But take the perspective of a pygmy three-toed sloth. From the sloth’s perspective, the near-total destruction of its habitat sure looks like a systematic campaign of destruction. Does the sloth really care that we didn’t intend to drive it to extinction while clearing forests for housing, farms and industry?
Similarly, does it really matter that much if the AI is being intentional about destroying humanity?