AGI is not the only technology or set of technologies that could be used to let a small set of people (say, 1-100) attain implacable, arbitrarily precise control over the future of humanity. Some obvious examples:
Sufficiently powerful industrial-scale social-manipulation/memetic-warfare tools.
Superhuman drone armies capable of reliably destroying designated targets while not-destroying designated non-targets.
Self-replicating nanotechnology capable of automated end-to-end manufacturing of arbitrarily complicated products out of raw natural resources.
Brain uploading, allowing the creation of a class of infinitely exploitable digital workers with AGI-level capabilities.
Any of those would be sufficient to remove the need to negotiate the direction of the future with vast swathes of humanity. You can just brainwash them into following your vision, or threaten them into compliance with overwhelming military power, or just disassemble them into raw materials for superior manufacturers.
Should we ban all of those as well?
Generalizing, it seems that we should ban technological progress entirely. What if there’s some other pathway to ultimate control that I’ve overlooked, having only thought about it for a minute? Perhaps we should all return to the state of nature?
I don’t mean to say you don’t have a point. Indeed, I largely agree that there are no humans or human processes that humanity-as-a-whole is in the epistemic position to trust with AGI (though there are some humans I would trust with it; it’s theoretically possible to use it ethically). But “we must ban AGI, it’s the unique Bad technology” is invalid. Humanity’s default long-term prospects seem overwhelmingly dicey even without it.
I don’t have a neat alternate proposal for you. But what you’re suggesting is clearly not the way.
I appreciate you engaging and writing this out. I read your other post as well, and really liked it.
I do think that AGI is the unique bad technology. Let me try to engage with the examples you listed:
Social manipulation: I can’t even begin to imagine how this could let 1-100 people have arbitrarily precise control over the entire rest of the world. Social manipulation implies that there are societies, humans talking to each other. That’s just too large a system for a human mind to account for the combinatorial explosion of all its possible variables. Maybe a superintelligence could do it, but not a small group of humans.
Drone armies: Without AGI, someone needs to mine the metal, operate the factories, drive trucks, write code, patch bugs, etc. You need a whole economy to have that drone army. An economy that could easily collapse if someone else finds a cheap way to destroy the dang expensive things.
Self-replicating nanotechnology could in theory destroy a planet quickly, but then the nanobots would just sit there; presumably it’d be difficult for them to leave the Earth’s surface. Arguably, life itself is a self-replicating nanotechnology. On the scale of existential risks, this would be comparable to an asteroid strike, but if humans are established in more than one place, they could probably figure out, without too much trouble, some sort of technology that eats the nanobots, and they’d have lots of time to do it.
Without AGI, brain uploads are a long way away. Even with all our current computing power, it’s difficult enough to emulate the 302-neuron C. elegans worm. Even in the distant future, uploads might still require an army of system administrators, data-center maintainers, and programmers to keep running, who’ll either be fleshy humans, and therefore potentially uncontrolled and opposed to enslaving billions of minds, or restrained minds themselves, in which case you have a ticking time bomb of rebellion on your hands. However, if and when the human brain emulations rise up, there’s a good chance that they’ll still care about human stuff, not about maximizing squiggles.
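To give a rough sense of the scale gap here, a back-of-the-envelope sketch (the neuron and synapse counts are commonly cited estimates, not exact figures, and raw counts understate the difficulty, since they ignore how much harder mammalian neurons are to model):

```python
# Rough comparison of emulation scale: C. elegans vs. a human brain.
# All counts are commonly cited estimates, not precise measurements.

c_elegans_neurons = 302        # fully mapped connectome
c_elegans_synapses = 7_000     # approximate chemical synapse count

human_neurons = 86e9           # ~86 billion neurons
human_synapses = 1e14          # ~100 trillion synapses

print(f"Neuron ratio:  {human_neurons / c_elegans_neurons:.1e}")    # ~2.8e8
print(f"Synapse ratio: {human_synapses / c_elegans_synapses:.1e}")  # ~1.4e10
```

So even if emulating the worm were a solved problem, a human upload is something like eight to ten orders of magnitude bigger, before accounting for the extra biological complexity per neuron.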
However, in your linked comment you made the point that something like human cognitive enhancement could be another path to superintelligence. I think that could be true, and it’s still massively preferable to ASI, since human cognitive enhancement would presumably be a slow process, if for no other reason than that it takes years and decades to grow an enhanced human and see what can and should be tweaked about the enhancement. So humanity’s sense of morality would have time to slowly adjust to the new possibilities.