I think this claim is true, on account of gray goo and lots of other things, and I suspect Eliezer does too, though I'm pretty sure other people disagree with it.
If you have robust alignment, or AIs that are rapidly bootstrapping their level of alignment fast enough to outpace the danger of increased capabilities, aligned AGI could get through its intelligence explosion and come out with radically superior technology and capabilities. It could also get a head start on superexponential replication in space, so that no follower could ever catch up, and enough tech and military hardware to neutralize any attacks on it (and block attacks on humans via nukes, bioweapons, robots, nanotech, etc.). That wouldn't work if there are things like vacuum collapse available to attackers, but we don't have much reason to expect that from current science, and the leading aligned AGI would find out first.
That could be done without any violation of the territory of other sovereign states. The legality of grabbing space resources is questionable in light of the Outer Space Treaty, but commercial exploitation of asteroids is in the Overton window. The superhuman AGI would also be in a good position to persuade and trade with any other AGI developers.
Again I think this claim is true, and I suspect Eliezer does too. In fact, my guess is that there are already single GPU chips with enough FLOP/s to run human-level, human-speed AGI, or at least in that ballpark.
An A100 may have humanlike FLOP/s but has only 80 GB of memory, probably orders of magnitude less memory per operation than brains. Stringing together a bunch of them makes it possible to split up human-size models and run them faster/in parallel on big batches using the extra operations.
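For concreteness, here's a rough back-of-the-envelope version of that comparison; the brain-side figures (FLOP/s, synapse count, bytes per synapse) are contestable order-of-magnitude assumptions, not established numbers:

```python
# Back-of-the-envelope comparison of an A100 to common brain estimates.
# All brain-side numbers are rough order-of-magnitude assumptions.
a100_flops = 312e12        # A100 dense fp16/bf16 tensor peak, FLOP/s
a100_mem_bytes = 80e9      # 80 GB of HBM

brain_flops = 1e15         # assumed brain compute, FLOP/s
brain_synapses = 1e14      # assumed synapse count
bytes_per_synapse = 10     # assumed bytes of state per synapse
brain_mem_bytes = brain_synapses * bytes_per_synapse   # ~1e15 bytes (~1 PB)

print(f"compute ratio (brain / A100): {brain_flops / a100_flops:.1f}x")
print(f"memory ratio  (brain / A100): {brain_mem_bytes / a100_mem_bytes:.0e}x")
# Under these assumptions the chip is within roughly an order of magnitude on
# FLOP/s but about four orders of magnitude short on memory, which is the
# "memory per operation" gap described above.
```

Shifting the brain-side assumptions by an order of magnitude in either direction doesn't change the qualitative picture: per chip, memory rather than FLOP/s looks like the binding constraint.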
A bit pedantic, but isn’t superexponential replication too fast? Won’t it hit physical limits eventually, e.g. expanding at the speed of light in each direction, so at most a cubic function of time?
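To spell out the cubic bound: idealizing the expansion frontier as a sphere growing at some speed $v \le c$, the volume swept out is

$$V(t) = \tfrac{4}{3}\pi (vt)^3 \le \tfrac{4}{3}\pi (ct)^3,$$

so once you're speed-limited, resource acquisition can grow at most like $t^3$.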
Also, never allowing followers to catch up means abandoning at least some, and possibly almost all, of the space you've passed through. Plausibly you could take most of the accessible and useful resources with you, which would also make it harder for pursuers to ever catch up, since they will plausibly need to extract resources every now and then to fuel further travel. On the other hand, it seems unlikely to me that we could extract or destroy resources quickly enough to leave nothing behind for pursuers, if they're at most months behind.
Naturally it doesn’t go on forever, but any situation where you’re developing technologies that move you to successively faster exponential trajectories is superexponential overall for some range. E.g. if you have robot factories that can reproduce exponentially until they’ve filled much of the Earth or solar system, and they are also developing faster reproducing factories, the overall process is superexponential. So is the history of human economic growth, and the improvement from an AI intelligence explosion.
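A toy sketch of that shape (every number here is made up purely to illustrate the dynamic, not to forecast anything): capacity grows exponentially, but the growth rate itself keeps improving, so the overall curve is superexponential until a hard resource cap binds.

```python
import math

# Toy model: exponential growth whose rate improves each step ("factories
# developing faster-reproducing factories"), capped by a finite resource pool.
def simulate(steps=60, cap=1e9):
    capacity, rate, history = 1.0, 0.1, []
    for _ in range(steps):
        capacity = min(capacity * (1 + rate), cap)  # grow, but never past the cap
        rate *= 1.05                                # the growth rate itself improves
        history.append(capacity)
    return history

hist = simulate()
# Doublings per step rise over time (i.e. the doubling time shrinks), so growth
# is superexponential, then drop to zero once the cap binds.
doublings = [math.log2(b / a) for a, b in zip(hist, hist[1:])]
print([round(d, 2) for d in doublings[:5]], "...", [round(d, 2) for d in doublings[-5:]])
```

The point is just the shape: a process like this outruns any fixed exponential over the relevant range, and only later settles into whatever physical ceiling (e.g. ~cubic, light-limited expansion) applies.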
By the time you're at ~cubic expansion, the head start built up during the early superexponential phase means the followers have missed their chance.
I agree that they probably would have missed their chance to catch up with the frontier of your expansion.
Maybe an electromagnetic radiation-based assault could reach you if targeted (the speed of light is constant relative to you in a vacuum, even if you’re traveling in the same direction), although unlikely to get much of the frontier of your expansion, and there are plausibly effective defenses, too.
Do you also mean they wouldn't be able to take most of what you've passed through, though? Or that it wouldn't matter? If so, how would this be guaranteed (without any violation of the territory of sovereign states on Earth)? Exhaustive extraction in space? An advantage in armed space conflicts?