Totally! That's part of why AI is so dangerous. Notice that I said as long as there's even a very small (but nonzero!) chance of mutation, this will probably tend to happen. But with error-correcting codes, the chance is absolutely zero. And that's terrifying, because it means natural selection cannot resolve our mistake if we let an unaligned super-AI take over the universe. Its subselves will never split into new species, compete, and gradually over aeons become something like us again. (In the sense that any biologically evolved sophonce can be said to be like us, that is.) It'll just… stay the same.
Ironically, the super-AI may encounter its own alignment problem. If you try to roughly model out a world where the speed of light is absolute and ships sent between stars are large investments that burn off all their propellant on arrival, individual stars end up pretty much sovereign. If an AGI node at a particular star uses its discretion to "rebel", there may not be any way for the "central" AGI to reestablish authority.
This is assuming a starship is some enormous vehicle loaded with antimatter, and on arrival it's down to a machine the size of a couple of vending machines: a "seed factory" using nanoassemblers.
And to decelerate it has to emit a flare of gamma rays from antiproton annihilation. (Fusion engines, and basically any engine that can decelerate from more than 1 percent of the speed of light, have to be bright, and the decelerating vehicle will also glow brightly in IR from its radiators.)
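To put very rough numbers on that (my own back-of-the-envelope sketch, not from the thread; the 0.1c cruise speed, 1,000 kg decelerated dry mass, and year-long burn are all illustrative assumptions), here's just the kinetic energy that has to be shed on arrival:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy of mass_kg moving at beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

BETA = 0.10             # cruise speed as a fraction of c (assumed)
DRY_MASS_KG = 1_000.0   # "couple of vending machines" seed factory (assumed)
BURN_SECONDS = 3.156e7  # year-long deceleration burn (assumed)

ke = kinetic_energy(DRY_MASS_KG, BETA)
print(f"kinetic energy to shed: {ke:.2e} J (~{ke / 4.184e15:.0f} Mt of TNT)")
print(f"average power if spread over the burn: {ke / BURN_SECONDS:.2e} W")
# For scale: annihilating this much matter+antimatter releases the same energy.
# The real propellant budget is larger, since most of the released energy goes
# into the exhaust and assorted losses rather than into stopping the ship.
print(f"mass-energy equivalent: {ke / C**2:.1f} kg annihilated")
```

Even for a payload that small, that's on the order of a hundred megatons of energy that has to go somewhere over the burn, which is the sense in which the deceleration can't be hidden.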
This lets the defenders of the star manufacture an overwhelming amount of weaponry to stop the attack. Victory is only possible if the attacker has a large technological advantage, kill codes it can use against the defender, or something similar.
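For a sense of the lead time the defenders get (again, my illustrative numbers: a 10-light-year crossing at 0.1c, with a roughly year-long deceleration burn at the end):

```python
# Rough light-lag arithmetic for the defender's warning time.
# Assumptions (illustrative): 10 ly separation, 0.1c cruise speed,
# deceleration burn lasting about a year just before arrival.

D_LY = 10.0          # distance between the stars, light-years
BETA = 0.10          # cruise speed as a fraction of c
DECEL_YEARS = 1.0    # duration of the deceleration burn

# The departure burn is just as energetic as the arrival one, and its light
# crosses in D_LY years while the ship itself needs D_LY / BETA years.
trip_years = D_LY / BETA
print(f"trip time: {trip_years:.0f} yr, "
      f"warning after spotting the launch burn: {trip_years - D_LY:.0f} yr")

# The deceleration flare alone still gives some warning: the burn starts
# roughly (BETA / 2) * DECEL_YEARS light-years out (average speed BETA / 2),
# and the ship only arrives once the burn finishes.
decel_start_ly = (BETA / 2.0) * DECEL_YEARS
print(f"warning after spotting the deceleration flare: "
      f"{DECEL_YEARS - decel_start_ly:.2f} yr")
```

Under those assumptions that's decades of warning if the launch burn is spotted, and still most of a year from the deceleration flare alone.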
TL;DR: castles separated by light-year-wide moats.
This is why in practice AIs would probably just copy themselves when colonizing other stars and superrationally coordinate with their copies. Even with mutations, they'd generally remain similar enough that bargaining would constantly realign them to one another with no need for warfare, simply because each can always predict the other's actions accurately enough.
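A toy way to see the coordination logic (my illustration, with made-up prisoner's-dilemma payoffs): against an arbitrary opponent, defecting looks safest, but against a faithful copy running the same reasoning you only ever get the matched outcomes, so cooperating wins.

```python
# Hypothetical payoffs: first key is my move, second is the copy's move.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against an unknown opponent, pick the move with the best worst case: defect.
print(max("CD", key=lambda me: min(PAYOFF[me, other] for other in "CD")))  # D

# Against an exact copy, its move mirrors mine, so I'm really choosing between
# the (C, C) and (D, D) outcomes, and cooperating comes out ahead.
print(max("CD", key=lambda me: PAYOFF[me, me]))  # C
```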