+1 for thinking of unusual solutions. If it’s feasible to build long-term habitats moving very fast relative to Earth without so much AI support that we lose before launch, we should do that for random groups of humans. Whether you call them colonies or backups doesn’t matter. We don’t have to save everyone on Earth, just enough of humanity that we can expand across the universe fast enough to eventually rescue the remaining victims of unaligned AI.
I think an unaligned AI would have a large enough strategic advantage that such an attempt is hopeless without aligned AI. So these backup teams would need to include alignment researchers. But we don’t have enough researchers to crew a bunch of space missions, each of which needs a reasonable chance of solving alignment on its own.