I just briefly thought you could put a bunch of AI researchers on a spaceship, and accelerate it real fast, and then they get time dilation effects that increase their effective rate of research.
Then I remembered that time dilation works the other way ’round – they’d get even less time.
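(To make the "other way 'round" concrete, here's a minimal sketch of the standard special-relativistic calculation; the 0.99c cruise speed and 10-year interval are just illustrative numbers, not anything from the original comment.)

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Illustrative: a ship cruising at 0.99c while 10 years pass on Earth.
v = 0.99 * C
earth_years = 10.0
gamma = lorentz_gamma(v)

# Proper time aboard the ship is Earth time divided by gamma,
# i.e. the crew experiences LESS time, not more -- which is why
# the "researchers on a fast ship" plan backfires.
ship_years = earth_years / gamma
print(f"gamma = {gamma:.2f}")                       # ~7.09
print(f"crew experiences {ship_years:.2f} years")   # ~1.41
```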
This suggested a much less promising plan of “build narrowly aligned STEM AI, have it figure out how to efficiently accelerate the Earth real fast and… leave behind a teeny moon base of AI researchers who figure out the alignment problem.”
More or less the plot of https://en.wikipedia.org/wiki/Orthogonal_(series) incidentally.
+1 for thinking of unusual solutions. If it's feasible to build long-term very-fast-relative-to-Earth habitats without requiring so much AI support that we lose before they launch, we should do that for random groups of humans. Whether you call them colonies or backups doesn't matter. We don't have to save everyone on Earth, just enough of humanity that we can expand across the universe fast enough to eventually rescue the remaining victims of unaligned AI.
I think an unaligned AI would have a large enough strategic advantage that such an attempt is hopeless without aligned AI. So these backup teams would need to include alignment researchers. But we don't have enough researchers to crew a bunch of space missions, each of which would need a reasonable chance of solving alignment.