Believing that ‘a perfected human civilization spanning hundreds of galaxies’ is a loss condition of AI rather than a win condition is not obviously wrong, but it certainly does not seem obviously right either.
And if you argue that ‘AI is extraordinarily likely to lead to a bad outcome for humans’ while counting ‘hundreds of galaxies of humans’ as a ‘bad outcome’, that seems fairly disingenuous.