This seems like a relatively standard argument, but I also struggle a bit to understand why this is a problem. If the AI is aligned, it will indeed try to spread through the universe as quickly as possible, eliminating all competition, but if it shares our values, that would be good, not bad (and if we value aliens, which I think I do, then we would presumably still somehow trade with them afterwards from a position of security and stability).
An AI with a potentially limitless lifespan will prioritise the future over the present to an extent that would almost certainly be bad for us now. For example, it may seem optimal to kill off all humans whilst keeping a copy of our genetic code, so as to have more compute power and resources available to produce Von Neumann probes and maximise the region of the universe it controls before encountering, and hopefully destroying, any similar alien AI diaspora. Only after some time, once all possible threats had been eliminated, would it start to recreate humans in our new, safe, galactic utopia. The safest time for this would almost certainly be when all other galaxies had red-shifted beyond the future light cone of our local cluster.