Once AI is developed, it could “easily” colonise the universe.
I was wondering about that. I agree with the “could”, but is there any discussion of how likely it is that it would actually decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with a psychology similar to ours, other contenders might be mistakes made while trying to create military AI, or intentional creation by “destroy the world” extremists or something.)
But if someone is trying to create an FAI, and there is an accident with early prototypes, it seems likely that most of those prototypes would be programmed with only planet-local goals. Similarly, it doesn’t seem likely that intentionally-created weapon-AI would be programmed to care about what happens outside the solar system, unless it’s created by a civilization that already practices, or is at least attempting, interstellar travel. Creators that care about safety will probably try to limit the focus, even imperfectly, both to make reasoning easier and to limit damage, and weapons manufacturers will try to limit the focus for efficiency.
Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program it for that initially, and that simple goals can have that as an unforeseen consequence (like the prototypical paperclip manufacturer). But is there any discussion of how likely that is in a realistic setting? Perhaps the filter is that the vast majority of AIs limit themselves to their original solar system.
Energy acquisition is a useful subgoal for nearly any final goal and has non-starsystem-local scope. This makes strong AIs which stay local implausible.
Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.
If the final goal is of local scope, energy acquisition from out-of-system seems mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate on. (That is, assuming no FTL and the like; some rough numbers are sketched after this comment.)
Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but would do so only because it needs energy for something inside its system of origin?
I might see one extending to a few neighboring systems in a very dense cluster for some strange reason, but I can’t imagine likely final goals (again, local to its birth star-system) for which it would need to spend hundreds of millennia taking over even a single galaxy, let alone leaving it. (Which is of course no proof there aren’t any; my question above wasn’t rhetorical.)
I can imagine unlikely accidents causing some sort of paperclipper scenario, and maybe vanishingly rare cases where two or more AIs manage to fight each other over long periods of time, but it’s not obvious to me why this class of scenarios should be assigned a lot of probability mass in aggregate.
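A rough back-of-the-envelope check on the delay point above (illustrative numbers, not from the original discussion): the nearest star, Proxima Centauri, is about 4.24 light-years away and the Milky Way is roughly 100,000 light-years across, so even for probes travelling at an optimistic 0.1 c,

$$ t_{\text{round trip, nearest star}} \;\gtrsim\; \frac{2 \times 4.24\ \text{ly}}{0.1\,c} \;\approx\; 85\ \text{years}, \qquad t_{\text{cross galaxy}} \;\gtrsim\; \frac{10^{5}\ \text{ly}}{0.1\,c} \;=\; 10^{6}\ \text{years}. $$

Any energy shipped back from even the nearest system arrives the better part of a century after it was requested, and galaxy-scale expansion is a project of at least hundreds of millennia, which is glacial compared with the time-scales a strong AI presumably plans and acts on, assuming no FTL.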
Any unbounded goal in the vein of ‘Maximize the concentration of [some substance] in this area’ has local scope but can require unbounded expenditure.
Also, as has been pointed out for general satisficing goals (which most naturally local-scale goals will be): acquiring more resources lets you do the thing more thoroughly, maximizing the chances that you have properly satisfied your goal. Even if the target is easy to hit, becoming increasingly certain that you’ve hit it can use arbitrary amounts of resources.
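To make the satisficing point slightly more concrete, here is a toy model (an illustration, not something from the original comment): suppose each independent check or redundant implementation of the goal fails with probability $p$, so that after $n$ of them the chance of not having properly satisfied the goal is $p^{n}$. Pushing that residual doubt below a tolerance $\delta$ requires

$$ p^{n} \le \delta \quad\Longleftrightarrow\quad n \;\ge\; \frac{\ln(1/\delta)}{\ln(1/p)}, $$

which grows without bound as $\delta \to 0$: even an easy, purely local target can absorb arbitrarily many resources if the agent keeps wanting to be a little more certain it has hit it.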
Both good points, thank you.
Alternatively, a weapon-AI builds a Dyson sphere, preventing any light from the star from escaping and eliminating the risk that a more advanced outside AI (which it can reason about much better than we can) destroys it.
Or a poor planet-local AI does the same thing.