Therefore, it very much seems like no self-replicating unfriendly artificial intelligence has arisen anywhere in the galaxy in the (very roughly) 10 billion years since intelligent life could have arisen somewhere in the galaxy.
I think a more correct formulation would be:
It very much seems that nothing that both (a) desires limitless expansion and (b) is capable of limitless expansion through interstellar distances has arisen anywhere in our galaxy.
I am not quite sure that the application of terminology like AI (and, in particular, FAI or UFAI) to alien entities is meaningful.
Any new intelligence would have to arise from (something like) natural selection, as a useful trick in the competition for resources that everything from bacteria upwards has evolved to be good at. I fail to imagine any intelligent lifeform that wouldn’t want to expand.
Even though the product of natural selection can be assumed to be ‘fit’ with regard to its environment, there’s no reason to assume that it will consciously embody the values of natural selection. Consider: birth control.
In particular, expansion may be a good strategy for a species but not necessarily a good strategy for individuals of that species.
Consider: a predator (say, a bird of prey or a big cat) has no innate desire for expansion. All the animal wants is some predetermined territory for itself, and it will never enlarge this territory, because the territory provides all that it needs and policing a larger area would be a waste of effort. Expansion, in many species, is merely a group phenomenon: if the species is allowed to grow unchecked (fewer predators, larger food supply), it will expand simply by virtue of there being more individuals than there were before.
A similar situation can arise with a SAI. Let’s say a SAI emerges victorious from competition with other SAIs and its progenitor species. To eliminate competition it ruthlessly expands over its home planet and crushes all opposition. It’s entirely possible then that by conquering its little planet it has everything it needs (its utility function is maximized), and since there are no competitors around, it settles down, relaxes, and ceases expansion.
Even if the SAI were compelled to grow (by accumulating more computational resources), expansion isn’t guaranteed. Let’s say it figures out how to create a hypercomputer with unlimited computational capacity (using, say, a black hole). If this hypercomputer provides it with all its needs, there would be no reason to expand. Plus, communication over large distances is difficult, so expansion would actually have negative value.
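To make the “utility function is maximized” scenario a bit more concrete, here is a minimal toy sketch (the function, its arguments, and the cost constant are purely my own illustrative assumptions) of a satiable utility function under which controlling the home planet is already optimal and remote colonies only add coordination costs:

```python
# Toy sketch with illustrative assumptions: a "satiable" utility function that is
# already maximized once the home planet is fully controlled, so interstellar
# expansion has zero or even negative marginal value.

def sai_utility(home_planet_control: float, remote_star_systems: int) -> float:
    """home_planet_control is a fraction in [0, 1]; remote_star_systems is the
    number of colonized systems outside the home system."""
    base = min(home_planet_control, 1.0)             # saturates at 1.0
    coordination_cost = 0.01 * remote_star_systems   # light-lag makes remote assets a burden
    return base - coordination_cost

print(sai_utility(1.0, 0))    # 1.0 -- home planet conquered, utility already maximal
print(sai_utility(1.0, 100))  # 0.0 -- colonizing 100 extra systems only subtracts value
```

Under these (admittedly hand-picked) assumptions, every unit of expansion beyond the home planet is pure loss, which matches the “settles down and ceases expansion” outcome described above.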
It’s entirely possible then that by conquering its little planet [the AGI] has everything it needs (its utility function is maximized)
I don’t think it is possible. Even if the AI specifically does not care about the state of the rest of the world, the rest of the world would still be useful for instrumental reasons: as resources for computing more optimal actions to be performed on the original planet. Nor can the AI be certain that its values really do ignore the rest of the world; cleanly evaluating the properties of even minimally nontrivial goals seems hard. And even if, under its current understanding of the world, the meaning of its values is that it doesn’t care about the rest of the world, it might be wrong, perhaps in light of some future hypothetical discovery about fundamental physics. In that case it’s better to already have the rest of the world under control, ready to be optimized in the newly discovered direction (or, before that, to run those experiments).
Far too many things have to align for this to happen.
It is possible to have factors in one’s utility function which limit expansion.
For example, a utility function might involve “preservation in an untouched state”, something similar to what humans do when they declare a chunk of nature to be a protected wilderness.
Or a utility function might contain “observe development and change without influencing it”.
And, of course, if we’re willing to assume an immutable cast-in-stone utility function, why not assume that there are some immutable constraints which go with it?
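As a minimal sketch of what such a constraint might look like (the names, the penalty weight, and the overall structure are purely illustrative assumptions on my part):

```python
# Toy sketch with illustrative assumptions: a utility function with an explicit
# non-interference term, analogous to declaring everything outside the home
# system a protected wilderness. Disturbing the "preserve" is penalized, so
# expansion strictly lowers utility rather than being merely unrewarded.

def constrained_utility(home_goals_achieved: float,
                        systems_disturbed_outside_preserve: int) -> float:
    """home_goals_achieved is a fraction in [0, 1]; each disturbed system
    outside the preserve carries a large fixed penalty."""
    interference_penalty = 10.0 * systems_disturbed_outside_preserve
    return home_goals_achieved - interference_penalty

print(constrained_utility(1.0, 0))  #   1.0 -- preserve left untouched
print(constrained_utility(1.0, 3))  # -29.0 -- expansion is actively disfavored
```

The same shape would work for “observe development and change without influencing it”: simply make the penalty apply to any detectable influence rather than to colonization specifically.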
It’s definitely unlikely; I just brought it up as an example because chaosmage said “I fail to imagine any intelligent lifeform that wouldn’t want to expand.” There are plenty of lifeforms already that don’t want to expand, and I can imagine some (unlikely but not impossible) situations where a SAI wouldn’t want to expand either.