It’s entirely possible then that by conquering its little planet [the AGI] has everything it needs (its utility function is maximized)
I don’t think it is possible. Even if the AGI specifically doesn’t care about the state of the rest of the world, the rest of the world would still be useful for instrumental reasons: as a resource for computing more optimal actions to be performed on the original planet. And the conclusion that it doesn’t care about the rest of the world is itself unlikely to be held with certainty; cleanly evaluating properties of even minimally nontrivial goals seems hard. Even if, under its current understanding of the world, the meaning of its values is that it doesn’t care about the rest of the world, that understanding might be wrong, perhaps because of some future discovery about fundamental physics. In that case it’s better to already have the rest of the world under control, ready to be optimized in a newly discovered direction (or, before that, to run those experiments).
Far too many things have to align for this to happen.
It is possible to have factors in one’s utility function which limit the expansion.
For example, a utility function might involve “preservation in an untouched state”, something similar to what humans do when they declare a chunk of nature to be a protected wilderness.
Or a utility function might contain “observe development and change without influencing it”.
And, of course, if we’re willing to assume an immutable, cast-in-stone utility function, why not assume that there are some immutable constraints which go with it?
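For concreteness, here’s a toy sketch (my own illustration, with made-up state variables and weights) of a utility function whose terms penalize expansion along the lines of “preserve in an untouched state” and “observe without influencing”:

```python
# Toy illustration: a utility function with terms that actively penalize expansion.
# The state representation and the weights are invented for this example.

def utility(state):
    # Reward for how well the home planet is optimized.
    home_score = state["home_planet_optimization"]

    # "Preservation in an untouched state": heavy penalty for any
    # influence exerted outside the home planet.
    preservation_penalty = 100.0 * state["outside_influence"]

    # "Observe development and change without influencing it": small reward
    # for information gathered passively about the rest of the universe.
    observation_bonus = 0.1 * state["passive_observations"]

    return home_score - preservation_penalty + observation_bonus


# An agent maximizing this prefers a well-run home planet and passive
# telescopes over colonization, since any expansion shows up as
# outside_influence and is heavily penalized.
example_state = {
    "home_planet_optimization": 10.0,
    "outside_influence": 0.0,
    "passive_observations": 5.0,
}
print(utility(example_state))  # 10.5
```

Of course, this only helps if the function and its constraints really are cast in stone, as above.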
It’s definitely unlikely; I just brought it up as an example because chaosmage said, “I fail to imagine any intelligent lifeform that wouldn’t want to expand.” There are plenty of lifeforms already that don’t want to expand, and I can imagine some (unlikely but not impossible) situations where a SAI wouldn’t want to expand either.