Are you suggesting that an AGI that values anything at all is incapable of valuing the existence of other AGIs, or merely that this is sufficiently unlikely as to not be worth considering?
It can certainly value them, create them, cooperate and trade with them, and so on. There are, however, two cases in which such valuing and cooperation take second place.
First: an uFAI is just as unfriendly and scary to other AIs as to humans. An AI will therefore try to prevent other AIs from achieving dangerous power unless it is very sure of their current and future goals.
Second: an AI created by humans (plus or minus self-modifications) with an explicit value/goal system of the form “the universe should be THIS way”, will try to stop any and all agents that try to interfere with shaping the universe as it wishes. And the foremost danger in this category is—other AIs created in the same way but with different goals.
I’m a little confused by your response, and I suspect that I was unclear in my question.
I agree that an AI with an explicit value/goal system of the form “the universe should be THIS way”, will try to stop any and all agents that try to interfere with shaping the universe as it wishes (either by destroying them, or altering their goal structure, or securing their reliable cooperation, or something else).
But for an AI with the value “the universe should contain as many distinct intelligences as possible,” valuing and creating other AIs will presumably take first place.
That’s probably more efficiently done by destroying any other AIs that come along, while tiling the universe with slightly varying low-level intelligences.
I no longer know what the words “intelligence”, “AI”, and “AGI” actually refer to in this conversation, and I’m not even certain the referents are consistent, so let me taboo the whole lexical mess and try again.
For any X, if the existence of X interferes with an agent A achieving its goals, the better A is at optimizing its environment for its goals the less likely X is to exist.
For any X and A, the more optimizing power X can exert on its environment, the more likely it is that the existence of X interferes with A achieving its goals.
For any X, if A values the existence of X, the better A is at implementing its values the more likely X is to exist.
All of this is as true for X=intelligent beings as X=AI as X=AGI as X=pie.
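To make those claims a bit more concrete, here is a minimal toy simulation, with made-up numbers standing in for “optimizing power” and for whether A values or is hindered by X (it is only an illustration of the claims, not a model of any real agent):

```python
import random

def x_survives(a_power, a_values_x, x_interferes, trials=10_000):
    """Toy estimate of how often X ends up existing, given A's power and values.

    a_power      -- A's optimizing power, an arbitrary number in [0, 1]
    a_values_x   -- does A value the existence of X?
    x_interferes -- does the existence of X interfere with A's goals?
    All of these are placeholders; the probabilities below are invented.
    """
    count = 0
    for _ in range(trials):
        exists = random.random() < 0.5                     # baseline: X exists half the time
        if x_interferes:
            exists = exists and random.random() > a_power  # first claim: more power, less likely X exists
        if a_values_x:
            exists = exists or random.random() < a_power   # third claim: more power, more likely X exists
        count += exists
    return count / trials

for power in (0.1, 0.9):
    print(f"power={power}: "
          f"X that interferes: {x_survives(power, a_values_x=False, x_interferes=True):.2f}, "
          f"X that A values: {x_survives(power, a_values_x=True, x_interferes=False):.2f}")
```

When A both values X and is hindered by X, the two effects pull in opposite directions, and the toy model by itself does not say which one wins.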
As far as I can see, this is all true and agrees with everything you, I and thomblake have said.
Cool.
So it seems to follow that we agree that if agent A1 values the existence of distinct agents A2..An, it’s unclear how the likelihood of A2..An existing varies with the optimizing power available to A1..An. Yes?
Yes. Even if we know each agent’s optimizing power, and each agent’s estimation of each other agent’s power and ability to acquire greater power, the behavior of A1 still depends on its exact values (for instance, what else it values besides the existence of the others). It also depends on the values of the other agents (might they choose to initiate conflict among themselves or against A1?).
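As a sketch of that dependence, with weights pulled out of thin air (the numbers mean nothing in themselves; they only show the shape of the dependence): hold the powers and A1’s risk estimate fixed, and a crude expected-utility rule still flips its answer depending on how much A1’s values reward the others’ existence.

```python
def a1_leaves_others_running(value_of_others, p_conflict, cost_of_conflict):
    """Crude expected-utility rule for whether A1 tolerates agents A2..An.

    value_of_others  -- how much A1's values reward their continued existence
    p_conflict       -- A1's estimate that they will interfere with its goals
    cost_of_conflict -- how badly such interference hurts A1's goals
    All arguments are arbitrary nonnegative placeholders.
    """
    return value_of_others > p_conflict * cost_of_conflict

# Same estimated risk of conflict, different value mixes, opposite behavior:
print(a1_leaves_others_running(value_of_others=1.0, p_conflict=0.3, cost_of_conflict=2.0))  # True
print(a1_leaves_others_running(value_of_others=0.1, p_conflict=0.3, cost_of_conflict=2.0))  # False
```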
I tend to agree. Unless you have specific values to the contrary, other AIs of power comparable to your own (or which might grow into such power one day) are too dangerous to leave running around. If you value states of the external universe, and you happen to be the first powerful AGI built, it’s natural to try to become a singleton as a preventative measure.
I feel like a cost-benefit analysis has gone on here, the internals of which I’m not privy to.
Shouldn’t it be possible that becoming a singleton is expensive and/or would conflict with one’s values?
It’s certainly possible. My analysis so far is only on an “all else being equal” footing.
I do feel that, absent other data, the safer assumption is that if an AI is capable of becoming a singleton at all, expense (in terms of energy/matter and space or time) isn’t going to be the thing that stops it. But that may be just a cached thought because I’m used to thinking of an AI trying to become a singleton as a dangerous potential adversary. I would appreciate your insight.
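To put rough (and entirely invented) numbers on that intuition: the expected loss from a rival optimizer appearing later scales with everything the AI cares about, so it tends to swamp any one-time expense of securing singleton status.

```python
def worth_becoming_singleton(one_time_cost, p_rival, value_at_stake):
    """Crude comparison of a one-time expense against the expected loss from a rival.

    All three arguments are arbitrary placeholders in the same (made-up) units.
    """
    return p_rival * value_at_stake > one_time_cost

# Even a small chance of a rival dominates when the whole future is at stake:
print(worth_becoming_singleton(one_time_cost=1e6, p_rival=0.01, value_at_stake=1e12))  # True
```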
As for values, certainly conflicting values can exist, from ones that mention the subject directly (“don’t move everyone to a simulation in a way they don’t notice” would close one obvious route) to ones that impinge upon it in unexpected ways (“no first strike against aliens” becomes “oops, an alien-built paperclipper just ate Jupiter from the inside out”).