I no longer know what the words “intelligence”, “AI”, and “AGI” actually refer to in this conversation, and I’m not even certain the referents are consistent, so let me taboo the whole lexical mess and try again.
For any X, if the existence of X interferes with an agent A achieving its goals, then the better A is at optimizing its environment for its goals, the less likely X is to exist.
For any X and A, the more optimizing power X can exert on its environment, the more likely it is that the existence of X interferes with A achieving its goals.
For any X, if A values the existence of X, then the better A is at implementing its values, the more likely X is to exist.
All of this is as true for X=intelligent beings as X=AI as X=AGI as X=pie.
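If it helps, here is a minimal sketch of how I’m picturing the first and third claims, in Python with an invented logistic form and made-up numbers; the monotone relationships are the point, not the particular function. (The second claim governs how likely the “interferes” case is in the first place as X’s own power grows, which this sketch doesn’t model.)

```python
# Toy model of the first and third claims above. The names and the logistic
# form are illustrative assumptions, not anything established in this thread.
import math

def p_exists(power_of_a: float, attitude: str) -> float:
    """Rough probability that X still exists, given A's optimizing power.

    attitude: how X relates to A's goals --
      "interferes": X's existence conflicts with A's goals
      "valued":     A values X's existence
      "neutral":    A is indifferent to X
    """
    if attitude == "neutral":
        return 0.5
    # Logistic curve: more optimizing power pushes the outcome further
    # toward whatever A's values say about X.
    push = 1.0 / (1.0 + math.exp(-power_of_a))
    return 1.0 - push if attitude == "interferes" else push

# The better A optimizes, the less likely an interfering X is to exist...
assert p_exists(5.0, "interferes") < p_exists(1.0, "interferes")
# ...and the more likely a valued X is to exist.
assert p_exists(5.0, "valued") > p_exists(1.0, "valued")
```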
As far as I can see, this is all true and agrees with everything you, I, and thomblake have said.
Cool.
So it seems to follow that we agree that if agent A1 values the existence of distinct agents A2..An, it’s unclear how the likelihood of A2..An existing varies with the optimizing power available to A1..An. Yes?
Yes. Even if we know each agent’s optimizing power, and each agent’s estimation of each other agent’s power and ability to acquire greater power, the behavior of A1 still depends on its exact values (for instance, what else it values besides the existence of the others). It also depends on the values of the other agents (might they choose to initiate conflict among themselves, or against A1?).
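To make the “depends on its exact values” point concrete, here is a deliberately crude sketch; everything in it (the survival rule, the flags, the numbers) is invented for illustration, not a claim about how real agents behave. The two runs give A1 identical optimizing power and differ only in whether it values the others, which is enough to flip the outcome:

```python
# Crude illustration: with identical optimizing power, whether A2..An
# persist hinges on what A1 (and the other agents) value.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    power: float
    values_others: bool       # does this agent value the others' existence?
    initiates_conflict: bool  # might it start a conflict?

def others_likely_survive(a1: Agent, others: list[Agent]) -> bool:
    """Toy rule: a dominant A1 preserves the others iff it values them;
    if someone else starts a conflict, A1 also has to be able to win it."""
    if any(o.initiates_conflict for o in others):
        return a1.values_others and a1.power > max(o.power for o in others)
    return a1.values_others or a1.power <= max(o.power for o in others)

peers = [Agent("A2", 1.0, True, False), Agent("A3", 1.0, True, False)]
friendly_a1 = Agent("A1", 10.0, values_others=True, initiates_conflict=False)
indifferent_a1 = Agent("A1", 10.0, values_others=False, initiates_conflict=False)

print(others_likely_survive(friendly_a1, peers))     # True
print(others_likely_survive(indifferent_a1, peers))  # False
```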