Not quite. Notice that the word “win” here is mapping onto a lot of different meanings: the one used in the grandparent and great-grandparent (unless I misunderstood it) is “the satisfaction of goals.” What one means by “goals” is not entirely clear: if I build a bacterium whose operation results in the construction of more bacteria, is it appropriate to claim it has “goals” in the same sense that a human has “goals”? A readily visible difference is that the human’s goals are accessible to introspection, whereas the bacterium’s aren’t, and whether or not that difference is material depends on what you want to use the word “goals” for.
The meaning for “win” that I’m inferring from the parent is “dominate,” which is different from “has goals and uses reason to perform better at fulfilling those goals.” One can imagine a setup in which an AI without explicit goals can defeat an AI with explicit goals. (The tautology is preserved because one can say afterwards that it was clearly irrational to have explicit goals, but I mostly wanted to point out another wrinkle that should be considered rather than knock down the tautology.)
Right; what I said doesn’t hold under all circumstances, and there are certainly criteria for “winning” other than domination.
What I meant was that as soon as you introduce an AI into the system that has domination as a goal or subgoal, it will tend to wipe out any other AIs that don’t have some kind of drive to win. If an AI can be persuaded to be indifferent about the future, then the dominating AI can exploit that indifference.