You’re right that there’s something weird going on with fixed points and determinism: both agents are just algorithms, and in some sense there is already a mathematical fact of the matter about what each of them outputs. The problem is that neither of them knows this in advance (exactly because of the non-terminating computation problem), and so, while still reasoning about which action to output, they are logically uncertain about what they and the other will output.
If an agent believes that the other’s action is completely independent of their own, then surely no commitment race will ensue. But say, for example, they believe that their taking action A makes it more likely the other takes action B. This belief could be justified in a number of different ways: because they believe the other is perfectly simulating them, because they believe the other is imperfectly simulating them (and notice that both agents can imperfectly simulate each other, and consider this to give them better-than-chance knowledge about the other), because they believe they can influence the truth of some mathematical statements (EDT-like) that the other will think about, etc.
And furthermore, this doesn’t apply only to the final actions they choose: it can also apply to the mental moves they perform before arriving at those actions. For example, maybe an agent places a high enough probability on “the other will just simulate me and best-respond” (and so concludes it should just be aggressive). But an agent could also go one level higher and think “if I simulate the other, they will probably notice (for example, by coarsely simulating me, or by noticing some properties of my code) and be aggressive. So I won’t do that (and then it’s less likely they’re aggressive).”
Another way to put all this is that one of them can go “first” in logical time (at the cost of having thought less about the details of their strategy).
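To make that last point concrete, here’s a toy Python sketch (entirely my own illustration: the Chicken-style payoffs and the “deliberation depth” parameters are made up, and real agents obviously don’t work like this). Each agent can only simulate the other down to some depth; when it runs out of depth it stops deliberating and just commits to the aggressive action. The shallower agent thereby goes “first” in logical time, and the deeper agent, which accurately simulates it, ends up best-responding by caving.

```python
# Toy model of "going first in logical time" in a Chicken-like game.
# (my_action, their_action) -> my payoff; DARE vs DARE is the mutual disaster.
PAYOFF = {
    ("DARE", "SWERVE"): 3, ("SWERVE", "SWERVE"): 2,
    ("SWERVE", "DARE"): 1, ("DARE", "DARE"): 0,
}

def act(my_depth: int, their_depth: int) -> str:
    """Action of an agent that can spend `my_depth` levels of deliberation
    simulating an opponent whose own deliberation depth is `their_depth`."""
    if my_depth == 0:
        # Out of deliberation budget: stop thinking and commit aggressively.
        return "DARE"
    # Coarsely simulate the opponent; inside my simulation, they get one
    # less level of depth for simulating me back (so this always terminates).
    their_action = act(their_depth, my_depth - 1)
    # Best-respond to whatever the simulation says they will do.
    return max(("DARE", "SWERVE"), key=lambda a: PAYOFF[(a, their_action)])

# Agent A deliberates very little; agent B deliberates a lot.
a_action = act(1, 3)  # -> "DARE": A's shallow deliberation bottoms out in a commitment
b_action = act(3, 1)  # -> "SWERVE": B's deeper simulation predicts that commitment, so B caves
print(a_action, b_action)  # DARE SWERVE -- the shallower agent comes out ahead
```

Note that in this sketch both simulations turn out to be accurate (A’s model of B predicts SWERVE and B does swerve; B’s model of A predicts DARE and A does dare), which is the eerie fixed-point flavour from the first paragraph: each agent’s prediction of the other comes true, and the agent that thought less still comes out ahead. The asymmetric depth bookkeeping is just a crude way of letting both agents imperfectly simulate each other without infinite regress.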
Of course, we have some reasons to think the priors needed for the above to happen are especially wacky, and therefore unlikely. But again, one worry is that this could happen pretty early on, while the AGI still has such wacky and unjustified beliefs.