I’m not sure about that, actually; it seems implementation-dependent, and it’s certainly outside the current framework.
What you said would be true if RandomBot were truly random. In practice, RandomBot would probably be written using a PRNG, which is deterministic. In this game, both bots are given their opponent’s source code as input, and reasonably that should include the PRNG and its seed. Consequently, RandomBot would probably be treated in each round as either a CooperateBot or a DefectBot, whichever it happens to be at the time. (This might break down the conclusion: the bots don’t actually reason about long-term consequences; all of that is outside the code. They only deal with the program they are up against right now, and whether ‘today’ (in the current round) they will cooperate, etc.)

It might also mess with the proofs. PrudentBot could find that RandomBot is either CooperateBot or DefectBot this round, but if proving that RandomBot will cooperate with it is the same as proving that RandomBot’s first call to the PRNG returns an even number, then that proof is about something that (potentially) already happened. Likewise, proving ‘how it would treat CooperateBot’ could end up being a proof about what it would do if CooperateBot were its opponent next round. This is, however, a hypothetical implementation nitpick about how the programs might not perform as intended outside their original context, a context in which they faithfully implement an idea in a way that might not generalize without additional work.
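To make the worry concrete, here is a minimal Python sketch, with made-up names (random_bot, predictor_bot) and a simulation-based predictor rather than the actual Löbian proof search, so everything below is an assumption about one possible implementation rather than the paper’s setup. The point it illustrates: once the seed is part of the source, “RandomBot” this round is just CooperateBot or DefectBot to anyone who can read it.

```python
import random

# Hypothetical sketch: a "RandomBot" backed by a seeded PRNG is fully
# deterministic, so an opponent handed (source, seed) can predict its move.

def random_bot(opponent_source, seed, round_number):
    """Cooperate or defect based on a deterministic PRNG stream."""
    rng = random.Random(seed)
    # Advance the stream to this round's draw.
    for _ in range(round_number):
        rng.random()
    return "C" if rng.random() < 0.5 else "D"

def predictor_bot(opponent_source, opponent_seed, round_number):
    """Given the opponent's source and seed, just simulate it:
    this round, 'RandomBot' is effectively CooperateBot or DefectBot."""
    predicted = random_bot(None, opponent_seed, round_number)
    # Cooperate only when the opponent will (predictably) cooperate this round.
    return "C" if predicted == "C" else "D"

if __name__ == "__main__":
    for rnd in range(5):
        them = random_bot(None, seed=42, round_number=rnd)
        me = predictor_bot(None, opponent_seed=42, round_number=rnd)
        print(rnd, them, me)
```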
Introducing things this way would change the dynamics: a bot that changes how it behaves based on a variable that differs every round can run a more complicated strategy, one that exploits the costs of modeling, or (by using a counter) tries to grow to dominance in the population and then switch strategies.
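A minimal sketch of the “grow to dominance, then switch” idea, assuming a hypothetical counter_bot and assuming a round counter is handed to the bots, which the current framework does not do:

```python
# Hypothetical sketch: a bot whose strategy depends on a round counter that
# lives outside its source code. It plays nice while (presumably) spreading
# through the population, then switches to exploitation.

def counter_bot(opponent_source, round_number, switch_round=100):
    """Cooperate unconditionally for the first `switch_round` rounds,
    then defect unconditionally afterwards."""
    if round_number < switch_round:
        return "C"  # look like CooperateBot while building population share
    return "D"      # then behave like DefectBot

# To a proof-based opponent reading only the source, this bot's behaviour is
# not a fixed fact: it depends on round_number, which is not in the source.
```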
One might open up a similar wealth of possibilities by allowing the bots to know the current population, or the lineup of rounds (the tournament’s bracket), or by working out proofs that handle mixed strategies (there is a 99% chance my opponent will cooperate, and a 1% chance it will do something random, because it’s an ML algorithm that acts randomly epsilon = 1% of the time in order to learn, etc.).
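As a rough illustration of the mixed-strategy case (plain expected-payoff reasoning rather than actual proofs, with assumed one-shot PD payoffs T=5, R=3, P=1, S=0; all names here are hypothetical):

```python
# Hypothetical sketch: choosing a move against a mixed-strategy opponent by
# expected payoff, under assumed prisoner's-dilemma payoffs T=5, R=3, P=1, S=0.

def expected_payoff(my_move, p_opponent_cooperates, T=5, R=3, P=1, S=0):
    """Expected payoff of my_move against an opponent who cooperates
    with probability p_opponent_cooperates."""
    p = p_opponent_cooperates
    if my_move == "C":
        return p * R + (1 - p) * S
    return p * T + (1 - p) * P

def best_response(p_opponent_cooperates):
    return max(["C", "D"],
               key=lambda m: expected_payoff(m, p_opponent_cooperates))

if __name__ == "__main__":
    # e.g. an epsilon-greedy learner that cooperates 99% of the time
    print(best_response(0.99))  # "D": in a one-shot PD, defection dominates
    # for any cooperation probability, since T > R and P > S.
```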