Let’s introduce a time limit. Say that after a maximum of S computation steps (counted in some standardized cost model) have passed, each player is forced to make a decision.
Now, write a program that is opaque to introspection: to find out what it decides (i.e., whether to COOPERATE or DEFECT), it must be simulated until it halts. This program could use cryptography or other obfuscation schemes (random numbers would be useful). Engineer it so that it takes exactly S steps to run to completion.
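Here is a minimal sketch of such a program, assuming a toy cost model in which one loop iteration counts as one “step”; the seed, the SHA-256 trick, and the value of S are all illustrative assumptions, not part of the original proposal.

```python
import hashlib

S = 10_000  # hypothetical shared step budget

def opaque_player(seed: bytes = b"illustrative-secret") -> str:
    # The decision is hidden behind a one-way hash: reading the source
    # tells you nothing short of inverting SHA-256; you have to run it.
    decision = "COOPERATE" if hashlib.sha256(seed).digest()[0] % 2 == 0 else "DEFECT"
    # Pad with busy-work so the program halts only at the S-step deadline,
    # leaving a simulator no slack to act on what it learns.
    for _ in range(S):
        pass
    return decision

print(opaque_player())
```

Under this toy model, a simulator that spends its whole budget running the program learns the answer exactly when its own deadline arrives.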
The simulating player does not have time to both simulate and interpret the results of its simulation.
Restricting all machines to the same time limit thus seems to reduce the efficacy of many (perhaps all) of these adversarial simulation strategies.
...and so defects, because it’s obvious what the other player intends.
More interestingly, what if the program being simulated has a really clever algorithm that just happens to take S steps to compute?
A program can’t be “clever” if, to other programs, it is indistinguishable from a permanent defector.
In the second scenario we can, at the cost of a little generality, rule out obfuscation by accepting only programs already organized as decision graphs (possible node types: throw dice and branch; simulate the opponent and branch; cooperate; defect). The problem stays meaningful because programs have no access to each other’s source and must resort to simulation. Maybe the problem thus formulated has a trivial solution, but I don’t see it yet.
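Here is a rough sketch of that decision-graph formulation. The semantics are my assumptions, not the original commenter’s: a shared step budget, one node visit costing one step, a Simulate node running the opponent against the simulator’s own root program, and an exhausted budget defaulting to DEFECT.

```python
import random
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class Cooperate:          # leaf: play C
    pass

@dataclass
class Defect:             # leaf: play D
    pass

@dataclass
class Dice:               # throw dice and branch
    p: float              # probability of taking the `heads` branch
    heads: "Node"
    tails: "Node"

@dataclass
class Simulate:           # simulate the opponent and branch on its move
    if_c: "Node"
    if_d: "Node"

Node = Union[Cooperate, Defect, Dice, Simulate]

def run(me: Node, opp: Node, budget: int) -> Tuple[str, int]:
    """Evaluate `me` against `opp`, spending at most `budget` steps."""
    node = me
    while budget > 0:
        budget -= 1
        if isinstance(node, Cooperate):
            return "C", budget
        if isinstance(node, Defect):
            return "D", budget
        if isinstance(node, Dice):
            node = node.heads if random.random() < node.p else node.tails
        else:  # Simulate: the nested run draws down the same budget
            move, budget = run(opp, me, budget)
            node = node.if_c if move == "C" else node.if_d
    return "D", 0  # budget exhausted: forced to decide (assumed DEFECT)

# A cautious bot: cooperate iff my simulation of you cooperates.
cautious = Simulate(if_c=Cooperate(), if_d=Defect())
print(run(cautious, Defect(), 100)[0])    # vs. a defector -> "D"
print(run(cautious, cautious, 100)[0])    # mutual simulation exhausts the budget -> "D"
```

Note that two copies of the cautious bot burn the entire budget simulating each other and fall through to the forced default, which illustrates how a shared time limit blunts simulation-based strategies.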