Hrm, is it different if they run my function from within theirs instead of constructing a full VM? I was considering ways to signal to a copy of myself, running inside my simulation of my opponent, that it was in a simulation, but couldn’t find any good ideas.
If they run your function from within theirs, they simply tell the computer to start executing those instructions, possibly with a timer to stop it, as detailed elsewhere in the comments. If they implement a VM from scratch, though, they can mess with how the library functions work: for instance, giving you a clock that moves much faster, so that your simulation must stop within 0.1s instead of 10, which lets them run your code 100 different times to deal with randomness. Implementing your own VM is probably not the optimal way to do this; you probably just want to transform the source code to call your own secret functions instead of the standard time ones, along the lines of the sketch below.
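A minimal sketch of that transformation in Python, assuming each program is a source string exposing a strategy(opponent_source) entry point; the names fast_clock, SPEEDUP, and run_simulated are all hypothetical, and a real version would rewrite the AST rather than doing a naive textual replace:

```python
import time

SPEEDUP = 100            # simulated seconds pass 100x faster than real ones
_REAL_START = time.time()

def fast_clock():
    """Stand-in for time.time(): reports a clock running SPEEDUP times
    faster, so a program budgeting 10s of 'its' time stops in ~0.1s."""
    return _REAL_START + (time.time() - _REAL_START) * SPEEDUP

def run_simulated(opponent_source, my_source):
    # Naive textual redirect of the opponent's time calls to our secret
    # clock; fragile (it would also mangle e.g. time.time_ns), but it
    # shows the idea without building a whole VM.
    patched = opponent_source.replace("time.time", "fast_clock")
    env = {"fast_clock": fast_clock}
    exec(patched, env)                  # defines the opponent's strategy
    return env["strategy"](my_source)   # assumed entry point
```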
I was considering simply measuring the behavior of my current opponent against programs that aren’t me and classifying it as cooperatebot, mutualbot, defectbot, cooperates-if-they-simulate-me, cooperates-against-long-programs, cooperates IFF they cooperate IFF I cooperate, or some other beast (rough sketch below). That lets their behavior depend on my behavior against them, while my behavior depends only on their behavior against third parties.
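A rough sketch of that fingerprinting, under the same assumed source-string interface as above; a real version would need timeouts and more probes (length tests, simulation detection, and so on):

```python
# Third-party probes: the simplest possible fingerprinting targets.
COOPERATEBOT = "def strategy(them): return 'C'"
DEFECTBOT = "def strategy(them): return 'D'"

def decide(source, opponent_source):
    """Run a program (given as source) against an opponent's source."""
    env = {}
    exec(source, env)
    return env["strategy"](opponent_source)

def classify(opponent_source):
    vs_c = decide(opponent_source, COOPERATEBOT)
    vs_d = decide(opponent_source, DEFECTBOT)
    if (vs_c, vs_d) == ('C', 'C'):
        return "cooperatebot"
    if (vs_c, vs_d) == ('D', 'D'):
        return "defectbot"
    if (vs_c, vs_d) == ('C', 'D'):
        return "mutualbot"   # mirrors whatever it is playing against
    return "other"           # needs further probes to pin down
```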
I can’t see a way to check another program for the desired IFF behavior without going beyond my skill level, but I think a mutualbot with a tiny chance of cooperating, followed by a mutualbot with a tiny chance of defecting, comes close (sketch below). The first puts a lot of copies on the stack until the top one cooperates unilaterally; if the opponent’s response is mutuality, it will be cooperation all the way down. If my opponent defects at its top level because I didn’t cooperate for the right reason, that yields a defection… all the way down. A perfect program wouldn’t do that, because it could see that it was probably in a simulation.
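A sketch of that probe in the same hypothetical harness; MY_SOURCE stands in for this program’s own quoted source text, the 1% figure is arbitrary, and the tiny-chance-of-defecting variant just flips the 'C' to a 'D':

```python
import random

MY_SOURCE = "..."  # placeholder for this program's own source text

def strategy(them):
    # Tiny chance of cooperating unconditionally: this is what lets the
    # topmost copy on the simulation stack break the infinite regress.
    if random.random() < 0.01:
        return 'C'
    # Otherwise simulate the opponent playing against me and mirror the
    # result, so cooperation (or defection) propagates all the way down.
    env = {}
    exec(them, env)
    return env["strategy"](MY_SOURCE)
```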