I’m sad that some people thought that anything with more than two parentheses was too high-risk to simulate or examine. I thought the obvious check was whether the code ever loads code or uses a random function; if it does neither, you should be safe.
We’ll see how many people submitted simulators braver than mine, but timid simulators seem like a natural consequence of two things: rules that allow a bot to nuke its opponent if it finds out it’s in a simulation, and a common enough perception that simulators have a large enough advantage that they ought to be eliminated.
Static analysis is not very useful if the opponent’s code is at all obfuscated, which it likely is if your opponent is looking to nuke simulators. Does your static analysis catch getattr(__builtins__, 'e' + 'x' + 'e' + 'c')(base64.b64decode(God knows what))? Or any of the dozens of other ways to do something like that?
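To make this concrete, here is a sketch (the naive_scan checker and the BANNED word list are hypothetical, just for illustration) of why substring-level static analysis fails against even trivial obfuscation: building the string "exec" character by character means the dangerous name never appears literally in the source text.

```python
# Hypothetical naive static check: flag source that contains a
# dangerous name as a literal substring.
BANNED = ("exec", "eval", "compile", "import")

def naive_scan(source: str) -> bool:
    """Return True if any banned word appears literally in the source."""
    return any(word in source for word in BANNED)

# Obfuscated payload: reconstructs "exec" at runtime, so no banned
# substring appears anywhere in the source text itself.
obfuscated = "getattr(__builtins__, 'e' + 'x' + 'e' + 'c')(payload)"

print(naive_scan("exec(payload)"))  # the direct call is caught
print(naive_scan(obfuscated))       # the obfuscated call slips through
```

The same trick works with chr() arithmetic, string reversal, or base64, which is why a simulator can’t rely on scanning for a fixed list of scary tokens.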
The tournament might look significantly different if the rules were slanted in the simulator’s favor, maybe if you just had to avoid infinite simulation loops and keep runtime reasonable, and the worst the opponent was allowed to do if they found they were in a simulation was to play BullyBot in order to extort you or play randomly to make the simulation useless. The iterated prisoner’s dilemma with shared source code tournament a few years ago had a lot of simulators, so I assume their rules were more friendly to simulators.
I do think it would be hard to obfuscate in a way that wasn’t fairly easy to detect as obfuscation. Throw out anything that uses import, any variable containing __, or a handful of builtin functions, and you should be good. (There’s only a smallish list of builtins; I couldn’t confidently say which ones to blacklist right now, but I do think someone could figure out a safe list without too much trouble.) In fact, I can’t offhand think of any reason a simple bot would use strings except docstrings, so maybe throw out anything with string literals, too.
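A minimal sketch of that filter, using Python’s ast module (the particular blacklist here is my guess, not a vetted safe list, and for brevity it rejects all string literals rather than carving out a docstring exception):

```python
import ast

# Hypothetical blacklist of suspicious builtins; a real safe list
# would need more careful thought, as noted above.
BANNED_NAMES = {"exec", "eval", "compile", "getattr", "open", "__import__"}

def looks_safe(source: str) -> bool:
    """Reject a bot if it imports, touches dunder names, calls a
    blacklisted builtin, or contains any string literal."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Name) and (
            "__" in node.id or node.id in BANNED_NAMES
        ):
            return False
        if isinstance(node, ast.Attribute) and "__" in node.attr:
            return False
        # Strings built up character by character are still string
        # literals at the AST level, so this catches 'e' + 'x' + ...
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            return False
    return True

print(looks_safe("def move(history): return 2"))                  # accepted
print(looks_safe("import os"))                                    # rejected
print(looks_safe("getattr(__builtins__, 'e' + 'x' + 'e' + 'c')")) # rejected
```

Working on the parsed AST rather than the raw text is what makes this stricter than substring scanning: the obfuscated getattr trick fails three separate checks here (the getattr name, the __builtins__ dunder, and the string literals).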
(Of course my “5% a CloneBot manages to act out” was wrong, so take that for what it’s worth.)
The iterated prisoner’s dilemma with shared source code tournament a few years ago had a lot of simulators, so I assume their rules were more friendly to simulators.
I know of two such—one (results—DMRB was mine) in Haskell where you could simulate but not see source, and an earlier one (results) in Scheme where you could see source.
I think in the Haskell one it would have been hard to figure out you were being simulated. I’m not sure about the Scheme one.